paper_id: string (lengths 19–21)
paper_title: string (lengths 8–170)
paper_abstract: string (lengths 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (lengths 29–10k)
label: string (3 classes)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
iclr_2018_rkr1UDeC-
Large scale distributed neural network training through online distillation
Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing 6×10¹¹ tokens and based on the Common Crawl repository of web data.
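The codistillation objective described in this abstract can be made concrete with a short sketch. The following is a minimal illustration in PyTorch, not the authors' code: the function name, the weighting factor alpha, and the choice of KL divergence as the agreement term are all assumptions for the example; the key ingredients from the abstract are the per-model task loss plus an agreement term computed against a stale, non-differentiated copy of the other model's predictions.

```python
# Illustrative sketch of a codistillation loss (assumed names and weighting).
import torch.nn.functional as F

def codistillation_loss(logits, targets, stale_peer_logits, alpha=0.5):
    """Own task loss plus agreement with a stale copy of the peer model."""
    ce = F.cross_entropy(logits, targets)
    # The peer's predictions come from an occasionally refreshed checkpoint,
    # so they are treated as constants and no gradient flows through them.
    agree = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(stale_peer_logits.detach(), dim=-1),
        reduction="batchmean",
    )
    return ce + alpha * agree
```

Each of the two workers would apply this loss on its own disjoint data shard, exchanging checkpoints only rarely, which is why the scheme tolerates stale predictions.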
accepted-poster-papers
meta score: 7 The paper introduces an online distillation technique to parallelise large-scale training. Although the basic idea is not novel, the presented experimentation indicates that the authors have made the technique work. Thus, this paper should be of interest to practitioners. Pros: - clearly written, the approach is well-explained - good experimentation on large-scale common crawl data with 128-256 GPUs - strong experimental results Cons: - the idea itself is not novel - the range of experimentation could be wider (e.g. different numbers of GPUs) but this is expensive! Overall the novelty is in making this approach work well in practice, and demonstrating it experimentally.
train
[ "SJ7PzWDeM", "SyOiDTtef", "Bk09mAnlG", "rkmWh7jMG", "SkVb6QsMM", "B1Hj5YOMG", "HyFBYTVZf", "HyCju6EZz", "H1e5ETNWM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper provides a very original & promising method to scale distributed training beyond the current limits of mini-batch stochastic gradient descent. As authors point out, scaling distributed stochastic gradient descent to more workers typically requires larger batch sizes in order to fully utilize computational resource, and increasing the batch size has a diminishing return. This is clearly a very important problem, as it is a major blocker for current machine learning models to scale beyond the size of models and datasets we currently use. Authors propose to use distillation as a mechanism of communication between workers, which is attractive because prediction scores are more compact than model parameters, model-agnostic, and can be considered to be more robust to out-of-sync differences. This is a simple and sensible idea, and empirical experiments convincingly demonstrate the advantage of the method in large scale distributed training.\n\nI would encourage authors to experiment in broader settings, in order to demonstrate that the general applicability of the proposed method, and also to help readers better understand its limitations. Authors only provide a single positive data point; that co-distillation was useful in scaling up from 128 GPUs to 258 GPUs, for the particular language modeling problem (commoncrawl) which others have not previously studied. In order for other researchers who work on different problems and different system infrastructure to judge whether this method will be useful for them, however, they need to understand better when codistillation succeeds and when it fails. It will be more useful to provide experiments with smaller and (if possible) larger number of GPUs (16, 32, 64, and 512?, 1024?), so that we can more clearly understand how useful this method is under the regime mini-batch stochastic gradient continues to scale. Also, more diversity of models would also help understanding robustness of this method to the model. Why not consider ImageNet? Goyal et al reports that it took an hour for them to train ResNet on ImageNet with 256 GPUs, and authors may demonstrate it can be trained faster.\n\nFurthermore, authors briefly mention that staleness of parameters up to tens of thousands of updates did not have any adverse effect, but it would good to know how the learning curve behaves as a function of this delay. Knowing how much delay we can tolerate will motivate us to design different methods of communication between teacher and student models.", "The paper proposes an online distillation method, called co-distillation, where the two different models are trained to match the predictions of other model in addition to minimizing its own loss. The proposed method is applied to two large-scale datasets and showed to perform better than other baselines such as label smoothing, and the standard ensemble. \n\nThe paper is clearly written and was easy to understand. My major concern is the significance and originality of the proposed method. As written by the authors, the main contribution of the paper is to apply the codistillation method, which is pretty similar to Zhang et. al (2017), at scale. But, because from Zhang's method, I don't see any significant difficulty in applying to large-scale problems, I'm not sure that this can be a significant contribution. 
Rather, I think, it would have been better for the authors to apply the proposed methods to a smaller scale problems as well in order to explore more various aspects of the proposed methods including the effects of number of different models. In this sense, it is also a limitation that the authors showing experiments where only two models are codistillated. Usually, ensemble becomes stronger as the number of model increases.\n", "Although I am not an expert on this area, but this paper clearly explains their contribution and provides enough evidences to prove their results.\nOnline distillation technique is introduced to accelerate traditional algorithms for large-scale distributed NN training.\nCould the authors add more results on the CNN ?", "We have added the CNN results on ImageNet and (brief) results on CIFAR-100 to the paper.\nThey can now be found in sections 3.1, 3.3.1, and 3.4.1 as well as figure 3.", "We were able to add the ImageNet experiments you suggested, as well as experiments on the staleness of predictions.\n\nImageNet results can be found in sections 3.1, 3.3.1, and 3.4.1 as well as figure 3 of the latest version.\n\nSection 3.4 and figure 4 now cover the prediction staleness issue.\n", "In response to reviewer feedback, we have improved the manuscript and at this point we believe the new version addresses all reviewer concerns.\n\nWe have added results on ImageNet that show that codistillation works there as well, even though it is a very different problem from language modeling. We achieve a state of the art number of steps to reach 75% accuracy on ImageNet.\n\nWe also reran experiments from Zhang et al. on CIFAR-100 and show that online and offline distillation actually produce the same accuracy, when offline distillation is done correctly, contrary to what their table shows. These results support our claim that our paper is the first to show the true benefits of online distillation.\n\nWe have also added delay sensitivity experiments on Common Crawl.\n", "Thank you for your review. We agree that online distillation is quite straightforward to come up with, in fact Geoff Hinton described the idea in a 2014 talk (https://www.youtube.com/watch?v=EK61htlw8hY). However, he told us that he did not publish the idea in a paper because numerous subsequent experiments showed that it did not outperform distilling an ensemble into a new model. Appreciating the real practical benefit of codistillation (extra parallelism without needing a subsequent distillation phase) over offline distillation and demonstrating it at scale is far from trivial because it is essential to first exhaust simpler forms of parallelism. For instance, just from reading Zhang et al., no one would want to use online distillation because the authors do not claim any training time improvement or other benefit beyond a small and dubious quality improvement. Since the submission deadline, we have investigated and reproduced some of Zhang et al.'s experiments and we now believe that overfitting in the teacher model mostly explains the worse performance of regular, offline distillation that they report. As far as we know, Zhang et al. is an unreviewed manuscript draft that, unlike our work, does not provide clear evidence for the benefits of online distillation.\n\nOur contribution is to articulate and demonstrate the practical value of online distillation algorithms, including benefits to reproducibility and training speed. We demonstrate these benefits at scale, in a hopefully convincing fashion. 
If our paper did not exist, it might be many years before people tried these algorithms again, because reading Zhang et al. alone makes it seem like a modest quality improvement is the only benefit (a quality improvement that becomes even smaller when the teacher model in offline distillation does not overfit). \n", "Thank you for your review. Were there any specific results you think would be particularly useful? We would like to add some results with CNNs on ImageNet, but these are somewhat expensive, so we will update the thread if we can get them done to our satisfaction.\n\nWe have some CIFAR results with CNNs we plan to add to help explain what we said in the manuscript about Zhang et al.'s results.\n", "Thank you for your review. We have demonstrated large improvements on many non-public datasets that we were unable to include, so your point is well taken. We will try to add some ImageNet results. If we can get them done in time, we will update the paper and this discussion. We agree that these would greatly strengthen the paper.\n\nUnfortunately, we will not be able to move to a larger number of GPUs during the review period. However, given the limits of the baselines, it would probably not be much better to use more GPUs than we did in the CommonCrawl experiments.\n\nWe will run additional experiments with different checkpoint exchange intervals to investigate the question about delay sensitivity in more depth.\n" ]
[ 8, 4, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkr1UDeC-", "iclr_2018_rkr1UDeC-", "iclr_2018_rkr1UDeC-", "HyCju6EZz", "SJ7PzWDeM", "iclr_2018_rkr1UDeC-", "SyOiDTtef", "Bk09mAnlG", "SJ7PzWDeM" ]
iclr_2018_BJ0hF1Z0b
Learning Differentially Private Recurrent Language Models
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than decreased utility, as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
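To make the mechanism in this abstract concrete, here is a minimal sketch of one noised, clipped federated-averaging round. It is an illustration under assumptions, not the paper's algorithm: the names, the uniform (unweighted) average, and the exact noise calibration are simplifications, and the real method additionally tracks cumulative privacy loss with a moments accountant.

```python
# Illustrative DP federated-averaging round: clip per-user deltas, average,
# add Gaussian noise. Simplified; not the paper's exact procedure.
import numpy as np

def private_federated_round(user_deltas, clip_norm, noise_multiplier):
    clipped = []
    for delta in user_deltas:
        norm = np.linalg.norm(delta)
        # Scale down any update whose L2 norm exceeds clip_norm so that one
        # user's influence on the average is bounded (the sensitivity).
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_delta = np.mean(clipped, axis=0)
    # Noise stddev is proportional to the per-round sensitivity of the mean.
    sigma = noise_multiplier * clip_norm / len(user_deltas)
    return mean_delta + np.random.normal(0.0, sigma, size=mean_delta.shape)
```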
accepted-poster-papers
This paper uses known methods for learning differentially private models, applies them to the task of learning a language model, and finds that accuracy is maintained on large datasets. Reviewers found the method convincing and original, saying it was "interesting and very important to the machine learning ... community", and that in terms of results it was a "very strong empirical paper, with experiments comparable to industrial scale". There were some complaints as to the clarity of the work, with requests for clearer explanations of the methods used.
train
[ "rJG5vkH4z", "BJ1XIR_ef", "Bkg5_kcxG", "ryImKM5lG", "HJRVC6-Gz", "BkMETpWMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "\n1. Noise\n\nThanks for the reference. It might indeed be an LSTM issue!\n\n2. Clipping\n\nOh right, I didn't thought about the bias introduces, that is a good point!\n\n3. Optimizers\n\"Certainly an interesting direction, but beyond the scope of the current work.\"\n\nIndeed!\n", "Summary: The paper provides the first evidence of effectively training large RNN based language models under the constraint of differential privacy. The paper focuses on the user-level privacy setting, where the complete contribution of a single user is protected as opposed to protecting a single training example. The algorithm is based on the Federated Averaging and Federated Stochastic gradient framework.\n\nPositive aspects of the paper: The paper is a very strong empirical paper, with experiments comparable to industrial scale. The paper uses the right composition tools like moments accountant to get strong privacy guarantees. The main technical ideas in the paper seem to be i) bounding the sensitivity for weighted average queries, and ii) clipping strategies for the gradient parameters, in order to control the norm. Both these contributions are important in the effectiveness of the overall algorithm.\n\nConcern: The paper seems to be focused on demonstrating the effectiveness of previous approaches to the setting of language models. I did not find strong algorithmic ideas in the paper. I found the paper to be lacking in that respect. ", "\nSummary of the paper\n-------------------------------\n\nThe authors propose to add 4 elements to the 'FederatedAveraging' algorithm to provide a user-level differential privacy guarantee. The impact of those 4 elements on the model'a accuracy and privacy is then carefully analysed.\n\nClarity, Significance and Correctness\n--------------------------------------------------\n\nClarity: Excellent\n\nSignificance: I'm not familiar with the literature of differential privacy, so I'll let more knowledgeable reviewers evaluate this point.\n\nCorrectness: The paper is technically correct.\n\nQuestions\n--------------\n\n1. Figure 1: Adding some noise to the updates could be view as some form of regularization, so I have trouble understand why the models with noise are less efficient than the baseline.\n2. Clipping is supposed to help with the exploding gradients problem. Do you have an idea why a low threshold hurts the performances? Is it because it reduces the amplitude of the updates (and thus simply slows down the training)?\n3. Is your method compatible with other optimizers, such as RMSprop or ADAM (which are commonly used to train RNNs)?\n\nPros\n------\n\n1. Nice extensions to FederatedAveraging to provide privacy guarantee.\n2. Strong experimental setup that analyses in details the proposed extensions.\n3. Experiments performed on public datasets.\n\nCons\n-------\n\nNone\n\nTypos\n--------\n\n1. Section 2, paragraph 3 : \"is given in Figure 1\" -> \"is given in Algorithm 1\"\n\nNote\n-------\n\nSince I'm not familiar with the differential privacy literature, I'm flexible with my evaluation based on what other reviewers with more expertise have to say.", "This paper extends the previous results on differentially private SGD to user-level differentially private recurrent language models. 
It experimentally shows that the proposed differentially private LSTM achieves comparable utility compared to the non-private model.\n\nThe idea of training differentially private neural network is interesting and very important to the machine learning + differential privacy community. This work makes a pretty significant contribution to such topic. It adapts techniques from some previous work to address the difficulties in training language model and providing user-level privacy. The experiment shows good privacy and utility.\n\nThe presentation of the paper can be improved a bit. For example, it might be better to have a preliminary section before Section2 introducing the original differentially private SGD algorithm with clipping, the original FedAvg and FedSGD, and moments accountant as well as privacy amplification; otherwise, it can be pretty difficult for readers who are not familiar with those concepts to fully understand the paper. Such introduction can also help readers understand the difficulty of adapting the original algorithms and appreciate the contributions of this work.\n", "We thank the reviewer for the thoughtful review and good questions, which we address below:\n\n1. Figure 1: Adding some noise to the updates could be view as some form of regularization, so I have trouble understand why the models with noise are less efficient than the baseline.\n\nIndeed, we were hoping to see some regularization benefit from noise, but there does not appear to be a significant effect, at least for these models. In Figure 3, which isolates the noise addition, we do see a slight improvement with a modest amount of noise early in training (blue line, noise around 0.012), but otherwise the Gaussian noise we add does not appear to help. We did not do training set evaluation on these models, it is possible (and likely based on results from \"Deep learning with differential privacy\", Figs. 3 and 6) that the addition of noise decreases the gap between test and training accuracy. Other work has also observed that adding noise may not work well as a regularizer for LSTMs, see the \"negative results\" paragraph in Sec 4 of https://openreview.net/pdf?id=rkjZ2Pcxe \n\n2. Clipping is supposed to help with the exploding gradients problem. Do you have an idea why a low threshold hurts the performances? Is it because it reduces the amplitude of the updates (and thus simply slows down the training)?\n\nThis is an important direction for future work, but we have some preliminary thoughts. First, to clarify, note we are clipping each user's update before averaging across users, whereas traditional clipping is applied to a single minibatch update after averaging over examples, and so it is possible that these two types of clipping behave differently. \n\nWe suspect two primary reasons for the drop in performance with over-aggressive clipping: (1) reduction in the amplitude of the updates, as you suggest; and (2) clipping introduces bias into the way updates from different users are weighted, essentially changing the loss function being optimized. Some preliminary subsequent experiments indicate that both effects are significant, and that the effect of (1) can be somewhat offset by rescaling the updates on the server. Nevertheless, we emphasize that our primary result is that despite these effects, it is possible to set the clipping parameter large enough that we can still train high-accuracy models.\n\n3. 
Is your method compatible with other optimizers, such as RMSprop or ADAM (which are commonly used to train RNNs)?\n\nThere are multiple ways these optimizers could be extended to the federated setting. Either algorithm could be applied locally on each client (that is, inside UserUpdateFedAvg) to compute the update, and our approach would work without modification. Running these algorithms across clients while combining them with the additional local computation done by FederatedAverging (which we found to be important for achieving DP) would essentially mean designing a new optimization procedure --- certainly an interesting direction, but beyond the scope of the current work.\n", "We thank the reviewer for the thoughtful review, and will attempt to improve the presentation in the final version. While it will be difficult to fit a complete introduction of all the topics mentioned into the page limit, we will add additional coverage of this material." ]
[ -1, 7, 7, 8, -1, -1 ]
[ -1, 4, 2, 4, -1, -1 ]
[ "HJRVC6-Gz", "iclr_2018_BJ0hF1Z0b", "iclr_2018_BJ0hF1Z0b", "iclr_2018_BJ0hF1Z0b", "Bkg5_kcxG", "ryImKM5lG" ]
iclr_2018_SJ-C6JbRW
Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent
Contrary to most natural language processing research, which makes use of static datasets, humans learn language interactively, grounded in an environment. In this work we propose an interactive learning procedure called Mechanical Turker Descent (MTD) that trains agents to execute natural language commands grounded in a fantasy text adventure game. In MTD, Turkers compete to train better agents in the short term, and collaborate by sharing their agents' skills in the long term. This results in a gamified, engaging experience for the Turkers and a better quality teaching signal for the agents compared to static datasets, as the Turkers naturally adapt the training data to the agent's abilities.
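The round structure described in this abstract can be summarized in a small schematic. The toy code below is an assumption-laden stand-in, not the paper's implementation: "training" a model is reduced to remembering examples and "evaluation" to membership counting, purely so the competitive cross-evaluation and the collaborative pooling steps are visible end to end.

```python
# Toy schematic of one MTD round: per-worker training, cross-evaluation for
# the competitive bonus, then pooling for the shared model. Illustrative only.
def train(model, data):                  # stand-in: a "model" is the set of
    return model | {x for x, _ in data}  # (command, action) examples seen

def evaluate(model, data):               # stand-in accuracy
    return sum(x in model for x, _ in data) / max(len(data), 1)

def mtd_round(shared_model, worker_datasets):
    scores = {}
    for w, own_data in worker_datasets.items():
        model_w = train(set(shared_model), own_data)
        # Score each worker's model on everyone else's data; the top scores
        # would earn the cash bonus (the competitive, short-term incentive).
        held_out = [ex for v, d in worker_datasets.items() if v != w for ex in d]
        scores[w] = evaluate(model_w, held_out)
    # All data is merged into the next shared model (the collaborative,
    # long-term skill sharing).
    pooled = [ex for d in worker_datasets.values() for ex in d]
    return train(set(shared_model), pooled), scores
```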
accepted-poster-papers
This paper provides a game-based interface in which Turkers compete to provide data for a learning task over multiple rounds. Reviewers found the work interesting and clearly written, saying "the paper is easy to follow and the evaluation is meaningful." They also note that there is a clear empirical benefit: "the results seem to suggest that MTD provides an improvement over non-HITL methods." They also like the task compared to synthetic grounding experiments. There was some concern about the methodology of the experiments, but the authors provide reasonable explanations and clarification. One final concern that I hope the readers take into account: while the reviewers were convinced by the work and did not require it, I feel that the work does not engage enough with the literature on crowdsourcing in other disciplines. While there are likely some unique aspects to ML use of crowdsourcing, there are many papers about encouraging crowd-workers to produce more useful data.
train
[ "r14hglcez", "ByLXrM9eG", "SyXWKhaxM", "rJZaKFhXG", "SkRXUrXMf", "HkGz8Bmff", "BJxl8BQfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose a framework for interactive language learning, called Mechanical Turker Descent (MTD). Over multiple iterations, Turkers provide training examples for a language grounding task, and they are incentivized to provide new training examples that quickly improve generalization. The framework is straightforward and makes few assumptions about the task, making it applicable to potentially more than grounded language. Unlike recent works on \"grounded language\" using synthetic templates, this work operates over real language while maintaining interactivity.\n \nResult show that the interactive learning outperforms the static learning baseline, but there are potential problems with the way the test set is collected. In MTD, the same users inherently provide both training and test examples. In the collaborative-only baseline, it is possible to ensure that the train and test sets are never annotated by the same user (which is ideal for testing generalization). If train and test sets are split this way, it would give an unfair advantage to MTD. Additionally, there is potentially a different distribution of language for gamified and non-gamified settings. By aggregating the test set over 3 MTD scenarios and 1 static scenario, the test set could be skewed towards gamified language, again making it unfair to the baseline. I would like to see the results over the different test subsets, allowing us to verify whether MTD outperforms the baseline for the baseline's test data.", "TL;DR of paper: Improved human-in-the-loop data collection using crowdsourcing. The basic gist is that on every round, N mechanical turkers will create their own dataset. Each turker gets a copy of a base model which is trained on their own dataset, and each trained model is evaluated on all the other turker datasets. The top-performing models get a cash bonus, incentivizing turkers to provide high quality training data. A new base model is trained on the pooled-together data of all the turkers, and a new round begins. The results indicate an improvement over static data collection.\n\nThis idea of HITL dataset creation is interesting, because the competitive aspect incentivizes turkers to produce high quality data. Judging by the feedback given by turkers in the appendix, the workers seem to enjoy the competitive aspect, which would hopefully lead to better data. The results seem to suggest that MTD provides an improvement over non-HITL methods.\n\nThe authors repeatedly emphasize the \"collaborative\" aspect of MTD, saying that the turkers have to collaborate to produce similar dataset distributions, but this is misleading because the turkers don't get to see other datasets. MTD is mostly competitive, and the authors should reduce the emphasis on a stretched definition of collaboration.\n\nOne questionable aspect of MTD is that the turkers somehow have to anticipate what are the best examples for the model to train with. That is, the turkers have to essentially perform the example selection process in active learning with relatively little interaction with the training model. While the turkers are provided immediate feedback when the model already correctly classifies the proposed training example, it seems difficult for turkers to anticipate when an example is too hard, because they have no idea about the learning process.\n\nMy biggest criticism is that MTD seems more like an NLP paper rather than an ICLR paper. 
I gave a 7 because I like the idea, but I wouldn't be upset if the AC recommends submitting to an NLP conference instead.", "The paper provides an interesting data collection scheme that improves upon standard collection of static databases that have multiple shortcomings -- End of Section 3 clearly summarizes the advantages of the proposed algorithm. The paper is easy to follow and the evaluation is meaningful.\n\nIn MTD, both data collection and training the model are intertwined and so, the quality of the data can be limited by the learning capacity of the model. It is possible that after some iterations, the data distribution is similar to previous rounds in which case, the dataset becomes similar to static data collection (albeit at a much higher cost and effort). Is this observed ? Further, is it possible to construct MTD variants that lead to constantly improving datasets by being agnostic to the actual model choice ? For example, utilizing only the priors of the D_{train_all}, mixing model and other humans' predictions, etc.\n\n\n\n", "We have updated the paper with some small changes that clarify various points in consideration of the reviewers comments. Specifically for reviewer 3 we have updated the text with respect to the collaborative vs. competitive aspects of MTD, those changes appear in Section 3 and Appendix I. ", "- How is the test set broken down and what are the results in each part?\nThe breakdown of the test set into the three portions (MTD limit, Baseline Test and Pilot Study) is shown in Table 5. MTD (with AC-Seq2Seq) outperforms the baseline on each portion (but by different amounts). Because MTD emphasizes a curriculum the data is generally more difficult than the baseline data which is why the MTD-trained models are better at predicting longer sequences (see Table 4), which is why performance of all models is worse on that data compared to the baseline data, and the gap between the models is bigger.\n", "- Collaborative aspect of MTD\nMTD is collaborative in the sense that the human players are building a shared model over multiple iterations, which is important for learning a good model. More specifically, the turkers collaborate in two ways. First, after each round, the data of all Turkers are merged to train a single model. It is clear that having 30 different models would make any of those models worse than a single model built with the collaborative baseline. Second, a Turker in the current round benefits from other Turkers in previous rounds, which ensures that worse-off Turkers from previous rounds can still compete. As we point out in the paper, this is related to the publication model of the research community, where researchers collaborate by using others’ results to build research for the next conference (in MTD, it is the same, where those results are in the form of data and models, rather than papers). We have indeed thought about showing the Turker’s samples of the other datasets, and even implemented that at one point, but decided against in our experiments as it introduced unnecessary complexity. We will add text soon to the paper discussing these issues further.\n\n- It is hard for Turkers to anticipate the correct curriculum.\nYes, currently the best feedback they get is the immediate output from the model when they type an example, they know whether the model can already do it or not, which we think is pretty good feedback. 
We also experimented with giving the predictions of the model (rather than just a message of whether it could do it or not). We thought this would be good for expert labelers, but we decided against using it in our experiments because we thought it would be too complicated for casual Turkers. One could also show examples that the model is currently good or bad at from the last round, as mentioned in the previous point, but again this would add complexity to our experiments for this paper. However, we believe MTD is extensible in many ways. Note that these points are also already discussed in Appendix I.\n\n- MTD seems more like an NLP paper rather than an ICLR paper\nThe core of our paper is about a method for learning representations for language grounding, which includes a pipeline of interaction with humans, a learning environment and an embodied agent which performs the learning. Although we evaluated on a language task, the same method could be used on many other tasks. We believe that both the method and the task that we chose are of interest to the ICLR audience, and all of the reviewers appear to agree that it is interesting (and we as ML researchers like it too!). Past ICLR conferences have had papers utilizing language, vision, speech, etc. In the call for papers it is also written that the following are relevant topics: “applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field”. Hence, we think ICLR is one of the most suitable conferences for this type of work.", "- Q: Could the quality of the data be limited by the model? Is this observed?\nWe did not observe this yet, but it is possible -- the data is optimized for the model, it might not be optimal for e.g. a higher capacity model. On the other hand, since we optimize hyperparameters of the model each round, it can increase its capacity on the fly, which would mitigate this effect to some extent. If a high-capacity model cannot fit some complex data, however, e.g. due to optimization challenges, it is possible that the data distribution would gradually become static. In this case, the bottleneck is actually our optimization algorithms and models, rather than the data collection paradigm; i.e., MTD is doing its best in terms of coordinating the training data distribution to provide a good curriculum. Empirically, in Fig 3 we show learning curves for different models and approaches, which have not saturated after 5 rounds. \n\n- Q: Is it possible to construct MTD variants that lead to constantly improving datasets by being agnostic to the actual model choice ? \nWe’re not clear on how to do that, but if you have ideas then we’d love to hear them! The model is used to score the human’s data, so you would need to replace it with a model-agnostic automatic scoring function somehow. The benefit of using a model in the loop, as we do, is that you are actually optimizing for what your model can do (the human teacher is optimizing the curriculum for the model).\n" ]
[ 7, 7, 8, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1 ]
[ "iclr_2018_SJ-C6JbRW", "iclr_2018_SJ-C6JbRW", "iclr_2018_SJ-C6JbRW", "iclr_2018_SJ-C6JbRW", "r14hglcez", "ByLXrM9eG", "SyXWKhaxM" ]
iclr_2018_Hyg0vbWC-
Generating Wikipedia by Summarizing Long Sequences
We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.
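The "decoder-only architecture that can scalably attend to very long sequences" mentioned here is easiest to see through its memory-compressed attention: keys and values are shortened with a strided convolution before ordinary dot-product attention. The sketch below is a single-head, mask-free simplification in PyTorch; the class name, dimensions, and compression stride are illustrative, not the paper's exact configuration.

```python
# Single-head memory-compressed attention sketch (illustrative settings).
import torch
import torch.nn.functional as F

class MemoryCompressedAttention(torch.nn.Module):
    def __init__(self, d_model, stride=3):
        super().__init__()
        # Strided 1-D convolution shortens the key/value sequence by ~stride,
        # shrinking the attention matrix from L x L to roughly L x L/stride.
        self.compress = torch.nn.Conv1d(d_model, d_model,
                                        kernel_size=stride, stride=stride)

    def forward(self, q, k, v):  # each: (batch, seq_len, d_model)
        k = self.compress(k.transpose(1, 2)).transpose(1, 2)
        v = self.compress(v.transpose(1, 2)).transpose(1, 2)
        scores = q @ k.transpose(1, 2) / (q.size(-1) ** 0.5)
        return F.softmax(scores, dim=-1) @ v
```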
accepted-poster-papers
This paper presents a new multi-document summarization task of trying to write a Wikipedia article based on its sources. Reviewers found the paper and the task clear to understand and well-explained. The modeling aspects are clear as well, although lacking justification. Reviewers are split on the originality of the task, saying that it is certainly big, but wondering if that makes it difficult to compare with. The main split was the feeling that "the paper presents strong quantitative results and qualitative examples" versus a frustration that the experimental results did not take into account extractive baselines or analysis. However, the authors provide a significantly updated version of the work targeting many of these concerns, which does alleviate some of the main issues. For these reasons, despite one low review, my recommendation is that this work be accepted as a very interesting contribution.
train
[ "r129mGrxf", "BJGaExqgz", "H1VuTvqgG", "SyEJe4v7M", "SkUg9xLfz", "Hy663x8ff", "SJ3RoxLzz", "S1npqlIMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper considers the task of generating Wikipedia articles as a combination of extractive and abstractive multi-document summarization task where input is the content of reference articles listed in a Wikipedia page along with the content collected from Web search and output is the generated content for a target Wikipedia page. The authors at first reduce the input size by using various extractive strategies and then use the selected content as input to the abstractive stage where they leverage the Transformer architecture with interesting modifications like dropping the encoder and proposing alternate self-attention mechanisms like local and memory compressed attention. \n\nIn general, the paper is well-written and the main ideas are clear. However, my main concern is the evaluation. It would have been nice to see how the proposed methods perform with respect to the existing neural abstractive summarization approaches. Although authors argue in Section 2.1 that existing neural approaches are applied to other kinds of datasets where the input/output size ratios are smaller, experiments could have been performed to show their impact. Furthermore, I really expected to see a comparison with Sauper & Barzilay (2009)'s non-neural extractive approach of Wikipedia article generation, which could certainly strengthen the technical merit of the paper.\n\nMore importantly, it was not clear from the paper if there was a constraint on the output length when each model generated the Wikipedia content. For example, Figure 5-7 show variable sizes of the generated outputs. With a fixed reference/target Wikipedia article, if different models generate variable sizes of output, ROUGE evaluation could easily pose a bias on a longer output as it essentially counts overlaps between the system output and the reference. \n\nIt would have been nice to know if the proposed attention mechanisms account for significantly better results than the T-ED and T-D architectures. Did you run any statistical significance test on the evaluation results? \n\nAuthors claim that the proposed model can generate \"fluent, coherent\" output, however, no evaluation has been conducted to justify this claim. The human evaluation only compares two alternative models for preference, which is not enough to support this claim. I would suggest to carry out a DUC-style user evaluation (http://www-nlpir.nist.gov/projects/duc/duc2007/quality-questions.txt) methodology to really show that the proposed method works well for abstractive summarization.\n\nDoes Figure 8 show an example input after the extractive stage or before? Please clarify.\n\n---------------\nI have updated my scores as authors clarified most of my concerns.", "This paper proposes an approach to generating the first section of Wikipedia articles (and potentially entire articles). \nFirst relavant paragraphs are extracted from reference documents and documents retrieved through search engine queries through a TD-IDF-based ranking. Then abstractive summarization is performed using a modification of Transformer networks (Vasvani et al 2017). A mixture of experts layer further improves performance. \nThe proposed transformer decoder defines a distribution over both the input and output sequences using the same self-attention-based network. On its own this modification improves perplexity (on longer sequences) but not the Rouge score; however the architecture enables memory-compressed attention which is more scalable to long input sequences. 
It is claimed that the transformer decoder makes optimization easier but no complete explanation or justification of this is given. Computing self-attention and softmaxes over entire input sequences will significantly increase the computational cost of training.\n\nIn the task setup the information retrieval-based extractive stage is crucial to performance, but this contribution might be less important to the ICLR community. It willl also be hard to reproduce without significant computational resources, even if the URLs of the dataset are made available. The training data is significantly larger than the CNN/DailyMail single-document summarization dataset.\n\nThe paper presents strong quantitative results and qualitative examples. Unfortunately it is hard to judge the effectiveness of the abstractive model due to the scale of the experiments, especially with regards to the quality of the generated output in comparison to the output of the extractive stage.\nIn some of the examples the system output seems to be significantly shorter than the reference, so it would be helpful to quantify this, as well how much the quality degrades when the model is forced to generate outputs of a given minimum length. While the proposed approach is more scalable, it is hard to judge the extend of this.\n\nSo while the performance of the overall system is impressive, it is hard to judge the significance of the technical contribution made by the paper.\n\n---\nThe additional experiments and clarifications in the updated version give substantially more evidence in support of the claims made by the paper, and I would like to see the paper accepted. \n", "The main significance of this paper is to propose the task of generating the lead section of Wikipedia articles by viewing it as a multi-document summarization problem. Linked articles as well as the results of an external web search query are used as input documents, from which the Wikipedia lead section must be generated. Further preprocessing of the input articles is required, using simple heuristics to extract the most relevant sections to feed to a neural abstractive summarizer. A number of variants of attention mechanisms are compared, including the transofer-decoder, and a variant with memory-compressed attention in order to handle longer sequences. The outputs are evaluated by ROUGE-L and test perplexity. There is also a A-B testing setup by human evaluators to show that ROUGE-L rankings correspond to human preferences of systems, at least for large ROUGE differences.\n\nThis paper is quite original and clearly written. The main strength is in the task setup with the dataset and the proposed input sources for generating Wikipedia articles. The main weakness is that I would have liked to see more analysis and comparisons in the evaluation.\n\nEvaluation:\nCurrently, only neural abstractive methods are compared. I would have liked to see the ROUGE performance of some current unsupervised multi-document extractive summarization methods, as well as some simple multi-document selection algorithms such as SumBasic. Do redundancy cues which work for multi-document news summarization still work for this task?\n\nExtractiveness analysis:\nI would also have liked to see more analysis of how extractive the Wikipedia articles actually are, as well as how extractive the system outputs are. Does higher extractiveness correspond to higher or lower system ROUGE scores? 
This would help us understand the difficulty of the problem, and how much abstractive methods could be expected to help. \n\nA further analysis which would be nice to do (though I have less clear ideas how to do it), would be to have some way to figure out which article types or which section types are amenable to this setup, and which are not. \n\nI have some concern that extraction could do very well if you happen to find a related article in another website which contains encyclopedia-like or definition-like entries (e.g., Baidu, Wiktionary) which is not caught by clone detection. In this case, the problem could become less interesting, as no real analysis is required to do well here.\n\nOverall, I quite like this line of work, but I think the paper would be a lot stronger and more convincing with some additional work.\n\n----\nAfter reading the authors' response and the updated submission, I am satisfied that my concerns above have been adequately addressed in the new version of the paper. This is a very nice contribution.\n", "Thanks for your detailed response and follow-up work!", "- We found common, actionable feedback from the three reviewers to augment the evaluation section of the paper and believe we have significantly improved it.\n - We added results from a DUC-style linguistic quality human evaluation, showing our model significantly outperforms pure extractive methods we tried as well as an abstractive baseline.\n - We quantified the performance of extractive-only methods on our proposed task, adding results of two more well-cited methods, SumBasic and TextRank (in addition to tf-idf). We show that the abstractive stage indeed adds significant lift to ROUGE and human evaluation performance.\n - We added a section with a comparison with Sauper and Barzilay.\n\n- We quantify the extractiveness of the dataset in section 2.1. Although we mentioned we had a Wiki clone-detection algorithm, we didn’t quantify the results. In Table 1, we show that the proportion of unigrams/words in the output co-occurring in the input is much lower than in other summarization datasets at 59.2%. \n\n- Some other clarifications made in paper\n - Modified caption for Figure 8 to make it clear it is the output of the extractive stage and what the abstractive model uses as input for article generation.\n - We note output length is not constrained in abstractive models. There’s a length penalty hyper-parameter, \\alpha, that is tuned on the validation set. For ROUGE-L scores, we report the more appropriate F1 flavor, which is the harmonic \nmean of ROUGE-Recall (favors long summaries) and ROUGE-Precision (favors short summaries).\nelaborated on justification for T-D architecture for monolingual text-to-text problems\n\n- On feasibility of reproducibility: We will be providing a script that generates the data locally from a local copy of the CommonCrawl dataset, which can be downloaded from http://commoncrawl.org/. Because the script will run locally it will be significantly faster than downloading webpages from the Internet.\n\nOverall we believe the paper is much stronger after the suggestions from the reviewers. Please re-consider scores after this revision. Thank you!", "“It would have been nice to see how the proposed methods perform with respect to the existing neural abstractive summarization approaches. 
Although authors argue in Section 2.1 that existing neural approaches are applied to other kinds of datasets where the input/output size ratios are smaller, experiments could have been performed to show their impact.”\n\n- In addition to our proposed neural architectures, we compare to very strong existing baselines, seq2seq with attention, which gets state-of-the-art on the Gigaword summarization task as well as the Transformer encoder-decoder, which gets state-of-the-art on translation, a related task from which most summarization techniques arise. We show our models significantly outperform those competitive abstractive methods.\n\n\n“Furthermore, I really expected to see a comparison with Sauper & Barzilay (2009)'s non-neural extractive approach of Wikipedia article generation, which could certainly strengthen the technical merit of the paper.”\n\n- This is a fair point and we added a section comparing with Sauper & Barzilay in Experiments.\n\n\n“More importantly, it was not clear from the paper if there was a constraint on the output length when each model generated the Wikipedia content. For example, Figure 5-7 show variable sizes of the generated outputs. With a fixed reference/target Wikipedia article, if different models generate variable sizes of output, ROUGE evaluation could easily pose a bias on a longer output as it essentially counts overlaps between the system output and the reference.”\n\n- The models are not constrained to output a certain length. Instead we generate until an end-of-sequence token is encountered. There is a length-penalty, alpha, that we tune based on performance of the validation set. In our case, the ROUGE F1 evaluation is fair because it is the harmonic mean of ROUGE-Recall (favors long summaries) and ROUGE-Precision (favors short summaries). As a result, longer output is penalized if it is not useful and related to the target. We tried to clarify this in the Experiments section.\n\n\n“It would have been nice to know if the proposed attention mechanisms account for significantly better results than the T-ED and T-D architectures.”\n\n- We don’t claim for this task that T-D does better than T-ED for short input lengths. However, we hope Figure 3 makes it clear that the T-ED architecture begins to fail and is no longer competitive for longer inputs. In particular, our architecture improvements allow us to consider much larger input lengths, which results in significantly higher ROUGE and and human evaluation scores.\n\n\n“Authors claim that the proposed model can generate \"fluent, coherent\" output, however, no evaluation has been conducted to justify this claim. The human evaluation only compares two alternative models for preference, which is not enough to support this claim. I would suggest to carry out a DUC-style user evaluation (http://www-nlpir.nist.gov/projects/duc/duc2007/quality-questions.txt) methodology to really show that the proposed method works well for abstractive summarization.”\n\n- This is a good point and we followed your suggestion and added a DUC-style human evaluation of linguistic quality. We hope we make it clear that the best abstractive model proposed is significanlty much more fluent/coherent than the best extractive method we tried and another baseline abstractive method (seq2seq). The quality scores are also high in the absolute sense.\n\n\n“Does Figure 8 show an example input after the extractive stage or before? 
Please clarify.”\n\n- We clarified in the paper (now Figure 10) that it is the output of the extractive stage, before the abstractive stage.\n", "Thank you for the detailed review with actionable feedback. We found common feedback from the three reviewers to augment the evaluation section of the paper and believe we have significantly improved it. In particular, please see responses below in-line as well as rebuttals for other reviews and the summary of changes above.\n\n\n“In the task setup the information retrieval-based extractive stage is crucial to performance, but this contribution might be less important to the ICLR community.”\n\n- We added additional analysis to demonstrate that the extractive stage while important, is far from sufficient to produce good wikipedia articles. We show that ROUGE and human evaluation are greatly improved by the abstractive stage.\n\n\n“It willl also be hard to reproduce without significant computational resources, even if the URLs of the dataset are made available.”\n\n- We will be providing a script for generating the dataset from the CommonCrawl dataset (which is freely available for download). It will run locally instead of downloading over the Internet, and so will be relatively much faster.\n\n\n“Unfortunately it is hard to judge the effectiveness of the abstractive model due to the scale of the experiments, especially with regards to the quality of the generated output in comparison to the output of the extractive stage.”\n\n- We added analysis of the incremental performance of the abstractive model over the extractive output in terms of ROUGE and human evaluation (DUC-style linguistic quality evaluation). We believe we make a strong case that the abstractive model is doing something highly non-trivial and significant. \n\n\n“In some of the examples the system output seems to be significantly shorter than the reference, so it would be helpful to quantify this, as well how much the quality degrades when the model is forced to generate outputs of a given minimum length.”\n- We clarified in the paper that the models are not constrained to output a certain length. Instead we generate until an end-of-sequence token is encountered. There is a length-penalty, alpha, that we tune based on performance of the validation set.\n\n\n“So while the performance of the overall system is impressive, it is hard to judge the significance of the technical contribution made by the paper.”\n- In addition to the proposed task/dataset, we believe the technical significance is demonstrating how very long text-to-text sequence transduction tasks can be done. Previous related work in translation or summarization focused on much shorter sequences. We had to introduce new model architectures to solve this new problem and believe it would be of great interest to the ICLR community. We hope our added evaluations make this claim more convincing.\n", "Thank you for the detailed review with actionable feedback. We found common feedback from the three reviewers to augment the evaluation section of the paper and believe we have significantly improved it. In particular, please see responses below in-line where we address all of your feedback. \n\n“This paper is quite original and clearly written. The main strength is in the task setup with the dataset and the proposed input sources for generating Wikipedia articles. 
“\n\n- In addition to the task setup, we believe we’ve demonstrated how to do very long (much longer than previously attempted) text-to-text sequence transduction and introduced a new model architecture to do it. We believe this is of great interest to the ICLR community.\n\n“Currently, only neural abstractive methods are compared. I would have liked to see the ROUGE performance of some current unsupervised multi-document extractive summarization methods, as well as some simple multi-document selection algorithms such as SumBasic. Do redundancy cues which work for multi-document news summarization still work for this task?”\n\n- We implemented SumBasic and TextRank (along with tf-idf) to evaluate extractive methods on their own and evaluated them on this task. I believe we show convincingly in the results (e.g. extractive bar-plot) that the abstractive stage indeed adds a lot to the extractive output in terms of ROUGE and human evaluation of linguistic quality and that redundancy cues are not enough.\n\n“I would also have liked to see more analysis of how extractive the Wikipedia articles actually are, as well as how extractive the system outputs are. Does higher extractiveness correspond to higher or lower system ROUGE scores? This would help us understand the difficulty of the problem, and how much abstractive methods could be expected to help.”\n\n- In Section 2.1 we computed the proportion of unigrams/words in the output co-occurring in the input for our task and for the Gigaword and CNN/DailyMail datasets and showed that by this measure WikiSum is much less extractive. In particular, the presence of wiki-clones in the input would give a score of 100%, whereas we report 59.2%.\n\n“A further analysis which would be nice to do (though I have less clear ideas how to do it), would be to have some way to figure out which article types or which section types are amenable to this setup, and which are not.”\n\n- We added a comparison in the paper with Sauper & Barzilay on two Wiki categories. It turns out we do worse on Diseases compared to Actors. We think this is because we use a single model for all categories and the training data is heavily biased toward people.\n\n“I have some concern that extraction could do very well if you happen to find a related article in another website which contains encyclopedia-like or definition-like entries (e.g., Baidu, Wiktionary) which is not caught by clone detection. In this case, the problem could become less interesting, as no real analysis is required to do well here.”\n- We hope our added analysis in Section 2.1 mentioned above should address this concern.\n" ]
[ 7, 8, 7, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hyg0vbWC-", "iclr_2018_Hyg0vbWC-", "iclr_2018_Hyg0vbWC-", "SJ3RoxLzz", "iclr_2018_Hyg0vbWC-", "r129mGrxf", "BJGaExqgz", "H1VuTvqgG" ]
iclr_2018_rkYTTf-AZ
Unsupervised Machine Translation Using Monolingual Corpora Only
Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet these still require tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.
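The denoising half of this abstract's objective relies on corrupting a sentence before asking the model to reconstruct the original. Below is a small, runnable approximation of such a noise function, combining random word drops with a bounded local shuffle; the probabilities and the random-key shuffle are illustrative stand-ins, not the paper's exact noise model.

```python
# Illustrative sentence-corruption function for a denoising objective.
import random

def corrupt(sentence, p_drop=0.1, k_shuffle=3):
    words = sentence.split()
    # Drop each word with probability p_drop (keep everything if all dropped).
    words = [w for w in words if random.random() > p_drop] or words
    # Local shuffle: jitter each position by up to k_shuffle before sorting,
    # so words move only a bounded distance from where they started.
    keys = [i + random.uniform(0, k_shuffle) for i in range(len(words))]
    return " ".join(w for _, w in sorted(zip(keys, words), key=lambda t: t[0]))

print(corrupt("the quick brown fox jumps over the lazy dog"))
```

Training then reconstructs the original sentence from corrupt(sentence) within each language, while back-translation applies the same reconstruction idea across languages.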
accepted-poster-papers
This work presents some of the first results on unsupervised neural machine translation. The group of reviewers is highly knowledgeable in machine translation; they were generally very impressed by the results and think the work warrants a whole new area of research, noting "the fact that this is possible at all is remarkable." There were some concerns with the clarity of the details presented and how the method might be reproduced, but it seems like much of this was cleared up in the discussion. The reviewers generally praise the thoroughness of the method, the experimental clarity, and the use of ablations. One reviewer was less impressed and felt more comparison should be done.
val
[ "B1POjpKef", "HJlJ_aqgf", "r1uaaZRxf", "BkJl1m6mf", "rkFvP_9mG", "Sy8bGbVmf", "BJd4kZNmz", "r19B0xEmM", "SJNmPo-Xf", "Sk7cfh-zG", "BJQe95agz", "Hy59njgkG", "rJPiWDkJf", "SkrSo6EC-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "public", "author", "author", "public", "public" ]
[ "This paper describes an approach to train a neural machine translation system without parallel data. Starting from a word-to-word translation lexicon, which was also learned with unsupervised methods, this approach combines a denoising auto-encoder objective with a back-translation objective, both in two translation directions, with an adversarial objective that attempts to fool a discriminator that detects the source language of an encoded sentence. These five objectives together are sufficient to achieve impressive English <-> German and Engish <-> French results in Multi30k, a bilingual image caption scenario with short simple sentences, and to achieve a strong start for a standard WMT scenario.\n\nThis is very nice work, and I have very little to criticize. The approach is both technically interesting, and thorough in that it explores and combines a host of ideas that could work in this space (initial bilingual embeddings, back translation, auto-encoding, and adversarial techniques). And it is genuinely impressive to see all these pieces come together into something that translates substantially better than a word-to-word baseline. But the aspect I like most about this paper is the experimental analysis. Considering that this is a big, complicated system, it is crucial that the authors included both an ablation experiment to see which pieces were most important, and an experiment that indicates the amount of labeled data that would be required to achieve the same results with a supervised system.\n\nIn terms of specific criticisms:\n\nIn Equations (2), consider replacing C(y) with C(M(x)), or use compose notation, in order to make x-hat's relationship to x clear and self-contained within the equation.\n\nI am glad you take the time to give your model selection criterion it's own section in 3.2, as it does seem to be an important part of this puzzle. However, it would be nice to provide actual correlation statistics rather than an anecdotal illustration of correlation.\n\nIn the first paragraph of Section 4.5, I disagree with the sentence, \"Similar observations can be made for the other language pairs we considered.\" In fact, I would go so far as to say that the English to French scenario described in that paragraph is a notable outlier, in that it is the other language pair where you beat the oracle re-ordering baseline in both Multi30k and WMT.\n\nWhen citing Shen et al., 2017, consider also mentioning the following:\n\nControllable Invariance through Adversarial Feature Learning; Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig; NIPS 2017; https://arxiv.org/abs/1705.11122\n\nResponse read -- thanks.", "The authors present an approach for unsupervised MT which uses a weighted loss function containing 3 components: (i) self reconstruction (ii) cross reconstruction and (iii) adversarial loss. The results are interesting (but perhaps less interesting than what is hinted in the abstract). \n\n1) In the abstract the authors mention that they achieve a BLEU score of 32.8 but omit the fact that this is only on Multi30K dataset and not on the more standard WMT datasets. At first glance, most people from the field would assume that this is on the WMT dataset. I request the authors to explicitly mention this in the abstract itself (there is clearly space and I don't see why this should be omitted)\n\n2) In section 2.3, the authors talk about the Noise Model which is inspired by the standard Denoising Autoencoder setup. 
While I understand the robustness argument in the case of AEs I am not convinced that the same applies to languages. Such random permutations can often completely alter the meaning of the sentence. The ablation test seems to suggest that this process helps. I read another paper which suggests that this noise does not help (which intuitively makes sense). I would like the authors to comment on this (of course, I am not asking you to compare with the other paper but I am just saying that I have read contradicting observations - one which seems intuitive and the other does not).\n\n3) How were the 3 lambdas in Equation 3 selected? What ranges did you consider? The three loss terms seem to have very different ranges. How did you account for that?\n\n4) Clarification: In section 2.5 what exactly do you mean by \"as long as the two monolingual corpora exhibit strong structure in feature space.\" How do you quantify this?\n\n5) In section 4.1, can you please mention the exact number of sentences that you sampled from WMT'14? You mention that you selected sentences from 15M random pairs, but how many did you select? The caption of one of the figures mentions that there were 10M sentences. Just want to confirm this.\n\n6) The improvements are much better on the Multi30k dataset. I guess this is because this dataset has shorter sentences with a smaller vocabulary. Can you provide a table comparing the average number of sentences and vocabulary size of the two datasets (Multi30k and WMT).\n\n7) The ablation results are provided only for the Multi30k dataset. Can you provide similar results for the WMT dataset? Perhaps this would help in answering my query in point (2) above.\n\n8) Can you also check the performance of a PBSMT system trained on 100K parallel sentences? Although NMT outperforms PBSMT when the data size is large, PBSMT might still be better suited for low resource settings.\n\n9) There are some missing citations (already pointed out by others in the forum). Please add those.\n\n\n+++++++++++++++++++++++\nI have noted the clarifications posted by the authors. I still have concerns about a couple of things. For example, I am still not convinced about the justification given for word order. I understand that empirically it works better but I don't get the intuition. Similarly, I don't get the argument about \"strong structure in feature space\". This is just a conjecture and it is very hard to measure it. I would request the authors to not emphasize it or to give a different, more grounded intuition. \n\nI do acknowledge the efforts put in by the authors to address some of my comments and for that I would like to change my rating a bit.\n", "This paper introduces an architecture for training a MT model without any parallel material, and tests it on benchmark datasets (WMT and captions) for two language pairs. Although the resulting performance is only about half that of a more traditional model, the fact that this is possible at all is remarkable.\n\nThe method relies on fairly standard components which will be familiar to most readers: a denoising auto-encoder and an adversarial discriminator. Not much detail is given on the actual models used, for which the authors mainly refer to prior work. This is disappointing: the article would be more self-contained by providing even a high-level description of the models, such as provided (much too late) for the discriminator architecture.\n\nMisc comments:\n\n\"domain\" seems to be used interchangeably with \"language\". 
This is unfortunate as \"domain\" has another, specific meaning in NLP in general and SMT in partiular. Is this intentional (if so what is the intention?) or is this just a carry-over from other work in cross-domain learning?\n\nSection 2.3: How do you sample permutations for the noise model, with the constraint on reordering range, in the general case of sentences of arbitrary lengths?\n\nSection 2.5: \"the previously introduced loss [...] mitigates this concern\" -- How? Is there a reference backing this?\n\nFigure 3: In the caption, what is meant by \"(t) = 1\"? Are these epochs only for the first iteration (from M(1) to M(2))?\n\nSection 4.1: Care is taken to avoid sampling corresponding src and tgt sentences. However, was the parallel corpus checked for duplicates or near duplicates? If not, \"aligned\" segments may still be present. (Although it is clear that this information is not used in the algorithm)\n\nThis yields a natural question: Although the two monolingual sets extracted from the parallel data are not aligned, they are still very close. It would be interesting to check how the method behaves on really comparable corpora where its advantage would be much clearer.\n\nSection 4.2 and Table 1: Is the supervised learning approach trained on the full parallel corpus? On a parallel corpus of similar size?\n\nSection 4.3: What are the quoted accuracies (84.48% and 77.29%) measured on?\n\nSection 4.5: Experimental results show a regular inprovement from iteration 1 to 2, and 2 to 3. Why not keep improving performance? Is the issue training time?\n\nReferences: (He, 2016a/b) are duplicates\n\nResponse read -- thanks.", "We would like to thank the reviewers for all their comments and constructive feedback. We answered to each review individually, and uploaded a revision of the paper. Here is brief summary of the changes we made:\n- updated some of the claims in the paper\n- added details about the results in the abstract\n- provided some statistics about the correlation between our unsupervised criterion and the BLEU test score\n- added result with phrase based baseline\n- added details about the model architecture\n- explained how we generate random permutations with a constraint on the reordering\n- simplified some notation\n- clarified a few sentences / fixed typos\n- added a table to compare the average number of sentences and the vocabulary size in the considered datasets\n- added missing citations", "Hello, thank you for your note.\n\nWe computed the BLEU score using the Moses perl script: https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl\nIt is standard to multiply the score by 100 so that the BLEU is between 0 and 100 instead of 0 and 1, this is what the Moses script does and what we reported.\n\nRegarding the WBW baseline, it should be simple to reproduce because we used the default hyper-parameters in MUSE. Note that for the Multi30k dataset we used the fastText monolingual embeddings (we did not train them on the Multitask30k dataset because it’s too small).\nFinally, the translation is simply done word-by-word given the source reference file, and directly evaluated using the Moses script. Source words that are not in the dictionary were simply ignored.\n\nPlease, let us know if you have other questions.\nThanks.", "We thank the reviewer for the feedback and comments. We address each of them in turn:\n\n1) It is true that the 32.8 BLEU score in the abstract was misleading, thank you for pointing this out. 
We updated the paper as follows: “We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.”\n\n2) Modifying word order and dropping words definitely alters the meaning of the sentence. However, without this component, we observed that the autoencoder was simply learning to copy the words in the input sentence one-by-one. Adding noise to the input sentence turned out to be an efficient solution that prevents the model from converging to that trivial solution. It is true that in some other tasks like sentence classification or machine translation, adding noise to the input sentence might alter the meaning of the sentence and deteriorate the overall performance of the system, but in the case of auto-encoding, this turned out to be necessary for us. In particular, on WMT en-fr / fr-en we obtain 5.24 / 6.57 without word shuffling (but with word dropout), and 1.69 / 5.54 when not using any form of noise. Please see response to 7) below for more results on the ablation study for WMT.\n\n3) We also expected the tuning of these coefficients to be critical. We ran a few experiments with values from 1e-5 to 10, but in practice, we observed very small differences using different coefficients compared to fixing everything to 1.\n\n4) “as long as the two monolingual corpora exhibit strong structure in feature space.\" This sentence was indeed incorrect; thank you for spotting this. We corrected it to “as long as the two latent representations exhibit strong structure in feature space”. What we meant is that if the learned word embeddings were iid distributed, then we would not be able to align them as any rotation would yield an equivalent matching of the two distributions. The reason why we can align well is that there are asymmetries which the algorithm exploits to align the two spaces. We are not aware of any study relating the structure in embedding space to the quality of the alignment. Here, we just meant to provide an intuition for our approach.\n\n5) Thanks for noticing this; there was indeed a mistake in the caption. The unsupervised method uses 15M sentences (each for French and English, so 30M total) with WMT14 en-fr, and 3.6M sentences with WMT16 de-en (so 1.8M for each language).\n\n6) We provided some statistics about the vocabulary size, and the number of monolingual sentences both for WMT and MMT1, in Table 1 of the updated version of the paper.\n\n7) The experimental setup used in this paper is quite expensive, which is why we initially considered the much smaller MMT1 dataset to perform the ablation study, and used that to decide on the best parameters for WMT. We ran new experiments to study the impact of each component when training on the WMT dataset, for the en-fr and fr-en language pairs. In particular, we obtain on en-fr / fr-en:\n- Without word shuffle, but with word dropout: 5.24 / 6.57\n- Without word shuffle, and without word dropout: 1.69 / 5.54\n- Without adversarial training: 10.45 / 10.29\n- Without auto-encoding: 1.20 / 1.21\n- Without word embeddings pretraining: 11.11 / 10.86\n- Without word embeddings, and without cross-domain training: 1.44 / 1.30\n- Without cross-domain training: 2.65 / 2.44\n\n8) We ran experiments with a phrase-based system and obtained much better results than with a standard NMT system. 
Actually, our phrase-based models with 10k parallel sentences obtained 15.5 and 16.0 BLEU, which is roughly on par with what we report in the paper with our NMT model for 100k pairs. Note that our supervised NMT baseline in the paper is a bit weak, as we set a very large minimum word-count cutoff to reduce the vocabulary size and accelerate the experiments (and our unsupervised approach suffers from the same issue). We are currently running further experiments for other parallel corpora sizes and the de-en pair, and will report results with PBSMT in the paper very soon. Thank you for suggesting this; these results are significantly better than what we expected, will be very valuable to the paper, and open new research directions.\n\n9) We added the relevant citations as suggested in the comments.", "We thank the reviewer for the feedback and comments.\n\nWe did not provide details about the architecture mainly because of a lack of space, but we will add it in the updated version of the paper. Briefly, our architecture was composed of a standard encoder-decoder, with 3 LSTM layers, and an attention model without input-feeding. The embedding and LSTM hidden dimensions were set to 300.\n\nWe now address the comments in turn:\n\n- We cast machine translation in the unsupervised setting as the problem of matching distributions of latent features, which can be seen as a particular instance of domain adaptation where “domain” refers to a particular language. We will make sure to clarify this in the next version of the paper.\n- To generate random permutations with the reordering range constraint, we generate a random vector of the size of the sentence, and sort it by indexes. In NumPy, for a sentence of size n, it will look like `x = (np.arange(n) + alpha * np.random.rand(n)).argsort()`, where alpha is a tunable parameter. alpha = 0 implies that x is the identity, while alpha = infinity can return any permutation. With alpha = 4, for all i in [0, n[ we have |x[i] - i| <= 3. This has been added in Section 2.3 of the paper.\n- \"the previously introduced loss [...] mitigates this concern\": we are not aware of any reference about this, but this is the intuition we had while designing our loss function. The intuition is that auto-encoding with adversarial training ensures that latent representations of sentences in the two languages have similar distributions, but nothing constrains the system to actually translate (e.g., the sentence “je parle français” could be mapped into a latent space which could be decoded into the English sentence “the car is red”, which is a correct English sentence but not a good translation). However, the back-translation term does make sure that the latent representations actually (and eventually) correspond to each other, as the system has to produce a ground truth translation from a noisy source (and the auto-encoding term helps map noisy sentences into the same latent representation). \n- In the caption of Figure 3, “(t) = 1” indeed represents the training from M(1) to M(2). We clarified this in the updated version of the paper.\n- We did some experiments to investigate whether some duplicates of the removed sentences might be present among the selected sentences. To do so, we used a simple technique based on weighted bag-of-words embeddings to retrieve the most similar sentences, and overall we have not been able to find very good matching pairs. 
We concluded that our two selected sets of sentences were different enough in the sense that most sentences will not have an equivalent translation in the opposite language. However, it is true that the two domains remain similar. We are planning to investigate the impact of the similarity of the two monolingual corpora on the translation quality in the future.\n- The supervised learning approach is trained on the full corpora (both for Multi30k and WMT).\n- The accuracies are measured on the word translation retrieval: given a test dictionary of 5000 pairs of words, we estimate how frequently a source word is properly mapped to its associated target word.\n- We ran a fourth iteration on Multi30k, but did not observe any improvement. The results would have improved by about 0.5/1 BLEU point if our unsupervised criterion had been perfect (see response to reviewer 1 about the quality of the criterion). However, using our criterion, this fourth iteration gave the same BLEU as the third iteration.", "We thank the reviewer for the feedback and comments.\n\nWe clarified Equation (2), and also provided a correlation score between the unsupervised criterion and the actual test performance; thanks for the suggestion. The paper now contains: “The unsupervised model selection criterion is used both to a) determine when to stop training and b) to select the best hyper-parameter setting across different experiments. In the former case, the Spearman correlation coefficient between the proposed criterion and BLEU on the test set is 0.95 in average. In the latter case, the coefficient is in average 0.75, which is fine but not nearly as good. For instance, the BLEU score on the test set of models selected with the unsupervised criterion are sometimes up to 1 or 2 BLEU points below the score of models selected using a small validation set of 500 parallel sentences.”\n\nThe work of Xie et al. was indeed relevant and we added it as a citation in the related work section of the updated version.\n\nAs for the first paragraph of Section 4.5, we will clarify that “similar observations” refer to improvements as we iterate. Thank you for pointing this out.", "Authors, \n\nCould I ask you to respond to the reviewers for discussion? While the reviewers here are quite positive, there are some points of clarification and concerns that would be nice to hash out. ", "Our team reproduced the word-by-word translation (WBW) baseline from the study. Based on our experiment, the WBW baseline is reproducible and we believe the whole study would be reproducible once the authors’ code is released to the public with further clarification on the BLEU score metric and hyperparameter selection. \n\nDataset: Clear references to the datasets were provided; thus we were able to acquire them by a Google search. We used the Multi30k-Task1 dataset for our experiment. Since the preprocessing steps were clearly explained, we obtained the same monolingual corpora for each language. \n\nCode: All components of the code, such as those for fastText, the BLEU score, and WBW, were accessible. The authors provided the source of the code for fastText and a clear reference of the previous study on WBW. The code of WBW was published by the Facebook Research Team for the project MUSE, which presents a word embedding model that can be trained either in a supervised or unsupervised way. The WBW code was implemented for datasets containing individual words. Because the dataset for the current study contains sentences, modification of the code was needed. 
To apply the WBW code to the Multi30k-Task1 dataset, we coded a method to translate each word of each sentence in the dataset. In addition, the BLEU score calculation package was found under the nltk.translate.api module. \n\nImplementation: Since the size of the dataset is large and our personal computers were not able to efficiently perform the training, we used Google Cloud Platform to run the code on remote CPUs. \nOur work focused on the unsupervised training, and the training process was smooth and successful. We were able to use Pytorch tensors to accelerate data processing on CPUs; we ran the code with Python 2.7; we compiled Faiss with a Python interface and used it for our experiment. \nThe challenge is the settings for parameters and hyperparameters. The default settings of the hyperparameters come with the code of WBW for the study of Conneau et al. However, the tuned hyperparameters are not identified in the current study. We decided to focus on the reproducibility of methods and used the default settings in Conneau et al. for our experiment.\n\nResult: We were unable to obtain the same BLEU score as the study did. There might be two possible reasons. First, the hyperparameters used by the authors or their implementation procedures are not the same as in our experiment. It would be useful to present the values used for the baseline model. Second, according to Papineni et al. we learnt that the BLEU score metric normally ranges from 0 to 1, but the study presents scores that are not within that range. We suggest including an explanation of how the BLEU scores were calculated in order to improve the reproducibility. \n\nReference:\nPapineni, Kishore, et al. \"BLEU: a method for automatic evaluation of machine translation.\" Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, 2002.\n", "Thank you for your note. This paper indeed builds upon previous work, and we did our best to give credit to what we thought were the most relevant papers; in fact, we have 2 pages of citations already.\nAs per your suggestion, we will add some of the references you mention. However, please keep in mind that our paper focuses on machine translation, while the references you pointed us at are more pertinent to the work on learning a bilingual dictionary.", "Thank you for pointing out these references. We will definitely revise the paper accordingly. In particular, our model is reminiscent of CycleGAN, as well as other methods successfully applied in vision (such as the Coupled Generative Adversarial Networks of Liu et al.). The major conceptual difference is that we cannot easily chain as they do in these other works, because we deal with a discrete sequence of symbols as opposed to continuous vectors. Therefore, we resort to using the model at the previous iteration to produce translations in the other language, but we do not back-prop through this. The iterative nature of our approach together with the weight sharing between our encoder/decoders are the most important differences.", "This looks like great work, and I think it merits a clear accept.\n\nThat said, I am also concerned about the lack of discussion of prior work. Almost all of the contributions which enabled the work are relegated to the companion article (\"Word translation without parallel data\"). 
While I understand the authors want to focus on their own contribution, the introduction/related work to a major achievement like this should convey how the community as a whole reached this point, especially since much of the progress was made outside of the big labs.\n\nFrom following the citations, it seems like some key steps towards unsupervised translation were:\n\n1. Monolingual high quality word vectors (https://arxiv.org/abs/1301.3781)\n2. The linear transform for word translation from small dictionaries (https://arxiv.org/abs/1309.4168)\n3. The orthogonal transform/SVD to improve resilience to low quality dictionaries (https://arxiv.org/abs/1702.03859, http://www.anthology.aclweb.org/D/D16/D16-1250.pdf)\n4. The use of a GAN, regularized towards orthogonal transform, to obtain unsupervised bilingual word vectors (http://www.aclweb.org/anthology/P17-1179)\n5. The iterative SVD procedure to enhance the GAN solution to supervised accuracy (http://www.aclweb.org/anthology/P17-1042)\n6. The realization that aligned mean word vector provides a surprisingly good bilingual sentence space (https://arxiv.org/abs/1702.03859)\n7. Finally the (significant) contribution of this work is to iterate this initial unsupervised shared sentence space towards a word order dependent translation model. A similar paper was submitted to ICLR simultaneously, \"Unsupervised Neural Machine Translation\".\n\nMy apologies to other important prior work I have missed!", "The related work section can be improved by providing more references to earlier research on learning unsupervised alignment in different domains.\n\nFor example:\nhttps://arxiv.org/abs/1611.02200 - Unsupervised cross domain image generation\nhttps://arxiv.org/abs/1612.05424 - Unsupervised Pixel–Level Domain Adaptation with Generative Adversarial Networks\nhttps://arxiv.org/abs/1606.03657 - InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets\nhttps://arxiv.org/abs/1703.10593 - Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks" ]
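The constrained shuffle described in the authors' response above fits in a few lines of NumPy; this is a minimal sketch built around the exact argsort formula they give, with the function name, default alpha, and usage example being our own additions.

```python
import numpy as np

def constrained_shuffle(tokens, alpha=4.0, rng=None):
    # Add uniform noise in [0, alpha) to each position index, then argsort.
    # With alpha = 4, every token moves by at most 3 positions, matching the
    # |x[i] - i| <= 3 property stated in the response.
    rng = np.random.default_rng() if rng is None else rng
    n = len(tokens)
    order = (np.arange(n) + alpha * rng.random(n)).argsort()
    return [tokens[i] for i in order]

print(constrained_shuffle("the cat sat on the mat".split()))
```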
[ 8, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkYTTf-AZ", "iclr_2018_rkYTTf-AZ", "iclr_2018_rkYTTf-AZ", "iclr_2018_rkYTTf-AZ", "Sk7cfh-zG", "HJlJ_aqgf", "r1uaaZRxf", "B1POjpKef", "iclr_2018_rkYTTf-AZ", "iclr_2018_rkYTTf-AZ", "rJPiWDkJf", "SkrSo6EC-", "iclr_2018_rkYTTf-AZ", "iclr_2018_rkYTTf-AZ" ]
iclr_2018_HkAClQgA-
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
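To make the mixed training objective from the abstract concrete, here is a minimal Python sketch of a self-critical policy-gradient term blended with a maximum-likelihood term; the function, its argument names, and the default mixing weight are illustrative assumptions, not the paper's exact formulation.

```python
def mixed_loss(logp_sampled_sum, reward_sampled, reward_greedy, loss_ml, gamma=0.5):
    # Self-critical baseline: the reward of the greedy decode (e.g. its
    # ROUGE score) is subtracted from the sampled sequence's reward, so
    # gradients push probability toward sequences that beat greedy decoding.
    loss_rl = -(reward_sampled - reward_greedy) * logp_sampled_sum
    # Blend the RL and ML terms; the paper tunes this mixing weight.
    return gamma * loss_rl + (1.0 - gamma) * loss_ml
```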
accepted-poster-papers
This work extends recent ideas to build a complete summarization system using clever attention, copying, and RL training. Reviewers like the work but have some criticisms, particularly in terms of its originality and potential significance, noting "It is a good incremental research, but the downside of this paper is lack of innovations since most of the methods proposed in this paper are not new to us." Still, reviewers note the experimental results are of high quality, performing excellently on several datasets and building "a strong summarization model." Furthermore, the model is extensively tested, including in "human readability and relevance assessments". The work itself is well written and clear.
train
[ "ryxZURtlf", "HyzQdZqez", "BkQAkH5lM", "S1nwDXnQM", "B1c32sZmM", "SyqgjnIWM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "public" ]
[ "The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective. The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism. A hard constraint is imposed during decoding to avoid trigram repetition. Most of the modelling ideas already exists, but this paper show how they can be applied as a strong summarization model.\n\nThe approach obtains strong results on the CNN/Daily Mail and NYT datasets. Results show that intra-attention improves performance for only one of the datasets. RL results are reported with only the best-performing attention setup for each dataset. My concern with that is that the authors might be using the test set for model selection; It is not a priori clear that the setup that works better for ML should also be better for RL, especially as it is not the same across datasets. So I suggest that results for RL should be reported with and without intra-attention on both datasets, at least on the validation set.\n\nIt is shown that intra-decoder attention decoder improves performance on longer sentences. It would be interesting to see more analysis on this, especially analyzing what the mechanism is attending to, as it is less clear what its interpretation should be than for intra-temporal attention. Further ablations such as the effect of the trigram repetition constraint will also help to analyse the contribution of different modelling choices to the performance. \n\nFor the mixed decoding objective, how is the mixing weight chosen and what is its effect on performance? If it is purely a scaling factor, how is the scale quantified? It is claimed that readability correlates with perplexity, so it would be interesting to see perplexity results for the models. The lack of correlation between automatic and human evaluation raises interesting questions about the evaluation of abstractive summarization that should be investigated further in future work.\n\nThis is a strong paper that presents a significant improvement in document summarization.\n", "This is a very clearly written paper, and a pleasure to read.\n\nIt combines some mechanisms known from previous work for summarization (intra-temporal attention; pointing mechanism with a switch) with novel architecture design components (intra-decoder attention), as well as a new training objective drawn from work from reinforcement learning, which directly optimizes ROUGE-L. The model is trained by a policy gradient algorithm. \n\nWhile the new mechanisms are simple variants of what is taken from existing work, the entire combination is well tested in the experiments. ROUGE results are reported for the full hybrid RL+ML model, as well as various versions that drop each of the new components (RL training; intra-attention). The best method finally outperforms the lea-3d baseline for summarization. What makes this paper more compelling is that they compared against a recent extractive method (Durret et al., 2016), and the fact that they also performed human readability and relevance assessments to demonstrate that their ML+RL model doesn't merely over-optimize on ROUGE. 
It was a nice result that only optimizing ROUGE directly leads to lower human evaluation scores, despite the fact that that model achieves the best ROUGE-1 and ROUGE-L performance on CNN/Daily Mail.\n\nSome minor points that I wonder about:\n - The heuristic against repeating trigrams seems quite crude. Is there a more sophisticated method that can avoid redundancy without this heuristic?\n - What about a reward based on a general language model, rather than one that relies on L_{ml} in Equation (14)? If the LM part really is to model grammaticality and coherence, a general LM might be suitable as well.\n - Why does ROUGE-L seem to work better than ROUGE-1 or ROUGE-2 as the reward? Do you have any insights or speculations regarding this?", "The paper is generally well-written and the intuition is very clear. It combines the advanced attention mechanism, pointer networks and REINFORCE learning signal to train a sequence-to-sequence model for text summarization. The experimental results show that the model is able to achieve the state-of-the-art performance on CNN/Daily Mail and New York Times datasets. It is a good incremental research, but the downside of this paper is lack of innovations since most of the methods proposed in this paper are not new to us.\n\nI would like to see the model ablation w.r.t. the repetition avoidance trick by muting the second trigram at test time. Intuitively, if the repetition issue is critical to decent summarization performance, it might affect our judgement on the significance of using intra-attention or combined RL signal.\nAnother thought on this: is it possible to integrate the trigram occurrence with the summarization reward, so that the recurrent neural networks with attention could capture the learning signal to avoid the repetition issue and the heuristic function at test time can be removed? \n\nIn addition, as the encoder-decoder structure gradually becomes the standard choice for sequence prediction, I would suggest that the authors add the total number of parameters to the model ablation for reference.\n\nSuggested References:\nBahdanau et al. (2016) An Actor-critic Algorithm for Sequence Prediction. (actor-critic on machine translation)\nMiao & Blunsom (2016) Language as a Latent Variable: Discrete Generative Models for Sentence Compression. (mixture pointer mechanism + REINFORCE)\n", "Following the helpful comments and feedback from all reviewers, we updated our paper submission with the following changes:\n- Add the number of parameters of our model\n- Add model ablation results with respect to the trigram avoidance trick on the CNN/Daily Mail and New York Times datasets\n- Add perplexity scores and compare them with human evaluation results\n- Add Bahdanau et al. (2016) and Miao & Blunsom (2016) citations\n- Other minor fixes in citations", "Authors,\n\nPlease respond to the reviewers if you have any rebuttal points. While scores are positive, it is helpful to have these points resolved. ", "Hi! I think your paper is very very instructive. Can you share the code with me? Email Address: [email protected] Thank you!\n" ]
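The trigram-repetition heuristic debated in the reviews above amounts to pruning, during beam search, any candidate token whose emission would recreate a trigram already present in the output; a minimal Python sketch (the helper name and example are ours):

```python
def repeats_trigram(prefix, candidate):
    # True if appending `candidate` to the decoded `prefix` would produce
    # a trigram that already appears earlier in the output.
    extended = prefix + [candidate]
    if len(extended) < 4:
        return False
    new_trigram = tuple(extended[-3:])
    earlier = {tuple(extended[i:i + 3]) for i in range(len(extended) - 3)}
    return new_trigram in earlier

# In beam search, candidates for which this returns True would be assigned
# probability zero, i.e., their hypotheses are pruned.
print(repeats_trigram("the cat sat on the cat".split(), "sat"))  # True
```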
[ 8, 7, 6, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1 ]
[ "iclr_2018_HkAClQgA-", "iclr_2018_HkAClQgA-", "iclr_2018_HkAClQgA-", "iclr_2018_HkAClQgA-", "iclr_2018_HkAClQgA-", "iclr_2018_HkAClQgA-" ]
iclr_2018_BJRZzFlRb
Compressing Word Embeddings via Deep Compositional Code Learning
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of a binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate reaches 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.
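For readers unfamiliar with the trick the abstract mentions, here is a minimal NumPy sketch of Gumbel-softmax sampling over one code component (one codebook of K basis vectors); the temperature, shapes, and names are illustrative — in the paper the logits are learned end-to-end.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Relaxed one-hot sample over K choices: perturb the logits with Gumbel
    # noise, then apply a tempered softmax; low tau approaches a hard choice.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(logits.shape)
    gumbel = -np.log(-np.log(u + 1e-12))
    scores = (logits + gumbel) / tau
    scores -= scores.max()            # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

print(gumbel_softmax(np.zeros(8), tau=0.5))  # soft choice among 8 bases
```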
accepted-poster-papers
This paper proposes an offline neural method using concrete/gumbel for learning a sparse codebook for use in NLP tasks such as sentiment analysis and MT. The method outperforms other methods using pruning and other sparse coding methods, and also produces somewhat interpretable codes. Reviewers found the paper to be simple, clear, and effective. There was particular praise for the strength of the results and the practicality of application. There were some issues, such as the method only being applicable to input layers and not being applicable end-to-end. The authors also did a very admirable job of responding to questions about analysis with clear and comprehensive additional experiments.
train
[ "rk0hvx5xf", "SyrG5UJ-G", "ryIqDgXbz", "Sk5cTlq7f", "rJL04SDXf", "SyW6J-AWM", "By5oXODZG", "Bk0C4WEZz", "ry-Ird7Zf", "HybsIIQbz", "ryJCJQXZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "official_reviewer", "public", "public", "public", "public" ]
[ "This paper proposed a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices. The proposed method can reduce more than 90% memory consumption while keeping original model accuracy in both the sentiment analysis task and the machine translation tasks.\n\nOverall, the paper is well-written. The motivation is clear, the idea and approaches look suitable and the results clearly follow the motivation.\n\nI think it is better to clarify in the paper that the proposed method can reduce only the complexity of the input embedding layer. For example, the model does not guarantee to be able to convert resulting \"indices\" to actual words (i.e., there are multiple words that have completely same indices, such as rows 4 and 6 in Table 5), and also there is no trivial method to restore the original indices from the composite vector. As a result, the model couldn't be used also as the proxy of the word prediction (softmax) layer, which is another but usually more critical bottleneck of the machine translation task.\nFor reader's comprehension, it would like to add results about whole memory consumption of each model as well.\nAlso, although this paper is focused on only the input embeddings, authors should refer some recent papers that tackle to reduce the complexity of the softmax layer. There are also many studies, and citing similar approaches may help readers to comprehend overall region of these studies.\n\nFurthermore, I would like to see two additional analysis. First, if we trained the proposed model with starting from \"zero\" (e.g., randomly settling each index value), what results are obtained? Second, What kind of information is distributed in each trained basis vector? Are there any common/different things between bases trained by different tasks?", "This paper presents an interesting idea to word embeddings that it combines a few base vectors to generate new word embeddings. It also adopts an interesting multicodebook approach for encoding than binary embeddings. \n\nThe paper presents the proposed approach to a few NLP problems and have shown that this is able to significant reduce the size, increase compression ratio, and still achieved good accuracy.\n\nThe experiments are convincing and solid. Overall I am weakly inclined to accept this paper.", "The authors proposed to compress word embeddings by approximate matrix factorization, and to solve the problem with the Gumbel-soft trick. The proposed method achieved compression rate 98% in a sentiment analysis task, and compression rate over 94% in machine translation tasks, without a performance loss. \n\nThis paper is well-written and easy to follow. The motivation is clear and the idea is simple and effective.\n\nIt would be better to provide deeper analysis in Subsection 6.1. The current analysis is too simple. It may be interesting to explain the meanings of individual components. Does each component is related to a certain topic? Is it meaningful to perform ADD or SUBSTRACT on the leaned code? \n\nIt may also be interesting to provide suitable theoretical analysis, e.g., relationships with the SVD of the embedding matrix.\n", "Hi, the authors of the NIPS 2017 Workshop paper have already contacted us. We share a similar idea but the work is conducted independently. They have already cited our paper in their Arxiv version. 
We will upload the revised version of our paper as soon as we are allowed to upload a revision. (Currently I cannot find the upload button on the revision page.)", "Besides that this work is conceptually related to the product quantization method (https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf) mentioned by a public comment, this work is also highly related to the linear version of our recent work presented at the NIPS 2017 Workshop on Machine Learning in Discrete Structures, entitled \"Learning K-way D-dimensional Discrete Code For Compact Embedding Representations\" (https://arxiv.org/abs/1711.03067). \n\nWe hope that the authors could mention our work in a future revision of this concurrent ICLR submission. Thanks.", "Thank you for spending the time to give us feedback.\n\nDo you mean the paper [1]? I have read both [1] and [2]; they apply sparse coding to word embeddings. Although they have a different purpose, I'm impressed by the strong interpretability they can gain through the process. I will cite the related papers on sparse coding. As two reviewers are interested in the interpretability of the codes, I'm also doing some experiments to find ways to identify the topics of the learned codes.\n\n[1] Sparse Overcomplete Word Vector Representations (Faruqui et al., 2015)\n[2] Learning Word Representations with Hierarchical Sparse Coding (Yogatama et al., 2014)\n\nFor (See et al., 2016) and the compression part of (Zhang et al., 2017), we also compare with the same pruning techniques and report the results in the experiment sections. As we only compress word embeddings, we don't have the class weighting problem as discussed in (See et al., 2016). Both of the papers report a compression rate of 80% with a small performance loss, which is identical to our results. Actually, with the pruning technique, we achieved a 90% compression rate for word embeddings in the translation tasks as shown in Table 4. However, it does not work well on the sentiment analysis task.\n", "There are also several papers using sparse coding directly on word embeddings (Yogatama et al 2015?), using optimization tools like SPAMS, instead of an autoencoder. These models are not \"deep\" but certainly worth citing and understanding the benefits of this approach. (Also worth comparing to See et al and Kim et al 2016? who both run pruning on the same dataset.) ", "Thank you for spending the time to review our paper. \n\nAs multiple reviewers are asking for us to analyze the information learned by each component, we did some extra experiments and found some interesting results.\n\nWe tried to learn a set of codes using a 3 x 256 coding scheme, which forces the model to decompose each embedding into 3 vectors. In order to maximize the compression rate, the model must make these 3 vectors as independent as possible. 
So we can think that they represent 3 concepts.\n\nThen we extracted the codes of some related words:\n------------------------------\nman\t 210 153 153\nwoman\t 232 153 153\n\nking\t 210 180 39\nqueen\t 232 180 39\n\nbritish\t 118 132 142\nlondon\t 185 126 142\n\njapan\t 118 56 21\ntokyo\t 185 36 21\n------------------------------\n\nWe can transform a \"man/king\" to \"woman/queen\" by changing the subcode \"210\" in the first component to \"232\".\nSo we can think \"210\" must be a \"male\" code, and \"232\" must be a \"female\" code.\n\nSimilarly, when we look at the country and city names, we can find \"185\" in the first component to be a \"city\" code.\n\nWe uploaded the 3x256 and 8x8 codes of the 10,000 most frequent words to anonymous gists, so those who are interested in the codes can have a look.\n\n------\n3 x 256 codes of 10k most frequent words:\nhttps://gist.github.com/anonymous/aa6c03f871900a3c4e5d7f65d74361fe\n\n8 x 8 codes of 10k most frequent words:\nhttps://gist.github.com/anonymous/584d64a28c3bb7c421eee8450cae823a", "Actually, we just obtained the codes from the authors of FastText.zip and finished the comparison a few weeks ago.\n\nTheir idea is based on normalized product quantization (NPQ), which splits a vector into K parts and quantizes each part. For each word, an extra byte is used to quantize the norm of the embedding vector. We found that one drawback of this approach is that it produces very long codes in order to achieve good model performance. Here are the results on the IMDB sentiment analysis task:\n\n------------------------------------------------------\n code len total size accuracy\nGloVe baseline - 78 MB 87.18\nNPQ (K=60) 480 bits 4.26 MB 87.11\nOur Model(16x32) 80 bits 1.23 MB 87.37\n------------------------------------------------------\n\nI think it's a nice idea to separate the vector norm from quantization and it may also work in our approach to achieve a higher compression rate. We will upload the revised paper once we are allowed to add a revision. \n\nFor Martin's paper, their method is based on sparsification. I will try to get the codes from him or find a way to compare with his model.", "Thank you for spending the time to review our paper. We hope our response can answer your questions.\n\n1. About compressing the Softmax layer\n\nWe spent a significant period of time trying to apply the proposed method to the softmax layer, though without a successful result. This may be caused by the code sharing problem (multiple words get the same code) or the loss function. However, we are still optimistic that the compositional coding approach can also be applied to the softmax, which should be our future work. We will refer the readers to some recent papers that reduce the size of the softmax layer using pruning techniques.\n\n2. About the total model size\n\nThe full sizes of the baseline models are summarized in the following table:\n\nTask Embed Full size Ratio of embed\nIMDB 78 MB 79.1 MB 98.6%\nIWSLT De-En 35 MB 94 MB 37.2%\nASPEC En-Ja 274 MB 506 MB 54.1%\n\nWe will put the information of full model sizes into the tables in the experiment section.\n\n3. Experiment with random code assignment\n\nWe tried to initialize a set of 32 x 16 codes to be random numbers and see the performance in the IMDB task. 
With the random code assignment, the accuracy is much lower than the baseline.\n\n---------------------------------------------------------------\nModel Accuracy\nBaseline 87.18\nRandom code + trained codebooks 84.19\nRandom code + random codebooks 84.72\n---------------------------------------------------------------\n\n4. Analysis of information distributed in the codes\n\nAs the codes are learned by a neural net, the interpretability is not guaranteed. However, we found some interesting relations in the codes.\n\nFor animal names, the 3rd subcode is normally a \"5\" for the plural nouns. For the verbs, we found the 2nd subcode is normally a \"0\" if the verb is in the past tense. Although there are also violations, we believe the model learned to arrange the codes in an efficient way.\n\n----\ndog 7 7 0 1 7 3 7 0\ndogs 4 7 5 1 7 3 4 0\n\ncat 0 7 0 1 7 3 7 0\ncats 4 7 5 1 7 3 4 0\n\npig 7 3 6 1 7 3 4 7\npigs 7 3 5 1 7 3 4 0\n\nfish 7 7 6 1 4 3 4 7\nfishes 7 2 5 0 7 3 4 6\n\nfox 6 5 7 1 4 3 0 0\nfoxes 6 2 5 1 7 3 4 6\n----\nbuy 0 7 2 1 4 3 3 1\nbought 0 0 2 1 4 3 3 1\n\nkick 7 6 1 1 4 3 0 0\nkicked 7 0 1 1 4 3 2 0\n\ngo 7 7 0 6 4 3 3 0\nwent 4 0 7 6 4 3 2 0\n\npick 7 6 7 1 4 3 3 0\npicked 7 0 7 1 4 0 3 0\n\ncatch 7 7 1 6 4 3 6 0\ncaught 7 0 7 4 4 3 2 0\n----", "I am curious to know if this method performs better than existing word embedding compression techniques such as Product Quantizers (which also exploit the idea of compositional codes) [1] or WTA autoencoders [2].\n\n[1] FastText.zip: Compressing text classification models https://arxiv.org/pdf/1612.03651.pdf\n[2] ANDREWS, Martin. Compressing word embeddings. In: International Conference on Neural Information Processing. Springer International Publishing, 2016. p. 413-422." ]
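Tying the code tables above back to the method: at inference, each word's embedding is reconstructed by summing one basis vector per codebook. A minimal Python sketch with random codebooks (in the paper they are learned); the dimension d is an assumption:

```python
import numpy as np

M, K, d = 8, 8, 300  # 8 x 8 scheme from the tables above; d is illustrative
rng = np.random.default_rng(0)
codebooks = rng.standard_normal((M, K, d))  # learned in the paper, random here

def compose_embedding(code):
    # `code` is a length-M sequence of integers in [0, K), e.g. the code
    # "7 7 0 1 7 3 7 0" listed for "dog" above.
    return sum(codebooks[m][c] for m, c in enumerate(code))

print(compose_embedding([7, 7, 0, 1, 7, 3, 7, 0]).shape)  # (300,)
```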
[ 8, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJRZzFlRb", "iclr_2018_BJRZzFlRb", "iclr_2018_BJRZzFlRb", "rJL04SDXf", "iclr_2018_BJRZzFlRb", "By5oXODZG", "ryJCJQXZf", "ryIqDgXbz", "ryJCJQXZf", "rk0hvx5xf", "iclr_2018_BJRZzFlRb" ]
iclr_2018_SkhQHMW0W
Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.
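A minimal Python sketch of the sparsification-with-local-accumulation step that the abstract describes (momentum correction, clipping, factor masking, and warm-up are omitted); the function name and top-k thresholding details are illustrative:

```python
import numpy as np

def sparsify_gradient(grad, residual, sparsity=0.999):
    # Fold the new gradient into the locally accumulated residual, then
    # communicate only the largest-magnitude entries; everything else stays
    # local and is sent in later iterations.
    acc = residual + grad
    k = max(1, int(round(acc.size * (1.0 - sparsity))))
    flat = np.abs(acc).ravel()
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(acc) >= threshold            # ties may admit a few extras
    sparse_update = np.where(mask, acc, 0.0)   # exchanged between nodes
    new_residual = np.where(mask, 0.0, acc)    # kept for future rounds
    return sparse_update, new_residual
```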
accepted-poster-papers
This work proposes a hybrid system for large-scale distributed and federated training of commonly used deep networks. This problem is of broad interest and these methods have the potential to be significantly impactful, as is attested by the active and interesting discussion on this work. At first there were questions about the originality of this study, but it seems that the authors have now added extra references and comparisons. Reviewers were split about the clarity of the paper itself. One notes that it is "on the whole clearly presented", but another finds it too dense, disorganized and needing clearer explanation. Reviewers were also concerned that the methods were a bit heuristic and could benefit from more details. There were also many questions about these details in the discussion forum; these should make it into the next version. The main stellar aspect of the work was the experimental results, and reviewers call them "thorough" and note they are convincing.
train
[ "SkwY9v4VG", "rJ9crpElM", "rJmrmQ5lG", "B1lk3Ojxf", "B1HmMiTXz", "ByemeWoMz", "S1te8dFmM", "H1eVtcdff", "H1o6dcuff", "HkRogNzGf", "Byn_brkfG", "S1mTqdWzM", "r1U-Go0WG", "r13BWoAZM", "rknhZsRbf", "HJbNlsRbM", "S1SGgoA-f", "HySlhQ1GM", "ryuVTiCWG", "BkCsfi0bG", "H18nes0Wz", "BJRcgoCbG", "SkhtLE9Zz", "BJDSSJ5-M", "ryR_PyE-f", "S1vhAambf", "H1Rx7PTJG", "Bk5Gd4sxf", "B1wc_Z5JG", "SJPLQcF1M", "SyqSy9YyM", "H1cZgIYyM", "HJOI6ndJz" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "official_reviewer", "author", "author", "public", "author", "public", "author", "author", "author", "author", "author", "public", "official_reviewer", "author", "author", "author", "public", "official_reviewer", "public", "public", "author", "public", "public", "author", "author", "public", "public" ]
[ "Do the four rows in Table 2 (#GPUs in total = 4, 8, 16, 32) correspond to 1, 2, 4 and 8 training nodes? Could you please also say what is the compression ratio for these four cases? Thank you.", "I think this is a good work that I am sure will have some influence in the near future. I think it should be accepted and my comments are mostly suggestions for improvement or requests for additional information that would be interesting to have.\n\nGenerally, my feeling is that this work is a little bit too dense, and would like to encourage the authors in this case to make use of the non-strict ICLR page limit, or move some details to appendix and focus more on more thorough explanations. With increased clarity, I think my rating (7) would be higher.\n\nSeveral Figures and Tables are never referenced in the text, making it a little harder to properly follow text. Pointing to them from appropriate places would improve clarity I think.\n\nAlgorithm 1 line 14: You never seem to explain what is sparse(G). Sec 3.1: What is it exactly that gets communicated? How do you later calculate the Compression Ratio? This should surely be explained somewhere.\n\nSec 3.2 you mention 1% loss of accuracy. A pointer here would be good, at that point it is not clear if it is in your work later, or in another paper. The efficient momentum correction is great!\n\nAs I was reading the paper, I got to the experiments and realized I still don't understand what is it that you refer to as \"deep gradient compression\". Pointer to Table 1 at the end of Sec 3 would probably be ideal along with some summary comments.\n\nI feel the presentation of experimental results is somewhat disorganized. It is not clear what is immediately clear what is the baseline, that should be somewhere stressed. I find it really confusing why you sometimes compare against Gradient Dropping, sometimes against TernGrad, sometimes against neither, sometimes include Gradient Sparsification with momentum correction (not clear again what is the difference from DGC). I recommend reorganizing this and make it more consistent for sake of clarity. Perhaps show here only some highlights, and point to more in the Appendix.\n\nSec 5: Here I feel would be good to comment on several other things not mentioned earlier. \nWhy do you only work with 99.9% sparsity? Does 99% with 64 training nodes lead to almost dense total updates, making it inefficient in your communication model? If yes, does that suggest a scaling limit in terms of number of training nodes? If not, how important is the 99.9% sparsity if you care about communication cost dominating the total runtime? I would really like to better understand how does this change and what is the point beyond which more sparsity is not practically useful. Put differently, is DGC with 600x size reduction in total runtime any better than DGC with 60x reduction?\n\n\nFinally, a side remark:\nUnder eq. (2) you point to something that I think could be more discussed. When you say what you do has the effect of increasing stepsize, why don't you just increase the stepsize? \nThere has recently been this works on training ImageNet in 1 hour, then in 24 minutes, latest in 15 minutes... You cite the former, but highlight different part of their work. Broader idea is that this is trend that potentially makes this kind of work less relevant. 
While I don't think that makes your work bad or misplaced, I think mentioning this would be useful as an alternative approach to the problems you mention in the introduction and use to motivate your contribution.\n...what would be your reason for using DGC as opposed to just increasing the batch size?", "This paper proposes an additional improvement over gradient dropping (Aji & Heafield) to improve communication efficiency. \n\n- First of all, the experimental results are thorough and seem to suggest the advantage of the proposed techniques.\n- The result for gradient dropping (Aji & Heafield) should be included in the ImageNet experiment.\n- I am having a hard time understanding the intuition behind v_t introduced in the momentum correction. The authors should provide some form of justification.\n - For example, providing an equivalence proof to the original update rule or some error analysis would be great\n - Did you keep a running sum of v_t over all history? Such a sum without damping (the m term in the momentum update) is likely to lead to the growing dominance of noise and divergence.\n- The momentum masking technique seems to correspond to stopping momentum when a gradient is synchronized. A discussion of the relation to asynchronous updates would be helpful.\n- Do you do non-sparse global synchronization of the momentum term? It seems that the local update of momentum is likely going to diverge, and the momentum masking somehow resets that.\n- In the experiment, did you perform local aggregations of gradients between GPU cards before sending them out to do all-reduce over the network? Doing so would reduce the bandwidth requirement.\n\nIn general, this is a paper that shows good empirical results, but it requires more work to justify the proposed correction techniques.\n\n\n---\n\nI have read the authors' updates and changed my score accordingly (see the series of discussions)\n", "The paper is thorough and on the whole clearly presented. However, I think it could be improved by giving the reader more of a road map w.r.t. the guiding principle. The methods proposed are heuristic in nature, and it's not clear what the guiding principle is. E.g., \"momentum correction\". What exactly is the problem without this correction? The authors describe it qualitatively, \"When the gradient sparsity is high, the interval dramatically increases, and thus the significant momentum effect will harm the model performance\". Can the issue be described more precisely? Similarly for gradient clipping, \"The method proposed by Pascanu et al. (2013) rescales the gradients whenever the sum of their L2-norms exceeds a threshold. This step is conventionally executed after gradient aggregation from all nodes. Because we accumulate gradients over iterations on each node independently, we perform the gradient clipping locally before adding the current gradient... \" What exactly is the issue here? It reads like a story of what the authors did, but it's not really clear why they did it.\n\nThe experiments seem quite thorough, with several methods being compared. What is the expected performance of the 1-bit SGD method proposed by Seide et al.?\n\nre. page 2: What exactly is \"layer normalization\"?\n\nre. page 4: What are \"drastic gradients\"?", "We thank the reviewer for the suggestions. We have revised our paper.\n\n- The author should emphasize that the sparsification + thresholding have an async nature, as the update only triggers when the sparse condition applies, which causes the mismatch problem. 
\nWe talked about the asynchrony nature in another way in the Sec 3.3: “Because we delay the update of small gradients, when these updates do occur, they are outdated or \\emph{stale}.”\n\n- It is a bit hard to understand the vanilla baseline(I might not do it in that way). It can be explained as local gradient aggregation and only apply momentum update at the trigger point. The update rule is no longer equivalent to SGD with momentum. \nWe revised our paper to describe the vanilla sparse SGD + momentum baseline in Sec 3.2: \"If SGD with the momentum is directly applied to the sparse gradient scenario (line 15 in Algorithm \\ref{alg:ssgd}), the update rule is no longer equivalent to Equation \\ref{eq:msgd}, which becomes:\".\n\n- The paper did not explicitly mention that the value of v_t gets reset after triggering a communication, it should be explicitly mentioned in the update equation. Justify the correctness: \nWe revised our paper to explicitly mention resetting the value of v_t by the mask in Sec 3.2: \"Similarly to the line 12 in Algorithm \\ref{alg:ssgd}, the accumulation result $v_{k,t}$ gets cleared by the mask in the $sparse\\left( \\right)$ function.\"\n\nRough intuition gives most of the current justification. We can know that one is better than another, because of the equivalence. The author should try to do more formal justification, at least in some special cases, instead of leaving it as a debt for yourself or the potential readers\n- For example, the author should be able to prove that, under only one machine and one weight value. The weight value after K updates using the vanilla approach vs. the new approach. \nWe have shown that the sparsification + local gradient accumulation can be considered as “increasing the batch size over time” in Sec 3.1.\n\n-A fundamental question that needs to be answered, is that why thresholding trigger method(which is async) works as good as sync SGD. A proof sketch to show the loss change after K step update would shed light on this and may give insights on what a good update rule is(possibly borrow some analysis from aync update and stale gradient)\n[1] has shown that running stochastic gradient descent (SGD) in an asynchronous manner can be viewed as adding a momentum-like term to the SGD iteration and a smaller momentum coefficient can work as good as sync SGD. We introduced the momentum factor masking to dynamically reduce the momentum term in Sec 3.3. Meanwhile, the asynchrony nature in the sparsification + local gradient accumulation can be considered as “increasing the batch size over time” in Sec 3.1, and there’s numerous work showing that increasing the batch size is feasible.\n\nReferences\n[1] Mitliagkas, I., Zhang, C., Hadjis, S., & Re, C. (2017). Asynchrony begets momentum, with an application to deep learning. In 54th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2016. ", "Hi DGC,\n\nThanks for your comments. I read the paper again. The idea is quite interesting, but I still cannot say that I totally understood the results. Need more time for me to digest.\n\n\nSincerely,\n\nChia-Yu", "Thanks for the response, I feel confident this contribution should be accepted.", "Thank you for your comments. \n\nThe momentum factor masking does not reset the momentum correction. 
It only blocks the momentum of delayed gradients from misleading the optimization.\n\nSuppose the last update is at iteration t-1, the next update at iteration t+T-1, and we only consider the gradients { g_{t}, g_{t+1}, ..., g_{t+T-1} }\n\n - Dense Update\n w_{t+T} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + ... + m^{\\tau-1}) x g_{t} + (1 + m + ... + m^{\\tau-2}) x g_{t+1} + ... + (1 + m + ... + m^{\\tau-T}) x g_{t+T-1} + ...], where \\tau > T\n\n - Only local gradient accumulation\n the coefficients of { g_{t}, g_{t+1}, ..., g_{t+T-1} } are always the same.\n w_{t+T} = w_{t} - lr x [ ... + 1 x g_{t} + 1 x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + m^2 + ... + m^{\\tau-T}) x (g_{t} + g_{t+1} + ... + g_{t+T-1}) + ...]\n\n - With the momentum correction,\n the coefficients of { g_{t}, g_{t+1}, ..., g_{t+T-1} } are always the same as the dense update.\n w_{t+T} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + ... + m^{\\tau-1}) x g_{t} + (1 + m + ... + m^{\\tau-2}) x g_{t+1} + ... + (1 + m + ... + m^{\\tau-T}) x g_{t+T-1} + ...], where \\tau > T\n\n - With the momentum correction and momentum factor masking\n we clear the local u_{t} to prevent the delayed gradients from misleading the optimization after they are used for the update.\n w_{t+T} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1} + ...], where \\tau > T", " Thank you for your question.\n This example is to show how the momentum correction works, and therefore we do not consider the staleness effect in this example. \n Suppose the last update is at iteration t-1, the next update at iteration t+T-1, and we only consider the gradients { g_{t}, g_{t+1}, ..., g_{t+T-1} }\n - Dense Update\n w_{t+T} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + ... + m^{\\tau-1}) x g_{t} + (1 + m + ... + m^{\\tau-2}) x g_{t+1} + ... + (1 + m + ... + m^{\\tau-T}) x g_{t+T-1} + ...], where \\tau > T\n\n - Only local gradient accumulation\n the coefficients of { g_{t}, g_{t+1}, ..., g_{t+T-1} } are always the same.\n w_{t+T} = w_{t} - lr x [ ... + 1 x g_{t} + 1 x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + m^2 + ... + m^{\\tau-T}) x (g_{t} + g_{t+1} + ... + g_{t+T-1}) + ...]\n\n - With the momentum correction,\n the coefficients of { g_{t}, g_{t+1}, ..., g_{t+T-1} } are always the same as the dense update.\n w_{t+T} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + ... + m^{\\tau-1}) x g_{t} + (1 + m + ... + m^{\\tau-2}) x g_{t+1} + ... + (1 + m + ... + m^{\\tau-T}) x g_{t+T-1} + ...], where \\tau > T\n\n - With the momentum correction and momentum factor masking\n we clear the local u_{t} to prevent the delayed gradients from misleading the optimization after they are used for the update.\n w_{t+T} = w_{t} - lr x [ ... + (1 + m + ... + m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1}]\n w_{t+\\tau} = w_{t} - lr x [ ... + (1 + m + ... 
+ m^{T-1}) x g_{t} + (1 + m + ... + m^{T-2}) x g_{t+1} + ... + 1 x g_{t+T-1} + ...], where \\tau > T", "Once we apply momentum factor masking, the momentum correction seems to become useless. The accumulated discounting factor in Eq. (6) is masked and becomes the same as in the original scheme. The momentum factor mask is not clearly explained; please clarify it. As the reviewers commented, it seems that the momentum factor mask resets the momentum correction. From the content, this is really confusing! Thanks a lot. \n", "Thanks for the comments. The accuracy degradations on ImageNet are quoted from Table 2 of AdaComp [1]:\nResNet18: baseline top1 error=32.41%, AdaComp top1 error=32.87% (0.46% accuracy degradation) \nResNet50: baseline top1 error=28.91%, AdaComp top1 error=29.15% (0.24% accuracy degradation)\n\nIn our DGC work:\nResNet50: baseline top1 error=24.04%, DGC top1 error=23.85% \n\nWe respect your argument and would be happy to adjust the citation to your paper. However, we believe ImageNet results are more interesting than MNIST. The 0.5% Top1 accuracy degradation on ImageNet is significant, not noise. Fully maintaining the accuracy on ImageNet at a much higher compression ratio is not easy, while the bag of four techniques introduced in DGC achieved this.\n\nThe worker-scalability of deep gradient compression is described in Figure 6 with up to 64 workers. \n\nReferences:\n[1] Chen, Chia-Yu, et al. \"AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training.\" arXiv preprint arXiv:1712.02679 (2017).", "- Therefore, we do not keep a running sum of v_t over the whole history, but keep a running sum of u_t. v_t is the running sum result and will be cleared after update (with or without momentum factor masking). For example, at iteration t-1, \nu_{t-1} = m^{t-2} g_{1}+ … + m g_{t-2} + g_{t-1}, \nv_{t-1} = (1+…+m^{t-2}) g_{1} + … + (1+m) g_{t-2} + g_{t-1}. \nUpdate, w_{t} = w_{1} – lr x v_{t-1}\nAfter update, v_{t-1} = 0. \nNext iteration, \nu_{t} = m^{t-1} g_{1} + … + m g_{t-1} + g_{t}, \nv_{t} = m^{t-1} g_{1} + … + m g_{t-1} + g_{t}.\nUpdate, w_{t+1} = w_{t} – lr x v_{t} \n = w_{1} – lr x (v_{t-1} + v_{t} )\n = w_{1} - lr x [ (1+…+m^{t-1}) g_{1} + … + (1+m) g_{t-1} + g_{t} ]\nWhich is the same as the dense momentum SGD.\n\nIn the momentum factor masking section, both v_t and u_t get reset after triggering a communication. Why does only v_t get reset in your example?", "We thank the reviewer for the comments.\n\n - Several Figures and Tables are never referenced in the text, making it a little harder to properly follow text. Pointing to them from appropriate places would improve clarity I think.\n\nWe revised our paper. All the figures and tables are referenced properly in the text.\n\n - Algorithm 1 line 14: You never seem to explain what is sparse(G). Sec 3.1: What is it exactly that gets communicated? How do you later calculate the Compression Ratio?\n\nWe have changed the name of the function to encode(G). The encode() function packs 32-bit nonzero gradient values and 16-bit run lengths of zeros in the flattened gradients. The encoded sparse gradients get communicated. These are described in Sec 3.1 now.\nThe compression ratio is calculated as follows:\n The Gradient Compression Ratio = Size[ encode( sparse( G_k ) ) ] / Size [G_k]\nIt is defined in Sec 4.1 now.\n\n - Sec 3.2 you mention 1% loss of accuracy. 
A pointer here would be good, at that point it is not clear if it is in your work later, or in another paper.\n\nWe pointed to Figure 3(a) in the updated draft, and also cited the AdaComp paper [1].\n\n - Pointer to Table 1 at the end of Sec 3 would probably be ideal along with some summary comments.\n\nWe made a summary at the end of Sec 3 and added Appendix D to show the overall algorithm of DGC in the updated draft.\n\n - I find it really confusing why you sometimes compare against Gradient Dropping, sometimes against TernGrad, sometimes against neither, sometimes include Gradient Sparsification with momentum correction (not clear again what is the difference from DGC).\n\nBecause related work didn't cover them all: Gradient Dropping [2] only performed experiments on a 2-layer LSTM for NMT, and a 3-layer DNN for MNIST; TernGrad [3] only performed experiments on AlexNet, GoogleNet and VGGNet. Therefore, we compared our AlexNet result with TernGrad. \n\nDGC contains not only momentum correction but also momentum factor masking and warm-up training. Momentum correction and Local gradient clipping are proposed to improve local gradient accumulation. Momentum factor masking and warm-up training are proposed to overcome the staleness effect. Comparison between Gradient Sparsification with momentum correction and DGC shows their respective impact on training.\n\n - Why do you only work with 99.9% sparsity? Does 99% with 64 training nodes lead to almost dense total updates, making it inefficient in your communication model? If yes, does that suggest a scaling limit in terms of number of training nodes? If not, how important is the 99.9% sparsity if you care about communication cost dominating the total runtime?\n\nYes, 99% with 128 training nodes leads to almost dense total updates, making it inefficient in communication. The scaling limit N in terms of the number of training nodes depends on the gradient sparsity s: N ≈ 1/(1-s). When the gradient sparsity is 99.9%, the scaling limit is 1024 training nodes.\n\n - When you say what you do has the effect of increasing stepsize, why don't you just increase the stepsize? What would be your reason for using DGC as opposed to just increasing the batch size?\n\nSince the memory on a GPU is limited, the way to increase the stepsize is to increase the number of training nodes. Previous work on increasing the stepsize focuses on how to deal with very large mini-batch training, while our work focuses on how to reduce the communication cost among the increased number of nodes under poor network bandwidth. DGC can be considered as increasing the stepsize temporally on top of increasing the actual stepsize spatially.\n\nReferences:\n[1] Chen, Chia-Yu, et al. \"AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training.\" arXiv preprint arXiv:1712.02679 (2017).\n[2] Aji, Alham Fikri, and Kenneth Heafield. Sparse Communication for Distributed Gradient Descent. In Empirical Methods in Natural Language Processing (EMNLP), 2017.\n[3] Wen, Wei, et al. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. In Advances in Neural Information Processing Systems, 2017.", "Hi, Wei. Thank you for your comments.\n\n First of all, all the hyper-parameters, including the learning rate and momentum, are the same as the default settings.\n\n - The loss because of TernGrad is just 0.04% instead of 0.89%? 
\n\n In the paper of TernGrad [1], the baseline AlexNet is trained with dropout ratio of 0.5, while the TernGrad AlexNet is trained with dropout ratio of 0.2. The paper claims that quantization introduces randomness and less dropout ratio avoids over-randomness. However, when we trained the baseline AlexNet with dropout ratio of 0.2, we gained 1 point improvement in top-1 accuracy. It indicates that the TernGrad might incur more loss of accuracy than expected. Therefore, to be fair, we use the dropout ratio of 0.2 in all experiments relating to AlexNet.\n\n - does the same warmup scheme work in general for all experiments?\n\n Yes. Warm-up training takes only 4 out of 90 epochs for ImageNet, 1 out of 70 epochs for Librispeech. The gradient sparsity increases exponentially 75% -> 93.75% -> 98.4375% -> 99.6%.\n\nReferences:\n[1] Wen, Wei, et al. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. In Advances in Neural Information Processing Systems, 2017.", " We thank the reviewer for the comments.\n\n - What exactly is the problem without this correction? Can the issue be described more precisely?\n\nWe already revised our paper, and described the momentum correction more precisely in Section 3.2. Basically, the momentum correction performs the momentum SGD without update locally and accumulates the velocity u_t locally. \n\n - What exactly is the issue of Gradient clipping?\n\nWhen training RNN, people usually use Gradient Clipping to avoid the exploding gradient problem. The hyper-parameter for Gradient Clipping is the threshold thr_G of the gradients L2-norm. The gradients for optimization is scaled by a coefficient depending on their L2-norm. \n\nBecause we accumulate gradients over iterations on each node independently, we need to scale the gradients before adding them to the previous accumulation, in order to scale the gradients by the correct coefficient. The threshold for local gradient clipping thr_Gk should be set to N^{-1/2} x thr_G. We add Appendix C to explain how N^{-1/2} comes.\n\n - What is the expected performance of the 1-bit SGD method proposed by Seide et al.?\n\n1-bit SGD [1] encodes the gradients as 0 or 1, so the data volume is reduced by 32x. Meanwhile, since 1-bit SGD quantizes the gradients column-wise, a floating-point scaler per column is required, and thus it cannot yield much speed benefit on convolutional neural networks.\n\n - What exactly is \"layer normalization\"\n\n“Layer Normalization” is similar to batch normalization but computes the mean and variance from the summed inputs in a layer on a single training case. [2]\n\n-\t What are \"drastic gradients\"?\n\nIt means the period when the network weight changes dramatically.\n\nReferences:\n[1] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Fifteenth Annual Conference of the International Speech Communication Association, 2014.\n[2] J. Lei Ba, J. R. Kiros, and G.E.Hinton, Layer Normalization. ArXiv e-prints, July 2016", "(continue)\n\n - However this paper added several hyper-parameters (momentum correction, learning rate correction, warm up, and momentum factor mask etc..)\n\nThe *only* hyper-parameters introduced by DGC are the warm-up training strategy. However, we use the same settings in all experiments as answered above. Momentum correction and Momentum factor masking are equation changes, they do not introduce any hyper-parameters. 
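As an aside on that warm-up strategy: the schedule quoted in the answers above — gradient sparsity 75% -> 93.75% -> 98.4375% -> 99.6% over the warm-up epochs, then 99.9% — can be written down in a few lines. The sketch below is our reconstruction for illustration, not the authors' code; the per-epoch granularity is an assumption.

def warmup_sparsity(epoch, warmup_epochs=4, final_sparsity=0.999):
    # exponentially increasing sparsity during warm-up, then a fixed 99.9%
    if epoch >= warmup_epochs:
        return final_sparsity
    return 1.0 - 0.25 ** (epoch + 1)

print([warmup_sparsity(e) for e in range(6)])
# [0.75, 0.9375, 0.984375, 0.99609375, 0.999, 0.999]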
\n\n - The compression rate and performance ignore several important factors such as sparsity representation, different directions of compression, computation overhead (parallel or not)\n\nNo, Figure 6 in Sec 5 already takes the sparsity representation, computation overhead, and communication overhead into account. \n \n - Results are from simple model only\n\nNo, we have broadly experimented on state-of-the-art, complex models across CNNs, RNNs, and CNN-RNN mixtures. We extensively experimented with ResNet110 on Cifar10, AlexNet/ResNet50 on ImageNet, a 2-layer LSTM with a size of 195MB on PTB, and a 7-layer GRU following a 3-layer CNN (DeepSpeech) with a size of 488MB on LibriSpeech. \nIn comparison, previous work Gradient Dropping [4] performed experiments on a 2-layer LSTM with a size of 476MB for NMT, and a 3-layer DNN with a size of 80MB on MNIST; \nTernGrad [3] performed experiments on AlexNet, GoogleNet, and VGGNet on ImageNet; \nAdacomp [2] performed experiments on a 4-layer CifarCNN with a size of 0.3MB on Cifar10, AlexNet, ResNet18, ResNet50 on ImageNet, BN50-DNN with a size of 43MB on BN50, and a 2-layer LSTM with a size of 13MB on the Shakespeare Dataset.\n\nReferences:\n[1] Cormen, Thomas H. Introduction to algorithms. MIT press, 2009\n[2] Chen, Chia-Yu, et al. \"AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training.\" arXiv preprint arXiv:1712.02679 (2017).\n[3] Wen, Wei, et al. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. In Advances in Neural Information Processing Systems, 2017.\n[4] Aji, Alham Fikri, and Kenneth Heafield. Sparse Communication for Distributed Gradient Descent. In Empirical Methods in Natural Language Processing (EMNLP), 2017.", "Thank you for your suggestions.\nWe appreciate your reminding us of citing these excellent papers, and we have already cited these works in the newest version of our paper.\n\n - Although sorting could be Nlog(N), this method is not easy to be parallel. Thus, large computation overhead still exists.\n\nWe use top-k selection, *NOT* sorting. The complexity of top-k selection is O(N), not O(NlogN) [1]. To further reduce computation, we perform the top-k selection on samples in stride. The sample rate is 0.1% to 1%. In practice, without any code optimization, the extra computation takes less than 10% of the total communication time when training AlexNet with 64 nodes under 1Gbps Ethernet. We have already included this in Figure 6.\n\n - From previous works, gradient residue compression is pretty robust, it is not surprising that the compression rate is high.\n\nIn fact, gradient residue compression does not preserve the accuracy of the model.\nFigure 4 in the related work [2] shows that gradient residue compression brings around 2% to 5% loss of accuracy when the compression ratio is less than 500x, and even damages the training when the compression ratio is higher. It is our bag of 4 techniques that enables no loss of accuracy.\n\n - What happened in the direction from parameter to workers? This could reduce their compression rate by learner number.\n\nFirst, we use the all-reduce communication model in the system performance analysis. \n\nWith N=2^k workers and sparsity s, we need k steps to gather these gradients. The density doubles at every step, so the average communication volume is \\sum_{i=0}^{k-1} 2^{i}*M*s/k = (N-1)/log(N)*M*s. The average density increases sub-linearly with the number of nodes by N/log(N), not exponentially. 
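To make that arithmetic concrete, the volume estimate above can be checked with a small self-contained script (a sketch of ours, assuming N is a power of two; M and s below are only illustrative values):

import math

def avg_allreduce_volume(num_workers, M, s):
    # average per-step sparse volume: sum_{i=0}^{k-1} 2^i * M * s / k, with N = 2^k
    k = int(math.log2(num_workers))
    return sum(2 ** i * M * s for i in range(k)) / k

M, s = 61_000_000, 0.001  # e.g. an AlexNet-sized gradient at 99.9% sparsity
for n in (4, 16, 64):
    assert math.isclose(avg_allreduce_volume(n, M, s), (n - 1) / math.log2(n) * M * s)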
\n\nWe already considered this non-ideal effect, including the extra computation cost on top-k selection, in the second paragraph of Section 5: \"the density of sparse data doubles at every aggregation step in the worst case. However, even considering this effect, Deep Gradient Compression still significantly reduces the network communication time, as implied in Figure 6.\" \"For instance, when training AlexNet with 64 nodes, conventional training only achieves nearly 30× speedup with 10Gbps Ethernet (Apache, 2016), while with DGC, more than 40× speedup is achieved even with 1Gbps Ethernet\". With 1Gbps Ethernet, the speedup of TernGrad is 30x, our worst case is 44x (considering this non-ideal effect), and our best case is 58x. We reported the worst case, which is 44x speedup (see Figure 6).\n\nWhen it comes to the parameter server communication model, we only pull the sum of sparse gradients, which is the same as TernGrad [3]. With a gradient compression ratio of 500x, it requires at least 500 training nodes to pull the same data size as in the dense scenario.\n \n - What is the sparse representation?\n\nWe already discussed the sparse representation strategy in Section 3.1. We used simple run-length encoding: we pack the 32-bit float nonzero gradient values and 16-bit run lengths of zeros of the flattened gradients. The overhead is only 0.5x, not 10x. We already considered the overhead when reporting the compression ratio. \n\n - How much warm up period do you need to use for each examples?\n\nWarm-up training takes only 4 out of 90 epochs for ImageNet, 1 out of 70 epochs for LibriSpeech, which is only 1.4%-4% of the total training epochs. The time impact of warm-up training is negligible. \n \n - Is the compression layer-wise or whole model (including FC layers and convolution layers)?\n\nUnlike AdaComp [2], which has \"~200X for fully-connected and recurrent layers, and ~40X for convolutional layers\", our compression rate is the same for the WHOLE model, where sparsity = 99.9% for ALL layers.", "Hi DeepGradientCompression,\n\nThis is an interesting paper. I think that it is misleading to mention that AdaComp shows 0.2-0.4% degradation. AdaComp sometimes actually shows ~0.3% improvement, and it is always a <0.5% difference compared to the baseline. The difference is from the randomness of SGD, not from AdaComp itself. It is not very meaningful to quote an SGD difference within 0.5%. Please correct it.\n\nBy the way, I have some questions: how many workers do you use in the experiments? What is the worker-scalability of deep gradient compression?\n\nBest,\n\nChia-Yu\n\n\n\n", "This is a comment after the authors updated a revised version of the paper.\n\nI now understand the proposed momentum correction method (with some effort). However, the presentation can be further improved. I list my suggestions here:\n\nClarify the approach:\n- The author should emphasize that the sparsification + thresholding have an async nature, as the update only triggers when the sparsity condition applies, causing a mismatch problem.\n- It is a bit hard to understand the vanilla baseline(I might not do it in that way). It can be explained as local gradient aggregation and only apply momentum update at the trigger point. The update rule is no longer equivalent to SGD with momentum.\n- The paper did not explicitly mention that the value of v_t gets reset after triggering a communication, it should be explicitly mentioned in the update equation.\n\nJustify the correctness:\nRough intuition gives most of the current justification. 
We can know that one is better than another, because of the equivalence.\nThe author should try to do more formal justification, at least in some special cases, instead of leaving it as a debt for yourself or the potential readers \n\n- For example, the author should be able to prove that, under only one machine and one weight value. The weight value after K updates using the vanilla approach vs. the new approach.\n-A fundamental question that needs to be answered, is that why thresholding trigger method(which is async) works as good as sync SGD. A proof sketch to show the loss change after K step update would shed light on this and may give insights on what a good update rule is(possibly borrow some analysis from aync update and stale gradient)\n", "We thank the reviewer for the comments. We have revised our paper.\n\n - Did you keep a running sum of v_t over the whole history? Such a sum without damping (the m term in the momentum update) is likely to lead to the growing dominance of noise and divergence. Do you do non-sparse global synchronization of the momentum term? It seems that the local update of momentum is likely going to diverge, and the momentum masking somehow resets that.\n\nWe already revised our paper, and described the momentum correction more precisely in Section 3.2. \n\nBasically, the momentum correction performs the momentum SGD without update locally, and accumulates the velocity u_t locally. The optimization performs SGD with v_t instead of momentum SGD with G_t after momentum correction. We added Figure 2 to illustrate the difference.\n\nTherefore, we do not keep a running sum of v_t over the whole history, but keep a running sum of u_t. v_t is the running sum result and will be cleared after update (with or without momentum factor masking). For example, at iteration t-1, \nu_{t-1} = m^{t-2} g_{1}+ … + m g_{t-2} + g_{t-1}, \nv_{t-1} = (1+…+m^{t-2}) g_{1} + … + (1+m) g_{t-2} + g_{t-1}. \nUpdate, w_{t} = w_{1} – lr x v_{t-1}\nAfter update, v_{t-1} = 0. \nNext iteration, \nu_{t} = m^{t-1} g_{1} + … + m g_{t-1} + g_{t}, \nv_{t} = m^{t-1} g_{1} + … + m g_{t-1} + g_{t}.\nUpdate, w_{t+1} = w_{t} – lr x v_{t} \n = w_{1} – lr x (v_{t-1} + v_{t} )\n = w_{1} - lr x [ (1+…+m^{t-1}) g_{1} + … + (1+m) g_{t-1} + g_{t} ]\nWhich is the same as the dense momentum SGD.\n\n - Did you perform local aggregations of gradients between GPU cards before sending them out for all-reduce over the network?\n\nYes.", "Thank you for your comments.\nThe batch size is 80 and the number of iterations per epoch is 332.", "Thank you for your comments.\n - For the parameter server, communication is reduced when pushing the sparse gradient to the parameter server. Is it possible to pull the sparsified gradient and apply it locally? \n\n Yes, you can pull the sparsified gradient and apply it locally.\n\n - For All-reduce, since the sparse gradients may be of different sizes, the standard MPI All-reduce operation won't work for this. Do you implement your own All-reduce operation?\n\n In our experiments, we force the size of the sparse gradients to be the same as 0.1% of the number of gradients. We use hierarchical top-k selection not only to speed up sparsification but also to control the sparse gradient size. If the number of selected gradients is smaller than 0.1%, we fill the buffer with zeros. If it is much larger, we re-calculate the threshold. However, an efficient All-reduce operation for sparse communication is part of our future work.\n", "This paper has strong experimental results. Momentum and learning rate correction make sense for an effectively larger mini-batch size. 
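For readers following along, the momentum-corrected sparse update discussed throughout this thread condenses into a short single-node sketch. This is our reading of the rebuttals, not the authors' released code; the top-k rule and the variable names are ours.

import numpy as np

def dgc_step(w, grad, u, v, lr, m, k):
    u = m * u + grad                          # local velocity (momentum correction)
    v = v + u                                 # locally accumulated update
    mask = np.zeros_like(v, dtype=bool)
    mask[np.argsort(np.abs(v))[-k:]] = True   # send only the top-k entries
    w = w - lr * np.where(mask, v, 0.0)       # multi-node: all-reduce the sparse part
    u[mask] = 0.0                             # momentum factor masking
    v[mask] = 0.0                             # clear what has been sent
    return w, u, v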
However there are some suggestions about this work.\n\n\n1. This submission should cite other papers well. The main algorithm of this submission is very similar to Dryden's work in 2016 (Communication quantization for data-parallel training of deep neural networks) and Strom's in 2015 (Strom, N. 2015. Scalable distributed dnn training using commodity gpu cloud computing. In Sixteenth Annual Conference of the International Speech Communication Association). Moreover, recently an ArXiv paper (AdaComp : Adaptive Residual Gradient Compression for Data-Parallel Distributed Training, accepted in AAAI18) also reported a similar gradient compression scheme and shows excellent experimental results on many different NNs (ResNet50, ResNet18, AlexNet, RNN, DNN, LSTM etc..). This paper should cite relevant work properly. \n\n2. In Section 5, the authors proposed sampling to reduce sorting time (the same as Dryden's work in 2016). Although sorting could be Nlog(N), this method is not easy to be parallel. Thus, large computation overhead still exists.\n\n3. Learning rate correction (estimate T), momentum correction, momentum factor mask, and warm up are very empirical. From previous works, gradient residue compression is pretty robust, it is not surprising that the compression rate is high. \n\n4. The paper just focuses on compression from workers to the parameter server. What happened in the direction from parameter to workers? This could reduce their compression rate by learner number (as described in TernGrad).\n\n5. What is the sparse representation? The overhead of sparse representation should be discussed. It is easy to lose compression rate by >10x here. The high compression rate may be confusing if the detailed sparse representation is not discussed. \n\n6. How much warm up period do you need to use for each examples? Warm-up makes the experiments much easier, since the authors do not clearly mention the warm-up epoch number. \n\n7. Is the compression layer-wise or whole model (including FC layers and convolution layers)?\n\nIn general, this paper reused the previous gradient-residue idea and added momentum and learning rate correction for an effectively larger mini-batch size. This paper did a lot of experiments and has strong experimental results for NN convergence. However this paper added several hyper-parameters (momentum correction, learning rate correction, warm up, and momentum factor mask etc..) and should clearly list the values of these parameters in the test cases. It is also important to guide users on how to set these extra hyper-parameters. The compression rate and performance ignore several important factors such as sparsity representation, different directions of compression, computation overhead (parallel or not); results are from simple model only. Look forward to seeing more exciting papers from this team!\n\n", "I took a look at the other reviews after they went online. Most of them give an accept to the paper, given the strong empirical results provided.\n\nHowever, the same problem is mentioned by all the reviewers (it is unclear why momentum correction is needed and what the intuition behind it is). I would like to emphasize that the current rule seems likely to lead to the growing dominance of noise and divergence (because there is no damping).\n\nI strongly encourage the author to clarify this. 
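As one illustration of the kind of single-weight check being requested, the recursions quoted in the rebuttals above can be scripted directly (again a sketch of ours, not the authors'): at the trigger point, the momentum-corrected sparse update reproduces dense momentum SGD exactly.

import numpy as np

m, lr, T = 0.9, 0.1, 5
g = np.random.randn(T)            # per-step gradients of a single weight

w_dense, u = 0.0, 0.0             # dense momentum SGD: update every step
for gt in g:
    u = m * u + gt
    w_dense -= lr * u

w_sparse, u, v = 0.0, 0.0, 0.0    # corrected sparse update: accumulate, fire once
for gt in g:
    u = m * u + gt                # local velocity
    v += u                        # accumulated update
w_sparse -= lr * v                # single triggered update at step T

print(np.isclose(w_dense, w_sparse))   # True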
We cannot simply accept a paper for a great empirical result without justification of why the rule works (all the reviewers are confused by this point) ", "In the appendix, you mention two ways to aggregate gradients: parameter server and All-reduce. \n1. For the parameter server, communication is reduced when pushing the sparse gradient to the parameter server. Is it possible to pull the sparsified gradient and apply it locally? \n2. For All-reduce, since the sparse gradients may be of different sizes, the standard MPI All-reduce operation won't work for this. Do you implement your own All-reduce operation?", "\nHello!\n\nCould you please tell us what the batch size and the number of iterations per epoch on a node were during distributed training of the language model on PTB? This is necessary to get an idea of the total amount of communication that was sufficient to reach perplexity 72.24 at the end of the 40th epoch.\n\nThank you!", "Dear Kenneth Heafield,\n\n Thank you for clarifying the Gradient Dropping, it's very helpful. We will describe the Gradient Dropping in a more rigorous way in the final version.\n We also appreciate your reminding us of citing these two excellent papers.\n Here are the answers to your questions.\n\n - Warm-up training works in general, so was it included in your baseline experiments as well? \n\n Warm-up training was previously used for improving the large minibatch training proposed by Goyal et al. They warm up the learning rate linearly in the first several epochs. However, we are the first to warm up the sparsity during the gradient pruning. Therefore, only experiments with DGC adopted warm-up sparsity. It is a simple but effective technique.\n\n - \"Implementing DGC requires gradient sorting.\" To be pedantic, it requires top-k selection which we have been talking to NVIDIA about implementing more efficiently in the context of beam search. I like the hierarchical add-on to the sampling we've been doing too; if too few gradients pass the threshold, do you sample more? \n\n We indeed use top-k selection instead of sorting. We do not sample more if too few gradients are selected. Since hierarchical selection is designed to control the communication data size, we will perform top-k selection twice only when too many gradients pass the threshold.", "Hi from TernGrad, \n\nImpressive result, really! \n\nFor the top-1 accuracy in Table 3, I guess the 0.89% accuracy difference of TernGrad comes from the different ways we trained the standard AlexNet? In our work, the baseline AlexNet is trained using the same hyper-parameters of caffe (https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet), and converges to 57.32%. Your baseline got 58.17% because you used different training hyper-parameters in Wilber 2016 as you pointed out?\nReplacing floating SGD by TernGrad, it converges to 57.28%. The loss because of TernGrad is just 0.04% instead of 0.89%? \n\nIs it easy to implement all of the techniques? Do you plan to open source it? I may want to try this.\nThe core of TernGrad can be done within several lines (https://github.com/wenwei202/terngrad/blob/master/terngrad/inception/bingrad_common.py#L159-L166).\n\nAnd I am just curious about how the warmup stage generalizes: does the same warmup scheme work in general for all experiments? I am asking since we may not want to tune the warmup stage several times when training a DNN, which essentially is wasting training time. 
TernGrad converges with the same hyper-parameters of standard SGD.\n\nThanks,\n-Wei ", "I'm Kenneth Heafield, one of the authors cited. \n\nIt's an interesting bag of 4 tricks here and I will likely use them going forward. \n\n\"Gradient Dropping requires adding a layer normalization.\" Figure 5 in our paper shows that gradient dropping works, admittedly slower, without layer normalization if we determine the threshold locally to each parameter/matrix rather than globally. \n\nI feel like you're giving us too much credit. Strom https://s3-us-west-2.amazonaws.com/amazon.jobs-public-documents/strom_interspeech2015.pdf and Dryden et al https://ornlcda.github.io/MLHPC2016/papers/3_Dryden.pdf deserve to be cited too. \n\nWarm-up training works in general, so was it included in your baseline experiments as well? \n\n\"incurring 0.3% loss of accuracy on a machine translation task\" It would be better to say BLEU score here, rather than a vague metric. Parsing people fight over 0.3% while translation people shrug over 0.3% BLEU. \n\n\"Implementing DGC requires gradient sorting.\" To be pedantic, it requires top-k selection which we have been talking to NVIDIA about implementing more efficiently in the context of beam search. I like the hierarchical add-on to the sampling we've been doing too; if too few gradients pass the threshold, do you sample more? \n\nAbstracts should compare to the strongest baseline, not just the stock baseline. \n\nLet's talk when you're less anonymous. ", "We really appreciate your comments.\n\n- Equation 1 & 2: shouldn’t k start from 1 if N is the number of training nodes?\n Yes. It's a typo. k should start from 1.\n\n- Related Work section: Graidient typo\n- Section 4.2: csparsity typo\n Thank you for pointing out these typos. They should be \"Gradient\" and \"sparsity\".\n\n- Line 8, 9 in Algorithm 4 in Appendix B: shouldn’t line 8 be U <- mU + G and line 9 be V_t <- V_{t-1} + mU + G\n These two lines are equivalent to those in Algorithm 4 in Appendix B. \n\n- Is \"Gradient Size\" referring to the average size of the gradient that's larger than the threshold?\n Yes. \"Gradient Size\" is referring to the size of the sparse gradient, which contains both the gradient values that are larger than the threshold and 16-bit index distances when it comes to DGC.", "We thank the reviewer for the comments.\n\n(1) \nFirst, warm-up training takes only 4 out of 90 epochs for ImageNet, 1 out of 70 epochs for LibriSpeech, which is only 1.4%-4% of the total training epochs. Therefore, the impact of warm-up training is negligible.\n\nSecond, during warm-up training the gradient is also very sparse. On ImageNet, the sparsity for the 4 warm-up epochs is: 75% -> 93.75% -> 98.4375% -> 99.6% (increasing exponentially), then 99.9% for the remaining 86 epochs. The same warm-up sparsity rule applies to the first four quarter epochs on LibriSpeech, then 99.9% for the remaining 69 epochs. \n\n\n(2) Yes, we already considered the larger communication volume of summed gradients. \n\nWith N=2^k workers and sparsity s, we need k steps to gather these gradients. The density doubles at every step, so the average communication volume is \\sum_{i=0}^{k-1} 2^{i}*M*s/k = (N-1)/log(N)*M*s. The average density increases sub-linearly with the number of nodes by N/log(N), not exponentially. \n\nWe already considered this non-ideal effect in the second paragraph of Section 5: \"the density of sparse data doubles at every aggregation step in the worst case. 
However, even considering this effect, Deep Gradient Compression still significantly reduces the network communication time, as implied in Figure 6.\" \"For instance, when training AlexNet with 64 nodes, conventional training only achieves nearly 30× speedup with 10Gbps Ethernet (Apache, 2016), while with DGC, more than 40× speedup is achieved even with 1Gbps Ethernet\". With 1Gbps Ethernet, the speedup of TernGrad is 30x, our worst case is 44x (considering this non-ideal effect), and our best case is 58x. We reported the worst case, which is 44x speedup (see Figure 6).", "Thank you for a great paper! The authors' intuition really shines through. I just have a few clarifying points:\n\n- Equation 1 & 2: shouldn’t k start from 1 if N is the number of training nodes?\n- Related Work section: Graidient typo\n- Section 4.2: csparsity typo\n- Line 8, 9 in Algorithm 4 in Appendix B: shouldn’t line 8 be U <- mU + G and line 9 be V_t <- V_{t-1} + mU + G\n- Is \"Gradient Size\" referring to the average size of the gradient that's larger than the threshold?\n\nedit 1: add question about gradient size.", "For the compression ratio in Tables 3 & 4, does this work consider the larger communication volume during warm-up training?\nPlease clarify how many iterations it took to \"warm-up\" and what the sparsity of gradients was during warm-up. If it did warm up for 20% of total epochs with 50% sparsity, the compression ratio is bounded by 10x.\n\nDid this work consider a larger communication volume of summed gradients? Suppose there are gradients from k workers to sum up and the sparsity of the gradients is s; then the expectation of the sparsity of the summed gradients is s^k, which decreases exponentially with k. Please clarify this.\n\nThanks" ]
[ -1, 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkhQHMW0W", "iclr_2018_SkhQHMW0W", "iclr_2018_SkhQHMW0W", "iclr_2018_SkhQHMW0W", "ryuVTiCWG", "Byn_brkfG", "r1U-Go0WG", "HkRogNzGf", "S1mTqdWzM", "iclr_2018_SkhQHMW0W", "HySlhQ1GM", "BkCsfi0bG", "rJ9crpElM", "Bk5Gd4sxf", "B1lk3Ojxf", "S1SGgoA-f", "SkhtLE9Zz", "iclr_2018_SkhQHMW0W", "BkCsfi0bG", "BJDSSJ5-M", "S1vhAambf", "ryR_PyE-f", "iclr_2018_SkhQHMW0W", "rJmrmQ5lG", "iclr_2018_SkhQHMW0W", "iclr_2018_SkhQHMW0W", "B1wc_Z5JG", "iclr_2018_SkhQHMW0W", "iclr_2018_SkhQHMW0W", "H1cZgIYyM", "HJOI6ndJz", "iclr_2018_SkhQHMW0W", "iclr_2018_SkhQHMW0W" ]
iclr_2018_B14TlG-RW
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
accepted-poster-papers
This work replaces the RNN layer of SQuAD models with self-attention and convolution, achieving a big speed up and performance gains, particularly with data augmentation. The work is mostly clearly presented; one reviewer found it "well-written", although there was a complaint that the work did not clearly separate out the novel aspects. In terms of results the work is clearly of high quality, producing top numbers on the shared task. There were some initial complaints of only using the SQuAD dataset, but the authors have now included additional results that diversify the experiments. Perhaps the largest concern is novelty. The idea of non-RNN self-attention is now widely known, and there are several systems that are applying it. Reviewers felt that while this system does it well, it is maybe less novel or significant than other possible work.
train
[ "B1lKKF8Bz", "H1hw3bgSz", "S1yRD584M", "Hkx2Bz9lM", "rycJHDIgf", "Hyqx3y5xz", "r1xzqspQM", "BJgkoz9Xz", "ryYnnsdXG", "ByOo9od7M", "HkxgcsdQG", "Hy4sFiuQM", "H1d6uidmf", "rkOlY3WXf", "BkCXSXOyf", "HJg2Fk_yf" ]
[ "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Thank you for your paper! We really liked your approach to accelerate inference and training times in QA. \nI have one question regarding the comparison with BiDAF. On the article, you mention that you batched the training examples by paragraph length in your model, but it is not clear whether you did the same for BiDAF (the implementation on GitHub offers the flags --cluster and --len_opt for that).\nThat is an important consideration because that change alone has a significant impact on training and inference times. In fact, by batching the inputs in that way we have fully trained BiDAF in 6h30 (12 epochs) on an NVidia Titan X, which is more than two times faster than your reported time of 15h on a more powerful P100.\nCould you please clarify this point in the article?", "The performance of the model on SQuAD dataset is impressive. In addition to the performance on the test set, we are also interested in the sample complexity of the proposed model. Currently, the SQuAD dataset splits the collection of passages into a training set, a development set, and a test set in a ratio of 80%:10%:10% where the test set is not released. Given the released training and dev set, we are wondering what would happen if we split the data in a different ratio, for example, 50% for training and the rest 50% for dev. We will really appreciate it if the authors can report the model performance (on training/dev respectively) under this scenario. \n", "I am happy with the rebuttal. I think this paper has good enough contributions to get published.\n\nI have revised my scores accordingly.", "Summary:\n\nThis paper proposes a non-recurrent model for reading comprehension which used only convolutions and attention. The goal is to avoid recurrent which is sequential and hence a bottleneck during both training and inference. Authors also propose a paraphrasing based data augmentation method which helps in improving the performance. Proposed method performs better than existing models in SQuAD dataset while being much faster in training and inference.\n\nMy Comments:\n\nThe proposed model is convincing and the paper is well written.\n\n1. Why don’t you report your model performance without data augmentation in Table 1? Is it because it does not achieve SOTA? The proposed data augmentation is a general one and it can be used to improve the performance of other models as well. So it does not make sense to compare your model + data augmentation against other models without data augmentation. I think it is ok to have some deterioration in the performance as you have a good speedup when compared to other models.\n\n2. Can you mention your leaderboard test accuracy in the rebuttal?\n\n3. The paper can be significantly strengthened by adding at least one more reading comprehension dataset. That will show the generality of the proposed architecture. Given the sufficient time for rebuttal, I am willing to increase my score if authors report results in an additional dataset in the revision.\n\n4. Are you willing to release your code to reproduce the results?\n\n\nMinor comments:\n\n1. You mention 4X to 9X for inference speedup in abstract and then 4X to 10X speedup in Intro. Please be consistent.\n2. 
In the first contribution bullet point, “that exclusive built upon” should be “that is exclusively built upon”.\n", "This paper proposes two contributions: first, applying CNNs+self-attention modules instead of LSTMs, which could result in significant speedup and good RC performance; second, enhancing the RC model training with passage paraphrases generated by a neural paraphrasing model, which could improve the RC performance marginally.\n\nFirstly, I suggest the authors rewrite the end of the introduction. The current version tends to mix everything together and makes the misleading claim. When I read the paper, I thought the speeding up mechanism could give both speed up and performance boost, and lead to the 82.2 F1. But it turns out that the above improvements are achieved with at least three different ideas: (1) the CNN+self-attention module; (2) the entire model architecture design; and (3) the data augmentation method. \n\nSecondly, none of the above three ideas are well evaluated in terms of both speedup and RC performance, and I will comment in details as follows:\n\n(1) The CNN+self-attention was mainly borrowing the idea from (Vaswani et al., 2017a) from NMT to RC. The novelty is limited but it is a good idea to speed up the RC models. However, as the authors hoped to claim that this module could contribute to both speedup and RC performance, it will be necessary to show the RC performance of the same model architecture, but replacing the CNNs with LSTMs. Only if the proposed architecture still gives better results, the claims in the introduction can be considered correct.\n\n(2) I feel that the model design is the main reason for the good overall RC performance. However, in the paper there is no motivation about why the architecture was designed like this. Moreover, the whole model architecture is only evaluated on the SQuAD dataset. As a result, it is not convincing that the system design has good generalization. If in (1) it is observed that using LSTMs in the model instead of CNNs could give on par or better results, it will be necessary to test the proposed model architecture on multiple datasets, as well as conducting more ablation tests about the model architecture itself.\n\n(3) I like the idea of data augmentation with paraphrasing. Currently, the improvement is only marginal, but there seems many other things to play with. For example, training NMT models with larger parallel corpora; training NMT models with different language pairs with English as the pivot; and better strategies to select the generated passages for data augmentation.\n\nI am looking forward to the test performance of this work on SQuAD.", "This paper presents a reading comprehension model using convolutions and attention. This model does not use any recurrent operation but it is not per se simpler than a recurrent model. Furthermore, the authors proposed an interesting idea to augment additional training data by paraphrasing based on off-the-shelf neural machine translation. On SQuAD dataset, their results show some small improvements using the proposed augmentation technique. Their best results, however, do not outperform the best results reported on the leader board.\n\nOverall, this is an interesting study on SQuAD dataset. I would like to see results on more datasets and more discussion on the data augmentation technique. At the moment, the description in section 3 is fuzzy in my opinion. Interesting information could be:\n- how is the performance of the NMT system? 
\n- how many new data points are finally added into the training data set?\n- what do ‘data aug’ x 2 or x 3 exactly mean?\n", "We have just added a new result on the adversarial SQuAD dataset [1]. In terms of robustness to the adversarial examples, our model is on par with the state-of-the-art model. Please see Section 4.1.5 of the latest version for more details.\n\nps: This addition is also partly motivated by Reviewer 1's promise to increase our score (to above 7). Until now, we have added 2 more benchmarks: TriviaQA & Adversarial SQuAD.\n\n[1] Jia Robin, Percy Liang. Adversarial Examples for Evaluating Reading Comprehension Systems. In EMNLP 2017.\n\nThanks!", "As we have included the result on the TriviaQA dataset as well, we hope the reviewer can reconsider the score, as promised in the original review. Thanks again for your suggestion to help us improve the paper!", "Dear Area Chair,\n\nWe have submitted the rebuttal and revision. Our rebuttal contains a general one to address the common concerns of the reviewers, and three separate ones to answer the individual questions for each reviewer.\n\nThanks! ", "We believe there are misunderstandings, which we have addressed below. We have also included more experimental results. \n\nQ: The reviewer said “I suggest the authors rewrite the end of the introduction. The current version tends to mix everything together and makes the misleading claim.”\nA: Thank you for the suggestions! We have revised the introduction to make our contributions clearer. Note that even though self-attention has already been used extensively in Vaswani et al., the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. The use of convolutions also allows us to take advantage of common regularization methods in ConvNets such as stochastic depth (layer dropout), which gives an additional gain of 0.2 F1 in our experiments.\nWe would like to point out that in this paper the use of CNN + self-attention is to speed up the model during training and inference. The speed-up leads to faster experimentation and allows us to train on more augmented data, contributing to the strong result on SQuAD.\n\nQ: The reviewer comments “I feel that the model design is the main reason for the good overall RC performance. However, in the paper there is no motivation about why the architecture was designed like this.”\nA: At a high level, our architecture is the standard “embedding -> embedding encoder -> attention -> modeling encoder -> output” architecture, shared by many neural reading comprehension models. Thus, we do NOT claim any novelty in the overall architecture. Traditionally, the encoder components are bidirectional LSTMs. Our motivation is to speed up the architecture by replacing the bidirectional LSTMs with convolution+self-attention for the encoders of both the embedding and modeling components. The context passages are over one hundred words long in SQuAD, so the parallel nature of CNN architectures leads to a significant speed boost for both training and inference. Replacing bidirectional LSTMs with convolution+self-attention is our main novelty.\n\nQ: The reviewer comments “it will be necessary to show the RC performance of the same model architecture, but replacing the CNNs with LSTMs. Only if the proposed architecture still gives better results, the claims in the introduction can be considered correct.”\nA: We think the reviewer might have misunderstood our claim. 
As mentioned above, we do NOT claim any novelty in the overall architecture, as it is a common reading comprehension model. We will make this point clearer in the revision. Our contribution, as we have emphasized several times, is to replace the LSTM encoders with convolution+self-attention, without changing the remaining components. We find the resulting model both fast and accurate. In fact, if we switch back to LSTM encoders, it will become BiDAF [1] or DCN [2], which are both slower (see our speedup experiments) and less accurate (see the leaderboard: https://rajpurkar.github.io/SQuAD-explorer/) than ours.\n\n[1] Bidirectional Attention Flow for Machine Comprehension. In ICLR 2017.\nMinjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi.\n[2] Dynamic Coattention Networks For Question Answering. ICLR 2017.\nCaiming Xiong, Victor Zhong, Richard Socher.\n\nQ: Results on one more dataset.\nA: We have conducted experiments on another Q&A dataset, TriviaQA, to verify that the effectiveness and efficiency of our model is general. In a nutshell, again, our model is 4x to 16x faster than the RNN counterparts, while outperforming the state-of-the-art single-paragraph-reading model by more than 3.0 in both F1 and EM. Please see the revision. \n\nQ: More results on data augmentation.\nA: Thanks for the suggestions! We indeed put more experiments in the revision and here are some interesting findings: \nTranslating to more languages can lead to more diverse augmented data, which further results in better generalization. Currently we try both French and German.\nThe sampling ratio of (original : English-French-English : English-German-English) during training matters. The best empirical ratio is 3:1:1.\n\nQ: Leaderboard result.\nA: We submitted our best model for test set evaluation on SQuAD, on Dec 20, 2017. Our single model (named “FRC”) is ranked 3rd among all single models in terms of F1 with F1/EM=84.6/76.2 (https://rajpurkar.github.io/SQuAD-explorer/). The performance gain is because we added more regularization to the model. Note that the two single models ranked above us have NOT been published yet: “BiDAF + Self Attention + ELMo” & “AttentionReader+”.", "From the reviewer’s comments, it is not immediately clear to us what the reviewer’s rationale for rejection is. All we know is that the reviewer wants to know more about the data augmentation approach. It would be great if the reviewer could elaborate more on the rejection rationale. \n\nWe believe our work is significant in the following aspects:\n(a) Our work is novel: we introduced a new architecture for reading comprehension and a data augmentation technique that yields non-trivial gain on a strong SQuAD model. Note that even though self-attention has already been used extensively in Vaswani et al., the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. 
The use of convolutions also allows us to take advantage of common regularization methods in ConvNets such as stochastic depth (layer dropout), which gives an additional gain of 0.2 F1 in our experiments.\n\n(b) Our model is accurate: we are currently ranked 3rd by F1 score on the SQuAD leaderboard among single models (note: the two single models ranked above us are not published yet).\n\n(c) Our model is fast: we achieve a speed-up of up to 13x and 9x in training and inference respectively on SQuAD.\n \nAs stated above, we are disappointed with the low scores that our paper has received. Concurrent to our submission, there are two other papers on SQuAD, FusionNet [1] and DCN+ [2], which only tested on SQuAD and obtained much lower F1 scores (83.9 and 83.0 respectively) compared to ours (84.6). Their papers, however, received average review scores of 6.33 and 7 respectively, which are much higher than our average review score of 5.33. As such, we encourage the reviewers to reconsider their scores.\n\n[1] https://openreview.net/forum?id=BJIgi_eCZ&noteId=BJIgi_eCZ\n[2] https://openreview.net/forum?id=H1meywxRW\n\nMore detailed comments:\nQ: Regarding “simplicity”.\nA: Thanks for raising this point. By simplicity, we mean we do not use hand-crafted features such as POS tagging ([3]), nor multiple reading passes ([4]). We have made this point clear in the revision and tried not to use “simple” to avoid confusion.\n\n[3] Reading Wikipedia to Answer Open-Domain Questions. In ACL 2017.\nDanqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. \n[4] Reasonet: Learning to stop reading in machine comprehension. In KDD 2017.\nYelong Shen, Po-Sen Huang, Jianfeng Gao, Weizhu Chen.\n\nQ: Leaderboard result.\nA: We submitted our best model for test set evaluation on SQuAD, on Dec 20, 2017. Our single model (named “FRC”) is ranked 3rd among all single models in terms of F1 with F1/EM=84.6/76.2 (https://rajpurkar.github.io/SQuAD-explorer/). The performance gain is because we added more regularization to the model. Note that the two single models ranked above us have NOT been published yet: “BiDAF + Self Attention + ELMo” & “AttentionReader+”. \n\nQ: Results on one more dataset.\nA: We have conducted experiments on another Q&A dataset, TriviaQA, to verify that the effectiveness and efficiency of our model is general. In a nutshell, again, our model is 4x to 16x faster than the RNN counterparts, while outperforming the state-of-the-art single-paragraph-reading model by more than 3.0 in both F1 and EM. Please see the revision. \n\nQ: Section 3 and more discussion on the data augmentation.\nA: We have revised the paper to give more details regarding our method and results with data augmentation. Here, we highlight a few major details that the reviewers asked about, as well as several new findings:\na) Performance of NMT systems:\nEnglish-German (newstest2015): 27.6 (to German) and 29.9 (to English)\nEnglish-French (newstest2014): 36.7 (to French) and 35.9 (to English)\nb) Note that in our Table, “x2” means the total amount of the final training data is twice as large as the original data, i.e. the added amount is the same as the original. We have clarified this as well in the revision.\nc) New finding: translating to more languages can lead to more diverse augmented data, which further results in better generalization. 
So far, we have tried both English-French and English-German.\nd) New finding: we have shown in the revised experiment section that different ratios (original : English-French-English : English-German-English) have different effects on the final performance. Empirically, when the ratio is 3:1:1, we get the best result. We interpret this phenomenon as follows: translation may introduce noise into the augmented data, so we should place more weight on the original clean data.\n\n", "We thank the reviewer for acknowledging our contributions! We address the comments below.\n\nQ: The reviewer asks “Why don’t you report your model performance without data augmentation in Table 1?” \nA: We thank the reviewer for the suggestion! We have added this result in the revision. In summary, without data augmentation, our model gets 82.7 F1 on the dev set, while with data augmentation, we get 83.8 F1 on dev. We only submitted the model with augmented data, and got 84.6 F1 on the test set, which outperforms most of the existing models and is the best among all the published results, as of Dec 20, 2017. \n\nQ: The reviewer asks “Can you mention your leaderboard test accuracy in the rebuttal?”\nA: We submitted our best model for test set evaluation on SQuAD, on Dec 20, 2017. Our single model (named “FRC”) is ranked 3rd among all single models in terms of F1 with F1/EM=84.6/76.2 (https://rajpurkar.github.io/SQuAD-explorer/). The performance gain comes from adding more regularization to the model. Note that the two single models ranked above us have NOT been published yet: “BiDAF + Self Attention + ELMo” & “AttentionReader+”. \n\nQ: Results on one more dataset.\nA: We have conducted experiments on another Q&A dataset, TriviaQA, to verify that the effectiveness and efficiency of our model are general. In a nutshell, again, our model is 4x to 16x faster than the RNN counterparts, while outperforming the state-of-the-art single-paragraph-reading model by more than 3.0 in both F1 and EM. Please see the revision. \n\nQ: The reviewer asks “Are you willing to release your code to reproduce the results?”\nA: Yes, we will release the code after the paper gets accepted.\n\nQ: Minor comments.\nA: Thank you. We addressed all of them in the latest revision. \n", "We thank the reviewers for their comments and feedback on our paper, which have helped us improve the paper. However, we are disappointed with the low scores that our paper has received. Concurrently with our submission, there are two other papers on SQuAD, FusionNet [1] and DCN+ [2], which only tested on SQuAD and obtained much lower F1 scores (83.9 and 83.0 respectively) compared to ours (84.6). Their papers, however, received average review scores of 6.33 and 7 respectively, which are much higher than our average review score of 5.33. As such, we encourage the reviewers to reconsider their scores.\n\n[1] https://openreview.net/forum?id=BJIgi_eCZ&noteId=BJIgi_eCZ\n[2] https://openreview.net/forum?id=H1meywxRW\n\nWe answer here some key questions by the reviewers:\n\n1. Novelty\nA major concern amongst the reviewers is the novelty of this paper, because it is similar to Vaswani et al. We stress here that our model is indeed novel: note that even though self-attention has already been used extensively in Vaswani et al., the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone, giving a 2.7 F1 gain in our experiments. Our good accuracy is coupled with very good speedup gains. 
The speedup gains of up to 13x per training iteration and 9x during inference on SQuAD are not small. This significant gain makes our model particularly promising for larger datasets.\n\n2. Test set result on SQuAD leaderboard\nWe submitted our best model for test set evaluation on SQuAD, on Dec 20, 2017. Our single model (named “FRC”) is ranked 3rd among all single models in terms of F1 with F1/EM=84.6/76.2 (https://rajpurkar.github.io/SQuAD-explorer/). The performance gain comes from adding more regularization to the model. Note that the two single models ranked above us have NOT been published yet: “BiDAF + Self Attention + ELMo” & “AttentionReader+”. \n\n3. Results on an additional benchmark (TriviaQA) \nWe have conducted experiments on another Q&A dataset, TriviaQA, to verify that the effectiveness and efficiency of our model are general. In a nutshell, again, our model is 4x to 16x faster than the RNN counterparts, while outperforming the state-of-the-art single-paragraph-reading model by more than 3.0 in both F1 and EM. Please see the revision. \n", "Authors, \n\nPlease post a rebuttal for this work. Discussion period ends Jan 5th. ", "Thanks for your interest and the questions! Here are the answers:\n\n1. The number of heads is 8, which is consistent throughout the layers. The attention key depth is 128, so the per-head depth is 128/8=16.\n\n2. It should be \"the kernel sizes are 7 and 5\". \n\nWe will clarify those in the revision. Thanks!", "Thank you for your work. It seems the paper lacks some of the implementation details and sometimes includes ambiguous statements.\n1. What is the number of heads used for the multi-head self-attention, and is the number consistent throughout the layers? And is the attention key depth per head also 128? I feel that the encoder layer detail is lacking.\n2. In Subsection 2.2, item 2 (Embedding Encoder Layer), the paper states that a kernel size of 7 is used for the embedding encoder. However, later on, Subsection 4.2 (Basic Setup) says \"the kernel sizes are 5 and 7\" respectively. Could you please clarify this?" ]
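The encoder hyperparameters quoted in the exchange above (8 attention heads with key depth 128, hence 16 dimensions per head, and a convolution kernel of size 7 in the embedding encoder) can be illustrated with a minimal sketch of one convolution + self-attention block. The class name, the depthwise-separable convolution, and the residual/layer-norm placement are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class ConvSelfAttnBlock(nn.Module):
    # A sketch of a convolution + self-attention encoder block using the
    # hyperparameters from the rebuttal (d_model=128, 8 heads, kernel 7);
    # the paper's exact layer ordering may differ.
    def __init__(self, d_model=128, kernel_size=7, n_heads=8):
        super().__init__()
        # Depthwise-separable convolution over the sequence dimension.
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, 1)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        h = self.norm1(x).transpose(1, 2)     # -> (batch, d_model, seq_len)
        x = x + self.pointwise(self.depthwise(h)).transpose(1, 2)
        h = self.norm2(x)
        return x + self.attn(h, h, h, need_weights=False)[0]

out = ConvSelfAttnBlock()(torch.randn(2, 16, 128))   # shape (2, 16, 128)
```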
[ -1, -1, -1, 8, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B14TlG-RW", "iclr_2018_B14TlG-RW", "BJgkoz9Xz", "iclr_2018_B14TlG-RW", "iclr_2018_B14TlG-RW", "iclr_2018_B14TlG-RW", "iclr_2018_B14TlG-RW", "Hy4sFiuQM", "rkOlY3WXf", "rycJHDIgf", "Hyqx3y5xz", "Hkx2Bz9lM", "iclr_2018_B14TlG-RW", "iclr_2018_B14TlG-RW", "HJg2Fk_yf", "iclr_2018_B14TlG-RW" ]
iclr_2018_Sy2ogebAW
Unsupervised Neural Machine Translation
In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need for parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, attaining 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively. Our implementation is released as an open-source project.
accepted-poster-papers
This work presents new results on unsupervised machine translation using a clever combination of techniques. In terms of originality, the reviewers find that the paper over-claims and promises a breakthrough, which they do not feel is justified. However, there is "more than enough new content" and "preliminary" results on a new task. The experimental quality also has some issues: there is a lack of good qualitative analysis, and reviewers felt the claims about the semi-supervised work had issues. Still, the main number is a good start, and the authors are correct to note that there is another work with similarly promising results. Of the two works, the reviewers found the other more clearly written and with better experimental analysis, noting that they both over-claim in terms of novelty. The most promising aspect of the work will likely be the significance of this task going forward, as there is now more interest in the use of multi-lingual embeddings and NMT as a benchmark task.
train
[ "BkW3sl8Nz", "B1pZoxIEf", "rJdm42ENG", "SkBWbhN4M", "S1jAR0Klf", "SyniKeceM", "S1BhMb5lG", "B1sSDPTXf", "SJHB8vaXM", "Sy-0ez6MG", "HkiSezafG", "ByJTJzaMM", "r1NHyGpGG", "BkH86b6fM", "B1SawIffz" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "While it is true that we do not analyze any specific linguistic phenomenon in depth, note that our experiments already show that the system is not working like a \"word-for-word gloss\" as speculated in the comment: the baseline system is precisely a word-for-word gloss, and the proposed method beats it with a considerable margin. Note that this baseline system is based on the exact same cross-lingual embeddings as the proposed system, meaning that our approach must have learned non-trivial translation relations that go beyond a word-for-word substitution to surpass it.\n\nIn addition to that, the examples in Table 2 also show that the proposed method is able to properly handle word reordering and multiword expressions. As discussed in the paper, there are also instances where the system does a poor job in these aspects, but these examples confirm that the proposed approach is able to go beyond a word-for-word gloss.", "To put things into perspective, we would like to remark that, as reflected by the title itself, the focus of our work is on unsupervised NMT. To the best of our knowledge, this is the first working approach to train an NMT system with nothing but monolingual corpora (concurrently with another submission at https://openreview.net/forum?id=rkYTTf-AZ), and this is in fact where our main contribution lies in.\n\nFor that reason, we disagree that \"what most readers will want to know is: how does this compare to a standard supervised NMT system [...] since that's the real choice that a practitioner would be faced with\". Needless to say, \"a standard supervised NMT system\" cannot work in an unsupervised scenario. As said before, this is the main focus of our work, and there is no such choice to be made in it, as standard machine translation is not an option when we only have access to monolingual corpora.\n\nBesides that, we think that our experiments already give a reasonably complete picture of how the proposed system compares to standard supervised approaches. We report the results of a comparable NMT system for different corpora sizes, as well as those of GNMT, which can be taken as a representative state-of-the-art machine translation system. Moreover, we run our experiments in a widely used dataset, making our results easily comparable to other work.\n\nFinally, we acknowledge that our work is not necessarily the best possible approach to unsupervised NMT. It is, however, the first working approach to unsupervised NMT, and this is where our main contribution lies in. As such, it is to be expected that significant advances will be made in the future, but we do not think that this makes our work preliminary. We think that our approach is well motivated, the experiments convincingly show its solid performance and, overall, our work sets a strong foundation for a new and exciting research direction in NMT.", "It's true that certain aspects of English-German translation are difficult, and indeed, phrase-based MT is notoriously bad at the verb-final construction, often failing to translate the verb altogether. But the paper doesn't analyse the performance of these systems on this specific phenomenon. If it did, it would be a much stronger analysis than what's currently in the paper. You can get reasonable BLEU scores in English-German with just a word-for-word gloss, so, in the absence of any analysis, I would hypothesize that the system does something like this. ", "Thanks for the clarifications. 
They raise some new questions for me.\n\nThe revision states (at the bottom of p.7) that the constraints (i.e. fixed cross-lingual embeddings) are necessary for learning. This seems like something that could only be discovered empirically, but the evidence is not in the paper. Do you have other experiments that show this? As discussed elsewhere in these reviews, the contribution of this paper is empirical, and IMO it would be *much* stronger if it included systematic experiments that empirically probe the importance of its architectural choices. \n\nOf course, if these changes cripple the comparable NMT system then that baseline is only part of the story. What most readers will want to know is: how does this compare to a standard supervised NMT system (and SMT system, as pointed out by the other reviewers), since that’s the real choice that a practitioner would be faced with. Do you have experiments on this?\n\nThat these answers aren't in the paper reinforces my feeling that the paper is preliminary. It would be much more convincing if it presented empirical evidence and toned down the rhetoric. Show, don't tell.", "The authors present a model for unsupervised NMT which requires no parallel corpora between the two languages of interest. While the results are interesting, I find very few original ideas in this paper. Please find my comments/questions/suggestions below:\n\n1) The authors mention that there are 3 important aspects in which their model differs from a standard NMT architecture. All 3 differences have been adapted from existing works. The authors clearly acknowledge and cite the sources. Even sharing the encoder using cross-lingual embeddings has been explored in the context of multilingual NER (please see https://arxiv.org/abs/1607.00198). Because of this, I find the paper to be a bit lacking in the novelty quotient. Even backtranslation has been used successfully in the past (as acknowledged by the authors). Unsupervised MT in itself is not a new idea (again clearly acknowledged by the authors).\n\n2) I am not very convinced about the idea of denoising. Specifically, I am not sure if it will work for arbitrary language pairs. In fact, I think there is a contradiction even in the way the authors write this. On one hand, they want to \"learn the internal structure of the languages involved\" and on the other hand they deliberately corrupt this structure by adding noise. This seems very counter-intuitive, and in fact the results in Table 1 suggest that it leads to a drop in performance. I am not very sure that the analogy with autoencoders holds in this case.\n\n3) Following up on the above question, the authors mention that \"We emphasize, however, that it is not possible to use backtranslation alone without denoising\". Again, if denoising itself leads to a drop in performance as compared to the nearest neighbor baseline, then why use backtranslation in conjunction with denoising and not in conjunction with the baseline itself? \n\n4) This point is more of a clarification and perhaps due to my lack of understanding. Backtranslation to generate a pseudo corpus makes sense only after the model has achieved a certain (good) performance. Can you please provide details of how long you trained the model (with denoising?) before producing the backtranslations?\n\n5) The authors mention that 100K parallel sentences may be insufficient for training an NMT system. However, this size may be decent enough for a PBSMT system. 
It would be interesting to see the performance of a PBSMT system trained on 100K parallel sentences. \n\n6) How did you arrive at the beam size of 12? Was this a hyperparameter? Just curious.\n\n7) The comparable NMT setup is not very clear. Can you please explain it in detail? In the same paragraph, what exactly do you mean by \"the supervised system in this paper is relatively small?\"", "unsupervised neural machine translation\n\nThis is an interesting paper on unsupervised MT. It trains a standard architecture using:\n\n1) word embeddings in a shared embedding space, learned using a recent approach that works with only tens of bilingual word pairs.\n\n2) An encoder-decoder trained using only monolingual data (should cite http://www.statmt.org/wmt17/pdf/WMT15.pdf). Training uses a “denoising” method which is not new: it uses the same idea as contrastive estimation (http://www.aclweb.org/anthology/P05-1044, a well-known method which should be cited). \n\n3) Backtranslation.\n\nAlthough none of these ideas are new, they haven’t been combined in this way before, and that’s what’s novel here. The paper is essentially a neat application of (1), and is an empirical/systems paper. It’s essentially a proof-of-concept that it’s possible to get anything at all using no parallel data. That’s surprising and interesting, but I learned very little else from it. The paper reads as preliminary and rushed, and I had difficulty answering some basic questions:\n\n* In Table (1), I’m slightly puzzled by why 5 is better than 6, and this may be because I’m confused about what 6 represents. It would be natural to compare 5 with a system trained on 100K parallel sentences, since the systems would then (effectively) differ only in that 5 also exploits additional monolingual data. But the text suggests that 6 is trained on much more than 100K parallel sentences; that is, it differs in at least two conditions (amount of parallel text and use of monolingual text). Since this paper’s primary contribution is empirical, this comparison should be done in a carefully controlled way, varying each of these elements in turn.\n\n* I’m very confused by the comment on p. 8 that “the modifications introduced by our proposal are also limiting” to the “comparable supervised NMT system”. According to the paper, the architecture of the system is unchanged, so why would this be the case? This comment makes it seem like something else has been changed in the baseline, which in turn makes it somewhat hard to accept the results here.\n\nComment:\n* The qualitative analysis is not really an analysis: it’s just a few cherry-picked examples and some vague observations. While it is useful to see that the system does indeed generate nontrivial content in these cases, this doesn’t give us further insight into what the system does well or poorly outside these examples. The BLEU scores suggest that it also produces many low-quality translations. What is different about these particular examples? (Aside: since the cross-lingual embedding method is trained on numerals, should we be concerned that the system fails at translating numerals?)\n\nQuestions:\n* Contrastive estimation considers other neighborhood functions (“random noise” in the parlance of this paper), and it’s natural to wonder what would happen if this paper also used these or other neighborhood functions. 
More importantly, I suspect the neighborhood functions are important: when translating between Indo-European languages as in these experiments, local swaps are reasonable; but in translating between two different language families (as would often be the case in the motivating low-resource scenario that the paper does not actually test), it seems likely that other neighborhood functions would be important, since structural differences would be much larger.\n\nPresentational comments (these don’t affect my evaluation, they’re mostly observations but they contribute to a general feeling that the paper is rushed and preliminary):\n\n* BPE does not “learn”, it’s entirely deterministic.\n\n* This paper is at best tangentially related to decipherment. Decipherment operates under two quite different assumptions: there is no training data for the source language ciphertext, only the ciphertext itself (which is often very small); and the replacement function is deterministic rather than probabilistic (and often monotonic). The Dou and Knight papers are interesting, but they’re an adaptation of ideas rather than decipherment per se. Since none of those ideas are used here, this feels like hand-waving.\n\n* Future work is vague: “we would like to detect and mitigate the specific causes…” “we also think that a better handling of rare words…” That’s great, but how will you do these things? Do you have specific reasons to think this, or ideas on how to approach them? Otherwise this is just hand-waving.", "This paper describes a first working approach for fully unsupervised neural machine translation. The core ideas behind this method are: (1) train in both directions (French to English and English to French) in tandem; (2) lock the embedding table to bilingual embeddings induced from monolingual data; (3) share the encoder between the two languages; and (4) alternate between denoising auto-encoder steps and back-translation steps. The key to making this work seems to be using a denoising auto-encoder where noise is introduced by permuting the source sentence, which prevents the encoder from learning a simple copy operation. The paper shows real progress over a simple word-to-word baseline for WMT 2014 English-French and English-German. Preliminary results in a semi-supervised setting are also provided.\n\nThis is solid work, presenting a reasonable first working system for unsupervised NMT, which had never been done before now. That alone is notable, and overall, I like the paper. The work shares some similarities with He et al.’s NIPS 2016 paper on “Dual learning for MT,” but has more than enough new content to address the issues that arise with the fully unsupervised scenario. The work is not perfect, though. I feel that the paper’s abstract over-claims to some extent. Also, the experimental section shows clearly that in getting the model to work at all, they have created a model with a very real ceiling on performance. However, to go from not working to working a little is a big, important first step. Also, I found the paper’s notation and prose to be admirably clear; the paper was very easy to follow.\n\nRegarding over-claiming, this is mostly an issue of stylistic preference, but this paper’s use of the term “breakthrough” in both the abstract and the conclusion grates a little. This is a solid first attempt at a new task, and it lays a strong foundation for others to build upon, but there is lots of room for improvement. 
I don’t think it warrants being called a breakthrough - lots of papers introduce new tasks and produce baseline solutions. I would generally advise to let the readers draw their own conclusions.\n\nRegarding the ceiling, the authors are very up-front about this in Table 1, but it bears repeating here: a fully supervised model constrained in the same way as this unsupervised model does not perform very well at all. In fact, it consistently fails to surpass the semi-supervised baseline (which I think deserved some further discussion in the paper). The poor performance of the fully supervised model demonstrates that there is a very real ceiling to this approach, and the paper would be stronger if the authors were able to show to what degree relaxing these constraints harms the unsupervised system and helps the supervised one.\n\nThe semi-supervised experiment in Sections 2.3 and 4 is a little dangerous. With BLEU scores failing to top 22 for English-French, there is a good chance that a simple phrase-based baseline on the same 100k sentence pairs with a large target language model will outperform this technique. Any low-resource scenario should include a Moses baseline for calibration, as NMT is notoriously weak with small amounts of parallel data.\n\nFinally, I think the phrasing in Section 5.1 needs to be softened, where it states, “... it is not possible to use backtranslation alone without denoising, as the initial translations would be meaningless sentences produced by a random NMT model, ...” This statement implies that the system producing the sentences for back-translation must be a neural MT system, which is not the case. For example, a related paper co-submitted to ICLR, called “Unsupervised machine translation using monolingual corpora only,” shows that one can prime back-translation with a simple word-to-word system similar to the word-to-word baseline in this paper’s Table 1.", "We have uploaded a new version of the paper with the following new results, which aim to address the concerns raised in the reviews in relation to the semi-supervised experiments:\n\n- We have tested the proposed method with only 10,000 parallel sentences, obtaining an improvement of 1-3 BLEU points over the unsupervised system. This reinforces that the proposed approach has potential interest beyond the strictly unsupervised scenario, showing that it can profit from a small parallel corpus that would be insufficient to train a conventional machine translation system.\n\n- We have added new results for the comparable NMT system using the same parallel data as the semi-supervised systems. This was suggested by AnonReviewer1, and allows for an easier comparison between the semi-supervised and the supervised systems.", "Thanks for the effort put in reproducing our experiments. We would like to clarify that we did not observe any of the stability issues mentioned in the report. As usual with any deep learning experiment, there may be some subtle details that may have made it difficult to reproduce our results exactly. Moreover, it looks like the team missed some important details that were already present in the paper (e.g. they use fixed embeddings in the decoder, which we do not). In any case, we plan to release the entire package of code and scripts to reproduce our experiments once the submission has been accepted.", "We would like to thank all reviewers for their detailed and insightful feedback. 
We have answered each specific point in the replies below, and uploaded a new version of the paper addressing them.", "Thanks for the insightful feedback. Please find our answers below:\n\n- Regarding over-claiming, it was not our intention to exaggerate our contribution, and we in fact share your view on this: we think that our work is a strong foundation for a new and exciting research direction in NMT, but we agree that it is only a first step and there is still a long way to go. We understand that “breakthrough” might not be the most appropriate term for this, and we have removed it from the revised version of the paper.\n\n- We find the discussion on the ceiling very interesting and relevant. We agree on the following key observations: 1) the comparable supervised system can be seen as a ceiling for the unsupervised system, and 2) the comparable supervised system gets relatively poor results. As such, one might conclude that our approach has a hard limit at this ceiling, as any eventual improvement in the proposed training method could at best close the gap with it. However, this also assumes that the ceiling itself is fixed and cannot be improved, which we do not find to be the case. In fact, we think that a very interesting research direction is to identify and address the factors that limited the performance of the comparable supervised system, which should also translate into an improvement for the unsupervised system. We have the following ideas in this regard, which we describe in more detail in the revised version of the paper:\n\n1) We did not perform any rigorous hyperparameter exploration, and we favored efficiency over performance in our experimental design. As such, we think that there is a considerable margin to improve our results with some engineering effort, such as using larger models, longer training times, ensembling techniques and better decoding strategies (length/coverage penalty).\n\n2) While the constraints that we introduce to make our unsupervised system trainable might also limit its performance, one could design a multi-phase training procedure where these constraints are progressively relaxed. For instance, a key aspect of our design is to use fixed cross-lingual embeddings in the encoder. This is necessary in the early stages of training, as it forces the encoder to use a common word representation for both languages, but it might also limit what it can ultimately learn in the process. For that reason, one could start to progressively update the weights of the encoder embeddings as training progresses. Similarly, one could also decouple the shared encoder into two independent encoders at some point during training, or progressively reduce the noise level.\n\n- Regarding the semi-supervised experiments, note that our point here was not to improve the state of the art under these conditions, but rather to prove that the proposed system can also exploit a (relatively) small parallel corpus, showing its potential interest beyond the strictly unsupervised scenario.\n\n- Our statement that “it is not possible to use backtranslation alone without denoising” was referring to our training procedure, where backtranslation uses the model from the previous iteration. It is true that it does not apply to the general case, as backtranslation could also be used in conjunction with other translation methods (e.g. 
embedding nearest neighbor), and we have consequently softened the statement in the revised version as suggested.", "Presentational comments:\n\n- We are aware that BPE is completely deterministic. However, it does require extracting some merge operations that are later applied. The original paper of Sennrich et al. (2016) refers to this process as \"learning\", so we decided to follow their wording.\n\n- The Dou and Knight papers also attempt to build machine translation systems using monolingual corpora, so we briefly discuss and acknowledge their work accordingly even if our approach is completely different. Regarding the choice of the term \"decipherment\" to refer to that work, we understand that this might not exactly match the common acceptation of the term, but it seems to be the one that the authors themselves use (e.g. \"Unifying bayesian inference and vector space models for improved decipherment\"). We have rewritten it as \"statistical decipherment for machine translation\", which we hope is more descriptive.\n\n- We have rewritten the future work in the revised version of the paper, trying to be more specific.", "Thanks for the insightful review. We have tried to make the paper clearer in the revised version, taking these comments into account. Please find the answers to each specific point below:\n\nGeneral:\n\n- To clarify what the semi-supervised and supervised systems represent in Table 1: (5) is the same as (4), but in addition to training on monolingual corpora using denoising and backtranslation, it is also trained on a subset of 100K parallel sentences using the standard supervised cross-entropy loss (it alternates one mini-batch of denoising, one mini-batch of backtranslation and one mini-batch of this supervised training). (6) is the same as (5) except for two differences: 1) it uses the full parallel corpus instead of the subset of 100K parallel sentences, and 2) it does not use any monolingual corpora nor denoising or backtranslation. We think that the main reason why (5) is better than (6) is related to the domain: the parallel corpus of (6) is general, whereas the subset of 100K parallel sentences and the monolingual corpus used for (5) are in the news domain, just as the test set. While these facts were already mentioned in the paper, the new version includes a more detailed discussion. At the same time, we agree that the fact that the comparable system differs from the semi-supervised system in two aspects makes the comparison more difficult, and we are currently working to extend our experiments accordingly.\n\n- Regarding the comment that “the modifications introduced by our proposal are also limiting” to the “comparable supervised NMT system”, note that, as discussed in the previous point, the comparable NMT system uses the exact same architecture and hyperparameters as the unsupervised system (number of layers, hidden units, attention model etc.) and, as such, it also incorporates the non-standard variations in Section 3.1 (dual structure using a shared encoder with fixed embeddings). These are what we were referring to as “the modifications introduced by our proposal”, but the only difference between the unsupervised and the supervised systems is that, instead of training on monolingual corpora using denoising and backtranslation, we train on parallel corpora just as in standard NMT. We have tried to make this clearer in the revised version of the paper.\n\nComment:\n\n- We agree that the qualitative analysis in the current version is limited. 
It was done mainly to check and illustrate that the proposed unsupervised NMT system generates sensible translations despite the lack of parallel corpora. We believe that a more detailed investigation and analysis of the properties and characteristics of the translations generated by unsupervised NMT must be conducted in the future.\n\n- Note that we only use shared numerals to initialize the iterative embedding mapping method, so it is understandable that the system fails to translate numerals after the training of both the embedding mapping and the unsupervised NMT system itself. While it would certainly be possible to perform ad-hoc processing to translate numerals under the assumption that they are shared by different languages, our point was to show that the system has some logical adequacy issues for very similar concepts (e.g. different numerals or month names).\n\nQuestions:\n\n- Thanks for pointing out the connection with contrastive estimation, which is now properly discussed in the revised version of the paper. As for the role of neighborhood functions, we agree that there are many possible choices beyond local swaps, and the optimal choice could greatly depend on the typological divergences between the languages involved. In this regard, we think that this is a very interesting direction to explore in the future, and we have tried to better discuss this matter in the revised version of the paper.\n\nHaving said that, please note that we have considered two language pairs (English-French and English-German) in our experiments. Despite being Indo-European, there are important properties that distinguish these language pairs, such as the verb-final construction and the prevalence of compounding in German in contrast to French. In fact, English-German has often been studied in machine translation as a particularly challenging language pair. For that reason, we believe that the experiments on these two distinct language pairs support the effectiveness of the proposed approach, although the effect of different noise functions across language pairs deserves future investigation.", "Thanks for the insightful comments. Please find the answers to the specific points below, which were also addressed in the revised version of the paper:\n\n1) We are aware that the basic building blocks of our work come from previous work and, as you note, we try to properly acknowledge that in the paper. However, we do not see this as a weakness, but rather as an inherent characteristic of science as a collaborative effort. Our contribution lies in combining these basic building blocks in a novel way to build the first fully unsupervised NMT system. We believe that this is an important contribution on its own: NMT is a highly relevant field where the predominant approach has been supervised and, for the first time, we show that an unsupervised approach is also viable. As such, we think that our work explores a highly original idea and opens a new and exciting research direction.\n\n2) This is a very interesting observation, but we think that, paradoxically, corrupting the structure of the language is necessary for the system to learn such structure. Note that, without denoising, this training step would be reduced to a trivial copying task that admits degenerate solutions. The intuition is that one can easily copy a sentence in any language even if they know nothing about that language. 
In contrast, adding noise to the input makes the task of reconstructing the input non-trivial, and forces the system to learn about the structure of that language to be able to solve it. The intuition in this case is that, if we are given a scrambled sentence in some language, it is not possible for us to recover the original sentence unless we have some knowledge of the language in question. In other words, the idea of denoising is to corrupt the structure of the input, so the system needs to learn the correct structure in order to recover the original uncorrupted input (this is possible because the system does see the correct structure in the output during training). Note that this has also been found to help extract good representations from natural language sentences by other authors (Hill et al., 2016). Regarding arbitrary language pairs, we think that this idea is particularly relevant for distant pairs: corrupting the word order of the input makes the shared encoder rely less on this word order, which is necessary for distant language pairs with more divergences in this regard.\n\n3/4) There seems to be some confusion here on how our training procedure works in relation to backtranslation. Note that each training iteration performs one mini-batch of denoising and one mini-batch of backtranslation in each direction, and the backtranslation step at iteration i uses the model from iteration (i-1); a schematic sketch of this alternating step is given below. This way, denoising and backtranslation keep constantly improving the model, and backtranslation itself uses the most recent model at each step. This is in contrast with traditional backtranslation, where a fixed model is used to backtranslate the entire corpus at one time. In relation to point 3, while denoising alone is certainly weaker than the nearest neighbor baseline, the combination of denoising and backtranslation eventually leads to a stronger model, which backtranslation itself takes advantage of in the following iterations, as just described.\n\n5) The purpose of this experiment was to show that the proposed system can also benefit from small parallel corpora, making it suitable not only for the unsupervised scenario, but also for the semi-supervised scenario. As such, our point was not to improve the state of the art under these conditions, but rather to show that our work has potential interest beyond the strictly unsupervised scenario.\n\n6) A beam size of 12 is very common in previous work (Sutskever et al., 2014; Sennrich et al., 2016a;b; He et al., 2016), so we also adopted it for our experiments without any further exploration.\n\n7) The comparable NMT system uses the exact same architecture and hyperparameters as the unsupervised system (number of layers, hidden units, attention model etc.). Furthermore, it incorporates the non-standard variations in Section 3.1 (dual structure using a shared encoder with fixed embeddings). The only difference is that, instead of training it on monolingual corpora using denoising and backtranslation, it is trained on parallel corpora just as in standard NMT.\n\nWhen we say that \"the supervised system in this paper is relatively small\", we mean that the size of the model (number of layers, training time etc.) is small compared to the state of the art, which explains in part why its results are also weaker. However, note that this also applies to the unsupervised system, which uses the exact same settings. 
We therefore believe that there is a considerable margin to improve our results by using a larger model.", "Our team attempted to reproduce the denoising and the denoising+backtranslation models on the French-English language pair. We document our findings in the pdf here: https://github.com/anthonywchen/Unsupervised-NMT-Reproducibility/blob/master/ICLR_Reproducibiltiy.pdf" ]
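The alternating procedure described in the responses above (one denoising mini-batch and one on-the-fly backtranslation mini-batch per direction, with pseudo-pairs produced by the previous iteration's model) can be summarized in a short sketch. The corrupt function is one simple instance of the local-swap noise, and model.loss / prev_model.translate are placeholder interfaces, not the released implementation:

```python
import random

def corrupt(tokens, max_shift=3):
    # Local word-order noise: each token may drift a few positions, so
    # reconstructing the clean sentence is non-trivial but solvable.
    keys = [i + random.uniform(0, max_shift) for i in range(len(tokens))]
    return [tok for _, tok in sorted(zip(keys, tokens))]

def train_step(model, prev_model, batch_l1, batch_l2):
    loss = 0.0
    # (1) Denoising: reconstruct each clean sentence from a corrupted copy,
    #     in both languages, through the shared encoder.
    for batch, lang in ((batch_l1, "L1"), (batch_l2, "L2")):
        loss += model.loss(src=[corrupt(s) for s in batch],
                           tgt=batch, tgt_lang=lang)
    # (2) Backtranslation: pseudo-sources come from the *previous* iteration's
    #     model, so the synthetic parallel data improves as training proceeds.
    loss += model.loss(src=prev_model.translate(batch_l2, to="L1"),
                       tgt=batch_l2, tgt_lang="L2")
    loss += model.loss(src=prev_model.translate(batch_l1, to="L2"),
                       tgt=batch_l1, tgt_lang="L1")
    return loss
```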
[ -1, -1, -1, -1, 6, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJdm42ENG", "SkBWbhN4M", "r1NHyGpGG", "r1NHyGpGG", "iclr_2018_Sy2ogebAW", "iclr_2018_Sy2ogebAW", "iclr_2018_Sy2ogebAW", "iclr_2018_Sy2ogebAW", "B1SawIffz", "iclr_2018_Sy2ogebAW", "S1BhMb5lG", "r1NHyGpGG", "SyniKeceM", "S1jAR0Klf", "iclr_2018_Sy2ogebAW" ]
iclr_2018_BkwHObbRZ
Learning One-hidden-layer Neural Networks with Landscape Design
We consider the problem of learning a one-hidden-layer neural network: we assume the input x is drawn from a Gaussian distribution and the label y=aσ(Bx)+ξ, where a is a nonnegative vector, B is a full-rank weight matrix, and ξ is a noise vector. We first give an analytic formula for the population risk of the standard squared loss and demonstrate that it implicitly attempts to decompose a sequence of low-rank tensors simultaneously. Inspired by the formula, we design a non-convex objective function G whose landscape is guaranteed to have the following properties: 1. All local minima of G are also global minima. 2. All global minima of G correspond to the ground truth parameters. 3. The value and gradient of G can be estimated using samples. With these properties, stochastic gradient descent on G provably converges to the global minimum and learns the ground-truth parameters. We also prove finite sample complexity results and validate the results by simulations.
accepted-poster-papers
I recommend acceptance based on the reviews. The paper makes novel contributions to learning one-hidden-layer neural networks and to designing a new objective function with no bad local optima. There is one point that the paper is missing. It only mentions Janzamin et al. in passing. Janzamin et al. propose using the score function framework for designing an alternative objective function. For the case of Gaussian input that this paper considers, the score function reduces to Hermite polynomials. The lack of discussion about this connection is odd. There should be proper acknowledgement of prior work. Also missing are some of the key papers on tensor decomposition and its analysis. I think there are enough contributions in the paper for acceptance irrespective of the above aspect.
test
[ "ry5GSRHNf", "rk0Ek5vgM", "SyfsN8tef", "SJjI7pKlz", "HkLFssYfM", "H1jVcsFMf", "BkFFKsKfz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for the review again!\n\nWe apologize that we didn't know that the paper was expected to be updated. We just added the results for sigmoid that answers the question \"does sigmoid suffer from the same problem?\" as we claimed in the response before. Please see page 9, figure 2 in the current version. \n\nWe will revise the paper with more intuitions/explanations as promised in the previous response as soon as possible. ", "[ =========================== REVISION ===============================================================]\nI am satisfied with the answers to my questions. The paper still needs some work on clarity, and authors defer the changes to the next version (but as I understood, they did no changes for this paper as of now), which is a bit frustrating. However I am fine accepting it.\n[ ============================== END OF REVISION =====================================================]\n\nThis paper concerns with addressing the issue of SGD not converging to the optimal parameters on one hidden layer network for a particular type of data and label (gaussian features, label generated using a particular function that should be learnable with neural net). Authors demonstrate empirically that this particular learning problem is hard for SGD with l2 loss (due to apparently bad local optima) and suggest two ways of addressing it, on top of the known way of dealing with this problem (which is overparameterization). First is to use a new activation function, the second is by designing a new objective function that has only global optima and which can be efficiently learnt with SGD\n\nOverall the paper is well written. The authors first introduce their suggested loss function and then go into details about what inspired its creation. I do find interesting the formulation of population risk in terms of tensor decomposition, this is insightful\n\nMy issues with the paper are as follows:\n- The loss function designed seems overly complicated. On top of that authors notice that to learn with this loss efficiently, much larger batches had to be used. I wonder how applicable this in practice - I frankly didn't see insights here that I can apply to other problems that don't fit into this particular narrowly defined framework\n- I do find it somewhat strange that no insight to the actual problem is provided (e.g. it is known empirically but there is no explanation of what actually happens and there is a idea that it is due to local optima), but authors are concerned with developing new loss function that has provable properties about global optima. Since it is all empirical, the first fix (activation function) seems sufficient to me and new loss is very far-fetched. \n- It seems that changing activation function from relu to their proposed one fixes the problem without their new loss, so i wonder whether it is a problem with relu itself and may be other activations funcs, like sigmoids will not suffer from the same problem\n- No comparison with overparameterization in experiments results is given, which makes me wonder why their method is better.\n\nMinor: fix margins in formula 2.7. \n\n\n", "This paper studies the problem of learning one-hidden layer neural networks and is a theory paper. A well-known problem is that without good initialization, it is not easy to learn the hidden parameters via gradient descent. This paper establishes an interesting connection between least squares population loss and Hermite polynomials. 
Following from this connection, the authors propose a new loss function. Interestingly, they are able to show that the loss function globally converges to the hidden weight matrix. Simulations confirm the findings.\n\nOverall, a pretty interesting result and a solid contribution. The paper also raises good questions for future work. For instance, is designing alternative loss functions useful in practice? In summary, I recommend acceptance. The paper seems rushed to me, so the authors should polish up the paper and fix typos.\n\nTwo questions:\n1) The authors do not require a^* to recover B^*. Is that because B^* is assumed to have unit-length rows? If so, they should clarify this; otherwise it confuses the reader a bit.\n2) What can be said about the rate of convergence in terms of network parameters? Currently a generic bound is employed, which is not very insightful in my opinion.\n\n", "This paper proposes a tensor factorization-type method for learning a one-hidden-layer neural network. The most interesting part is the Hermite polynomial expansion of the activation function. Such a decomposition allows them to convert the population risk function into a fourth-order orthogonal tensor factorization problem. They further design a new formulation for the tensor decomposition problem, and show that the new formulation enjoys the nice strict saddle properties as shown in Ge et al. 2015. Finally, they also establish the sample complexity for recovery.\n\nThe organization and presentation of the paper need some improvement. For example, the authors defer many technical details. To make the paper accessible to the readers, they could provide more intuitions in the first 9 pages.\n\nThere are also some typos: For example, the dimension of a is inconsistent. In the abstract, a is an m-dimensional vector, and on Page 2, a is a d-dimensional vector. On Page 8, P(B) should be a degree-4 polynomial of B.\n\nThe paper does not contain any experimental results on real data.", "Thanks for the review and comments.\n\nResponse to your questions:\n\n--- \"empirical fix seems sufficient\": We do consider the proposal of the empirical fix as part of the contribution of the paper. It's driven by the theoretical analysis of the squared loss (more intuition below), and it's novel as far as we know.\n\nThat said, we think it's also valuable to pursue a provably nice loss function without bad local minima. Note that there is no known method to empirically verify whether a function has no bad local minima, and such statements can only be established by proofs. Admittedly, our first-cut design of the loss function is complicated and sub-optimal in terms of sample complexity, but we do hope that our technique can inspire better loss function and model design in the future.\n\n--- \"does sigmoid suffer from the same problem?\": we did experiment with the sigmoid activation and found that the sigmoid activation function also has bad local minima. We will add this experiment to the next version of the paper. We conjecture that the activation h_2 + h_4 or the activation 1/2 |z| has no spurious local minima.\n\n--- \"actual insights about the landscape\": Our insight is that the squared loss objective function is trying to perform an infinite number of tensor decomposition problems simultaneously, and the mixing of all of these problems very likely creates bad local minima, as we empirically observed. 
The intuition behind the empirical fix is that removing some of these tensor decomposition problems would make the landscape simpler and nicer.\n\n--- \"comparison with over-parameterization\": over-parameterization is indeed a powerful way to remove the bad local minima, and it gives models with good predictive performance. But it doesn't recover the parameters of the true model, because the training parameter space is larger. Our method is guaranteed to recover the true parameters of the model, which in turn guarantees a ``complete\" generalization to any unseen examples, even including, e.g., adversarial examples or test examples drawn from another distribution. In this sense, our approaches (both the empirical fix and the theoretical one) return solutions with stronger guarantees than over-parameterization can provide.\n\nOf course, it's also a very important open problem to understand better other alternatives to landscape design such as over-parametrization.\n", "Thanks for the comments.\nRegarding the questions:\n\n1) Yes, since we mostly focus on the ReLU activation, we assume that the rows of B^* have unit norms. For the ReLU activation, the row norms are inherently unidentifiable.\n\n2) The technical version of the landscape analysis in Theorem B.1 specifies the precise dependencies of the landscape properties on the dimension, etc. To get a convergence rate, one can combine our Theorem B.1 with an analysis of gradient descent or other algorithms on non-convex functions. The best-known analysis for SGD in Ge et al. 2015 does not specify the precise polynomial dependencies. Since developing stochastic algorithms (beyond SGD) with lower iteration complexity is an active area of research, the best-known convergence rate is constantly changing.\n", "Thanks for the comments. We will add more intuitions in the paper and fix the typos.\n\nThe high-level intuition is that the squared loss objective function is trying to perform an infinite number of tensor decomposition problems simultaneously, and the mixing of all of these problems very likely creates bad local minima, as we empirically observed. Thus we design an objective function that selects only two of these tensor decomposition problems, which empirically removes the bad local minima. Finally, we design an objective function that more closely resembles the 4th-order tensor decomposition objective of Ge et al. (2015), whose landscape is known to be benign." ]
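The Hermite expansion discussed above can be made concrete numerically: for z ~ N(0, 1) and coefficients sigma_hat_k = E[sigma(z) He_k(z)] / k!, the nonzero coefficients indicate which tensor decomposition problems the squared loss mixes together. A minimal sketch, assuming probabilists' Hermite polynomials and this particular normalization (the paper's h_k basis may be scaled differently):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

def hermite_coeffs(sigma, max_k=4, n_quad=80):
    # sigma_hat_k = E[sigma(z) He_k(z)] / k! for z ~ N(0, 1), estimated with
    # Gauss-Hermite quadrature (weight exp(-z^2/2), total mass sqrt(2*pi)).
    z, w = hermegauss(n_quad)
    w = w / np.sqrt(2 * np.pi)   # renormalize to the Gaussian density
    vals = sigma(z)
    return [float(np.sum(w * vals * HermiteE([0] * k + [1])(z)) / math.factorial(k))
            for k in range(max_k + 1)]

relu = lambda z: np.maximum(z, 0.0)
print(hermite_coeffs(relu))
# approx [0.399, 0.5, 0.199, 0.0, -0.017]: ReLU has many nonzero coefficients,
# i.e. its squared loss mixes many tensor decomposition problems at once.
```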
[ -1, 6, 9, 7, -1, -1, -1 ]
[ -1, 3, 3, 3, -1, -1, -1 ]
[ "rk0Ek5vgM", "iclr_2018_BkwHObbRZ", "iclr_2018_BkwHObbRZ", "iclr_2018_BkwHObbRZ", "rk0Ek5vgM", "SyfsN8tef", "SJjI7pKlz" ]
iclr_2018_SysEexbRb
Critical Points of Linear Neural Networks: Analytical Forms and Landscape Properties
Due to the success of deep learning in solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance for determining the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms of the critical points (as well as global minimizers) of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve the global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties of the loss functions of linear neural networks and shallow ReLU networks. One particular conclusion is that, while the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with the ReLU activation function does have local minima that are not global minima.
accepted-poster-papers
I recommend acceptance based on the positive reviews. The paper analyzes critical points for linear neural networks and shallow ReLU networks. Getting a characterization of critical points for shallow ReLU networks is a great first step.
train
[ "S1BRtK8EM", "Hyyw_tH4z", "S1aEzCJxG", "ryOWEcdlM", "SJ6btV9gz", "BydBmeLQf", "H1oDUDefG", "rJye8vezz", "SkBt4Dgff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I am satisfied with the authors response and maintain my rating and acceptance recommendation.", "Thanks for the clarification. Most of my concerns are addressed. An anonymous reviewer raised a concern about the overlap with existing work, Li et al. 2016b. The authors' comments about this related work sound ok to me. But I would suggest the authors add more discussion about it. Overall the paper is above the acceptance threshold in my opinion and I keep my rating.", "Authors of this paper provided full characterization of the analytical forms of the critical points for the square loss function of three types of neural networks: shallow linear networks, deep linear networks and shallow ReLU nonlinear networks. The analytical forms of the critical points have direct implications on the values of the corresponding loss functions, achievement of global minimum, and various landscape properties around these critical points.\n\nThe paper is well organized and well written. Authors exploited the analytical forms of the critical points to provide a new proof for characterizing the landscape around the critical points. This technique generalizes existing work under full relaxation of assumptions. In the linear network with one hidden layer, it generalizes the work Baldi & Hornik (1989) with arbitrary network parameter dimensions and any data matrices; In the deep linear networks, it generalizes the result in Kawaguchi (2016) under no assumptions on the network parameters and data matrices. Moreover, it also provides new characterization for shallow ReLU nonlinear networks, which is not discussed in previous work.\n\nThe results obtained from the analytical forms of the critical points are interesting, but one problem is that how to obtain the proper solution of equation (3)? In the Example 1, authors gave a concrete example to demonstrate both local minimum and local maximum do exist in the shallow ReLU nonlinear networks by properly choosing these matrices satisfying (12). It will be interesting to see how to choose these matrices for all the studied networks with some concrete examples.", "This paper studies the critical points of shallow and deep linear networks. The authors give a (necessary and sufficient) characterization of the form of critical points and use this to derive necessary and sufficient conditions for which critical points are global optima. Essentially this paper revisits a classic paper by Baldi and Hornik (1989) and relaxes a few requires assumptions on the matrices. I have not checked the proofs in detail but the general strategy seems sound. While the exposition of the paper can be improved in my view this is a neat and concise result and merits publication in ICLR. The authors also study the analytic form of critical points of a single-hidden layer ReLU network. However, given the form of the necessary and sufficient conditions the usefulness of of these results is less clear.\n\n\nDetailed comments:\n\n- I think in the title/abstract/intro the use of Neural nets is somewhat misleading as neural nets are typically nonlinear. This paper is mostly about linear networks. While a result has been stated for single-hidden ReLU networks. In my view this particular result is an immediate corollary of the result for linear networks. As I explain further below given the combinatorial form of the result, the usefulness of this particular extension to ReLU network is not very clear. 
I would suggest rewording the title/abstract/intro.\n\n- Theorem 1 is neat, well done!\n\n- Page 4, the p_i’s in Proposition 1:\nFrom my understanding, the p_i have been introduced in Theorem 1, but given their prominent role in this proposition they merit a separate definition (and ideally in terms of the A_i directly). \n\n- Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5\n\tAre these characterizations computable, i.e., given X and Y, can one run an algorithm to find all the critical points, or at least the parameters used in the characterization (p_i, V_i, etc.)?\n\n- Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5\n\tI would recommend a better exposition of why these theorems are useful. What insights do you gain by knowing these theorems? Are there simpler sufficient conditions that are more intuitive or useful? (An insightful sufficient condition is in some cases much more valuable than an unintuitive necessary and sufficient one.)\n\n- Page 5, Theorem 2\n\tDoes this theorem have any computational implications? Does it imply that the global optima can be found efficiently, e.g., are saddles strict with a quantifiable bound?\n\n- Page 7, Proposition 6 seems like an immediate consequence of Theorem 1; however, given the combinatorial nature of the K_{I,J}, it is not clear why this theorem is useful. E.g., back to my earlier comment w.r.t. linear networks: given Y and X, can you find the parameters of this characterization with a computationally efficient algorithm? \n", "This paper mainly focuses on the square loss function of linear networks. It provides a sufficient and necessary characterization of the forms of critical points of one-hidden-layer linear networks. Based on this characterization, the authors are able to discuss different types of non-global-optimal critical points and show that every local minimum is a global minimum for one-hidden-layer linear networks. As an extension, the manuscript also characterizes the analytical forms of the critical points of deep linear networks and deep ReLU networks, although only a subset of non-global-optimal critical points is discussed. In general, this manuscript is well written. \n\nPros:\n1. This manuscript provides a sufficient and necessary characterization of critical points for deep networks. \n2. Compared to previous work, the current analysis for one-hidden-layer linear networks doesn’t require assumptions on parameter dimensions and data matrices. The novel analyses, especially the technique to characterize critical points and the proof of item 2 in Proposition 3, will probably be interesting to the community.\n3. It provides an example in which a local minimum is not global for a one-hidden-layer neural network with ReLU activation.\n\nCons:\n1. I'm concerned that the contribution of this manuscript is a little incremental. The equivalence of global minima and local minima for linear networks is not surprising given existing works, e.g., Hardt & Ma (2017) and Kawaguchi (2016). \n2. Unlike for one-hidden-layer linear networks, the characterizations of critical points for deep linear networks and deep ReLU networks seem hard to interpret. This manuscript doesn't show that every local minimum of these two types of deep networks is a global minimum, which actually has been shown by existing works like Kawaguchi (2016) under some assumptions. The behaviors of linear networks and practical (deep and nonlinear) networks are very different. 
Under such circumstances, the results about one-hidden-layer linear networks are less interesting to the deep learning community.\n\nMinor points:\nThere are some mixed-up notations: tilde{A_i} => A_i, and rank(A_2) => rank(A)_2 in Proposition 3.", "Based on the reviewers' comments, we uploaded a revision that made the following changes. We are happy to make further changes if the reviewers have additional comments.\n\n1. We fixed the mixed-up notations in Prop. 3. Note that in item 3 of Prop. 3, we only perturb A_2 to tilde{A_2}.\n\n2. In the title, abstract and introduction, we reworded neural networks as linear neural networks whenever applicable.\n\n3. We added Remark 1 above Prop. 1 to separately define the parameters p_i.\n\n4. In the paragraph before Remark 1, we commented that the critical points characterized in Theorem 1 cannot be fully listed out because they are in general uncountable. We also explained how to use the form in Theorem 1 to obtain some critical points. We also note that the analytical structure of the critical points is important, as it determines the landscape properties of the loss function. This comment is also applicable to the case of deep linear networks and shallow ReLU networks. \n\n5. Towards the end of the paragraph after Prop. 2, we added further insight into Prop. 2 in a special case, i.e., both A_2 and A_1 are full rank at global minima under the assumptions on data matrices and network dimensions in Baldi & Hornik (1989). In the paragraph after Prop. 5, we added a similar observation, i.e., if all the parameter matrices are square and the data matrices satisfy the assumptions as in Baldi & Hornik (1989), then all global minima must correspond to full-rank parameter matrices. \n\n6. After Theorem 2, we commented that the saddle points can be non-strict for arbitrary data matrices X and Y, with an illustrative example. \n\n7. We added another related work, Li et al. 2016b.", "Q1: How to obtain a proper solution of eq (3)?\n\nA: We note that the matrix L_1 can be chosen arbitrarily. Thus, choosing L_1 = 0 always satisfies eq (3), and we obtain the critical points A_1 = C^-1 V^t U^t YX^+, A_2 = UVC with any invertible matrix C and any matrix V with the structure specified in Theorem 1. To further obtain solutions of eq (3) with nonzero L_1, one can fix a proper V and solve the linear equation on C in eq (3). If a solution exists, we then obtain the form of a corresponding critical point.\n\nQ2: In Example 1, the authors give a concrete example demonstrating that both local minima and local maxima exist in shallow ReLU nonlinear networks by properly choosing matrices satisfying (12). How to choose these matrices for all the studied networks, with some concrete examples?\n\nA: For the other studied networks (shallow linear and deep linear networks), examples can be constructed based on the corresponding characterizations. For shallow linear networks, as in our response to Q1, we can set L_1 = 0 (so that eq (3) is satisfied), and then A_1 = C^-1 V^t U^t YX^+, A_2 = UVC with any invertible matrix C and any matrix V with the structure specified in Theorem 1 are critical points. Furthermore, if we also set the parameters p_i according to Prop 2, we obtain examples of global minima. For deep linear networks, it is also easy to construct examples by setting L_k = 0 for all k so that eq (6) is satisfied, and we can then obtain critical points for any invertible C_k and proper V_k with the structure specified in Theorem 3. 
Furthermore, if we also set the parameters p_i(0) according to Prop 5, we obtain examples of global minima. We note that all local minima are also global minima for these linear networks.", "Q1: I think in the title/abstract/intro the use of neural nets is somewhat misleading as neural nets are typically nonlinear. This paper is mostly about linear networks. I would suggest rewording the title/abstract/intro.\n\nA: We agree, and we will reword neural networks as linear neural networks.\n\nQ2: From my understanding, the p_i have been introduced in Theorem 1 but given their prominent role in this proposition they merit a separate definition.\n\nA: We will separately define the p_i, and further clarify their impact on the forms of the A_i. The p_i's in Theorem 1 can be any positive integers smaller than the corresponding m_i (i.e., the multiplicity of the singular value sigma_i), and the sum of the p_i's is equal to the rank of A_2.\n\nQ3: Given X and Y, can one run an algorithm to find all the critical points or at least the parameters used in the characterization (p_i, V_i, etc.)?\n\nA: We first note that in general, the set of critical points is uncountable and cannot be fully listed out. Hence, the characterization of the analytical forms of the critical points is more important in terms of its analytical structure, which has direct implications for the global optimality conditions and can be exploited to prove the landscape properties. \n\nOn the other hand, these forms do suggest ways to obtain some critical points. For shallow linear networks, if we choose L_1 = 0 (i.e., eq (3) is satisfied), we directly obtain the form of critical points A_1 = C^-1 V^t U^t YX^+, A_2 = UVC, where C is any invertible matrix and V is any matrix with the structures specified in Theorem 1. For nonzero L_1, one can fix a proper V and solve the linear equation on C in eq (3). If a solution exists, we then obtain the form of a corresponding critical point. For shallow ReLU networks, one can find the solution of the parameters from eq (12) following the same procedure as above, and one needs to further verify the existence conditions in eqs (13, 14). For deep linear networks, in the case where L_k = 0 for all k (i.e., eq (6) is satisfied), we can obtain the form of critical points for any invertible C_k and proper V_k. For nonzero L_k, eq (6) needs to be verified for given C_k and V_k to determine a critical point. \n\nQ4: What insights do you gain by knowing Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5?\n\nA: The analytical forms in Theorems 1 and 3 help to characterize the global optima in Props 2 and 5. They also help to identify the descent and ascent directions at critical points in Prop 3 and Theorem 4, establishing the landscape properties around them. Such properties then provide an alternative approach to show the equivalence between local minima and global minima.\n\nFor further insight, Prop. 2 case 1 implies that the parameter matrix A_2 must be full rank at any global minimum. In particular, A_1 is also full rank at global minima in this case under the assumptions on data matrices and network dimensions in Baldi & Hornik (1989). Similar conclusions hold for deep linear networks, e.g., if all the parameter matrices are square and the data matrices satisfy the assumptions in Baldi & Hornik (1989), then all global minima must correspond to full-rank parameter matrices. \n\nQ5: Does Theorem 2 have any computational implications, e.g. 
are saddles strict with a quantifiable bound?\n\nA: Theorem 2 does not directly imply the strictness of saddle points. In fact, the saddle points in Theorem 2 can be non-strict for arbitrary data X and Y (the case we consider). As an example, consider the loss of the linear network L(a_2, a_1) = (y-a_2 a_1 x)^2, where a_1, a_2, x and y are all scalars. Consider the case with y=0; then L(a_2, a_1) = (a_2 a_1 x)^2. One can check that the Hessian at the saddle point a_1 = 0, a_2 = 1 is [2x^2, 0; 0, 0], which has no negative eigenvalues. Thus, non-strict saddles can exist when the data are arbitrary.\n\nQ6: Why is Proposition 6 useful? Can you find the parameters of this characterization with a computationally efficient algorithm? \n\nA: Prop 6 is useful mainly in terms of the structure of the forms it characterizes for the critical points. For example, such forms in Prop 6 (and its special case, Prop 7) are exploited to construct a spurious local minimum in Example 1. Computationally, as pointed out in our response to Q3, we can compute/verify the parameters for various cases, but we cannot fully list all critical points, which are uncountable.", "Q1: The contribution of this manuscript is a little incremental. The equivalence of global minima and local minima for linear networks is not surprising, e.g., Hardt & Ma (2017) and Kawaguchi (2016). \n\nA: We agree that the equivalence between global minima and local minima for linear networks has been established in existing works. This work was in fact highly inspired by these previous results. However, the focus of this paper is different. The main results lie in providing the analytical forms of critical points for linear networks and ReLU networks, which further provide analytical forms of global optima (Props. 2 and 5) for linear networks and show the existence of spurious local minima (Example 1) for ReLU networks. This type of result was not present in Hardt & Ma (2017) and Kawaguchi (2016). We then further exploit such analytical forms of critical points to provide alternative arguments for the equivalence between local minima and global minima for linear networks, which was originally established in Kawaguchi (2016) by exploiting the necessary conditions of local minima. \n\nQ2: The characterizations of critical points for deep linear and ReLU networks seem hard to interpret. \n\nA: We agree that the forms of critical points for deep linear and ReLU networks are complex. But they can still be useful in various cases. For example, the characterization of critical points for deep linear networks in Theorem 3 further helps to characterize the global minima in Prop 5, and the characterization of critical points for ReLU networks in Prop 7 further helps to show the existence of spurious local minima in Example 1. \n\nQ3: This manuscript doesn't show that every local minimum of these two types of deep networks (i.e., deep linear and ReLU networks) is a global minimum, which actually has been shown by Kawaguchi (2016) with some assumptions. \n\nA: Indeed, under some assumptions, Kawaguchi (2016) established the equivalence between local minima and global minima for both deep linear and ReLU networks. However, Kawaguchi (2016) assumed that each ReLU is activated according to a Bernoulli distribution, and studied the expected loss over the randomness of the activations. In this setting, the loss function of ReLU networks reduces to that of linear networks. 
In comparison, we neither assume nor average over the randomness of the activations in our loss for ReLU networks. In fact, we showed that spurious local minima do exist for such a loss of ReLU networks (Example 1).\n\nQ4: The behaviors of linear networks and practical deep and nonlinear networks are very different. The results about one-hidden-layer linear networks are less interesting to the deep learning community.\n\nA: We agree that it is challenging to understand deep and nonlinear networks, and their behaviors can be very different from those of shallow linear networks. Ultimately, we agree that tools for studying shallow linear networks won’t be sufficient. However, understanding shallow linear networks can still be beneficial in various cases. For example, our characterizations of deep linear and shallow ReLU networks are further developments of the characterizations of shallow linear networks. Such an understanding allows us to show the existence of spurious local minima for ReLU networks (Example 1), which is a behavior different from that of linear networks. \n\nWe also thank the reviewer for pointing out the mixed-up notations. We will fix these notations." ]
[ -1, -1, 7, 7, 6, -1, -1, -1, -1 ]
[ -1, -1, 3, 5, 4, -1, -1, -1, -1 ]
[ "rJye8vezz", "SkBt4Dgff", "iclr_2018_SysEexbRb", "iclr_2018_SysEexbRb", "iclr_2018_SysEexbRb", "iclr_2018_SysEexbRb", "S1aEzCJxG", "ryOWEcdlM", "SJ6btV9gz" ]
iclr_2018_rJm7VfZA-
Learning Parametric Closed-Loop Policies for Markov Potential Games
Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards. Previous analysis followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist of predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions. We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP). This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms. We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game. We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximate an exact variational NE of the game.
accepted-poster-papers
The paper considers Markov potential games (MPGs), where the agents share some common resource. They consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards, which is novel. The reviews are all positive and point out the novel contributions in the paper.
train
[ "BJLGKD8Mz", "BkVvEP5gM", "BJZ6A-clG", "H1iE5-jQG", "ry7CMFMmz", "S1UU_tGQz", "B1JnBtzXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "While it is not very surprising that in a potential game it is easy to find Nash equilibria (compare to normal form static games, in which local maxima of the potential are pure Nash equilibria), the idea of approaching these stochastic games from this direction is novel and potentially (no pun intended) fruitful. The paper is well written, the motivation is clear, and some of the ideas are non-trivial. However, the connection to learning representations is a little tenuous. ", "Summary:\nThis paper studies multi-agent sequential decision making problems that belong to the class of games called Markov Potential Games (MPG). It considers finding the optimal policy within a parametric space of policies, which can be represented by a function approximator such as a DNN.\nA main contribution of this work is that it shows that for MPG, instead of solving a multi-objective optimization problem (Eq. 8), which is difficult, it is sufficient to solve a scalar-valued optimization problem (Eq. 16). Theorem 1 shows that under certain conditions on the reward function, the game is MPG. It also shows how one might find the potential function J, which is used in the single objective optimization problem.\nFinding J can be computationally expensive in general. So the paper provides some properties that lead to finding J easier. For example, obtaining J is easy if we have a cooperative game (Corollary 1) or the reward can be decomposed/decoupled in a certain way (Theorem 2).\n\n\nEvaluation:\n\nThis is a well-written paper that studies an important problem, but I don’t think ICLR is the right venue for it. There is not much about (representation) learning in this work. The use of TRPO as an RL algorithm in the Experiment does not play a critical role in this work either. Aside this general comment, I have several other more specific comments.\n\n\n- There is a significant literature on the use of RL for multi-agent systems. The paper does not do a good job comparing and positioning with respect to them. For example, refer to the following recent paper and references therein:\n\nPerolat, Strub, et al., “Learning Nash Equilibrium for General-Sum Markov Games from Batch Data,” AISTATS, 2017.\n\n\n- If I understand correctly, the policies are considered to be functions from the state of the system to a continuous action. So it is a function, and not a probability distribution. This means that the space of considered policies correspond to the space of pure strategies. We know that for some games, the Nash equilibrium is a mixed strategy. Isn’t this a big limitation of this approach?\n\n\n- I am unclear how this approach can handle stochastic dynamics. For example, the optimization (P1) depends on the realization of (theta_i)_i. But this is not available. The dependence is not only in the objective, but also in the constraints, which makes things more difficult.\n\nI understand that in the experiments the authors used two models (either the average of random realization, or solving a different optimization for each realization), but none of them is an appropriate solution for a stochastic system.\n\n\n- How large is the MPG class? Is there any structural result that positions them compared to other Markov Games? 
For example, is the class of zero-sum games an example of an MPG?\n\n\n- There is a comment close to the end of Section 5 that when there is no prior knowledge of the dynamics and the reward, one can use the proposed approach to learn a PCL-NE by using any DRL algorithm.\nThis is questionable because if the reward is not known, the conditions of Theorems 1 or 2 cannot be verified, so it is not possible to use (P1) instead of (G2).\n\n\n- What comments can you make about the computational complexity? It seems that depending on the dynamics, the optimization problem P1 can be non-convex, hence computationally difficult to solve.\n\n\n- How is the work related to the following paper?\nMacua, Zazo, Zazo, “Learning in Constrained Stochastic Dynamic Potential Games,” ICASSP, 2016\n\n======\nI updated the score based on the authors' rebuttal.\n", "This manuscript considers a subclass of stochastic games named Markov potential games. It provides some assumptions that guarantee that a game is a Markov potential game, which leads to some nice properties that allow the problem to be solved approximately to a Nash equilibrium. It is claimed that the work extends the state of the art by analysing the closed-loop version in a different manner, firstly constraining policies to a parametric family and then deriving conditions for that, instead of the other way around. As someone with no knowledge of the topic, I find the paper interesting to read, but I have not followed any proofs. The experimental setup is quite limited, even though I believe that the intention of the authors is to provide some theoretical ideas rather than applying them. Minor point: there are a few sentences with small errors; this could be improved.", "We have addressed the comments from the reviewers. In addition, we have strengthened the form of Theorem 2.", "We believe that ICLR is a proper venue. Our key contribution is to show that closed-loop NE (CL-NE) can be approximated with parametric policies. However, the applicability of this result is limited by the accuracy of the approximation. Approximations that depend on hand-coded features usually require domain knowledge and have to be re-designed for every game, while learned features that can express complex policies can alleviate these problems. Thus, we see this work as a relevant application of learned representations to multiagent systems that extends previous works, which only studied cooperative games, or assumed discrete state-action sets with no coupled constraints.\n\nAlthough the focus of our literature review is potential games, we know of no previous method for approximating CL-NE for any class of Markov games with continuous variables and coupled constraints. There are open-loop (OL) analyses of some games with continuous variables, like monotone games. Also, (Perolat et al. 2017) studied state-dependent policies but assumed finite state-action sets, which are less common in engineering.\n\nConsidering deterministic policies is not a limitation of our setting for two reasons: 1) Prop. 1 shows that under mild conditions, there exists a deterministic policy that achieves the optimal value of P1 and that is also an NE of G2. 2) We do not claim that our method will find all possible NE of G2, but just the one that is also a solution to P1. There may be many (possibly mixed-strategy) solutions to G2, but we propose a method to find one of them.\n\nThe reviewer has concerns about handling stochastic dynamics. We remark that the notation for the objective and dynamics is standard in the literature. 
On the other hand, we agree that we should clarify that the optimal value of the OCP is the one that maximizes the expected return, for which the constraints are satisfied almost surely.\n\nRegarding the models used in the experiment, we remark that these two models are only for estimating the benchmark solution. The proposed DRL solution tackles the problem without taking into account any of these models. However, we are happy to change the way of computing the benchmark solution, and any further feedback in this direction will be much appreciated.\n\nThe reviewer asks how large the MPG class is, and whether zero-sum games are an example of MPGs. MPGs appear often in engineering and economics applications, where multiple agents have to share some resource. We have studied MPGs with the \"exact potentiality\" condition, which includes cooperative and congestion games. There is a larger family of games that satisfy the \"weighted potentiality\" condition, where an agent’s change in reward due to its unilateral strategy deviation is equal to the change in the potential function but scaled by a positive weight. It is easy to show that weighted potential games (WPGs) and exact potential games can be made equivalent by scaling the reward functions [1, Lemma 2.1]. Thus, results equivalent to those presented here should be equally available for WPGs. A zero-sum game is a WPG with weights 1 and -1, but we believe our KKT approach still holds in this case.\n\nThe reviewer argues that it is not possible to learn a PCL-NE with no prior knowledge of the environment, since Theorems 1 or 2 cannot be verified. We have to distinguish the designer from the DRL agents. Our claim is that we can use the proposed approach to find a PCL-NE by using any DRL agent that has no prior knowledge of the dynamics and/or the reward, given that the game is an MPG. We do not claim that the agents are able to validate the theorems. This situation is similar to previous works that assumed knowledge that the game is cooperative, or to most of the single-agent reinforcement learning literature, which assumes that the environment is an MDP without requiring the agents to verify it.\n\nThe reviewer suggests that since the rewards are nonconvex, the computational complexity of P1 can be high. We disagree in part. Under Assumptions 1-4, having a discount factor smaller than one makes the Bellman operator monotone, independently of the convexity of the rewards. On the other hand, training a DRL algorithm implies finding local optima of nonconvex problems; but we remark that this is independent of the convexity of the agents' rewards.\n\nThere are a number of notable differences from (Macua, Zazo, Zazo, 2016). The main one is that although that work had the intuition that MPGs could be solved with RL methods, it only included an OL analysis; actually, it only extended previous OL analyses to the stochastic case. That is the reason why it didn't consider state-dependent policies and why its Corollary 1 missed the disjoint state condition. Since such OL analysis is not satisfactory for stochastic dynamics, the current paper bridges this gap. We believe that this is an important piece in the potential games literature.\n\n[1] Lã et al. Potential game theory: applications in radio resource allocation. Springer, 2016", "We appreciate the feedback from the reviewer. 
We just wish to emphasize the importance of providing an analysis and an effective method for finding closed-loop (CL) solutions for a relevant class of games that appear often in engineering and economics, and that includes cooperative and congestion games. Up to the best of our knowledge, this is the first time that this kind of solution is rigorously provided for any class of Markov games with continuous variables and/or coupled constraints, which appear often in engineering applications.\n\nMoreover, we remark that since our solution relies on parametric policies, being able to learn features is key for the applicability of the method. In summary, we believe this paper provides a useful application of representation learning for multiagent systems, extending previous approaches that only considered cooperative games or assumed finite state-action sets.\n\nWe acknowledge that the experimental setup is limited. But as the reviewer suggests, our intention with the example in Appendixes A-B and with the numerical experiment in Sec. 5 is to illustrate how to apply the proposed framework to economic and engineering problems.", "We also expected that finding closed-loop Nash equilibria in MPGs should be doable. However, we remark that the closed-loop analysis is much more slippery than the open-loop analysis, since the agents have to take into account not only all possible trajectories over the state-action space (as in the open-loop case), but also all possible deviations from those trajectories at every step. The situation is even more involved since we consider coupled constraints (i.e., we are considering the stochastic infinite-horizon extension of a relevant class of generalized Nash equilibrium problems like those studied in [1]). Up to the best of our knowledge, this is the first work that provides a rigorous analysis and an effective method for learning approximate closed-loop Nash equilibria in continuous MPGs (actually, in any class of games with continuous state-action variables).\n\nThe reviewer comments that the connection of the current work with learning representations is a little tenuous. Although the main focus of the paper is the theoretical analysis of Markov potential games (MPGs), we believe that this connection is indeed stronger than it might seem. Our key idea is to rely on parametric policies, whose applicability to real problems depends on the expressiveness of the parametric family. If the optimal policy is a complicated mapping from states to actions, we require sophisticated parametric approximations that are able to approximate such a mapping. Parametric approximations that depend on hand-coded features usually require expert domain knowledge, can be time-consuming (especially for multiagent problems), and have to be re-designed for every problem at hand; while learned features that can express complex closed-loop policies are able to alleviate these problems and are hence crucial to the usefulness of our method. In summary (as in our response to AnonReviewer2), we see the current setting as a relevant application of learned representations that extends previous multiagent applications, which only studied cooperative games, or assumed discrete state-action sets, and never with coupled constraints. In addition, we remark that our analysis allows us to reformulate the game in a centralized manner, which could inspire the extension of advanced DRL techniques like [2, 3], which were previously only valid for cooperative games.\n\n[1] F. Facchinei and C. Kanzow. 
\"Generalized Nash equilibrium problems.\" 4OR: A Quarterly Journal of Operations Research 5.3 (2007): 173-210.\n\n[2] J. Foerster et al. \"Counterfactual Multi-Agent Policy Gradients.\" arXiv preprint arXiv:1705.08926 (2017).\n\n[3] P. Sunehag et al. \"Value-Decomposition Networks For Cooperative Multi-Agent Learning.\" arXiv preprint arXiv:1706.05296 (2017)." ]
[ 7, 6, 6, -1, -1, -1, -1 ]
[ 2, 3, 1, -1, -1, -1, -1 ]
[ "iclr_2018_rJm7VfZA-", "iclr_2018_rJm7VfZA-", "iclr_2018_rJm7VfZA-", "iclr_2018_rJm7VfZA-", "BkVvEP5gM", "BJZ6A-clG", "BJLGKD8Mz" ]
iclr_2018_SyProzZAW
The power of deeper networks for expressing natural functions
It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^{1/k}, suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n.
accepted-poster-papers
All the reviewers agree on the significance of the topic of understanding the expressivity of deep networks. This paper makes good progress in analyzing the ability of deep networks to fit multivariate polynomials. They show an exponential depth advantage for general sparse polynomials. I am very surprised that the paper misses the original contribution of Andrew Barron. He analyzes the size of the shallow neural networks needed to fit a wide class of functions including polynomials. The deep learning community likes to think that everything has been invented in the current decade. @article{barron1994approximation, title={Approximation and estimation bounds for artificial neural networks}, author={Barron, Andrew R}, journal={Machine Learning}, volume={14}, number={1}, pages={115--133}, year={1994}, publisher={Springer} }
train
[ "HJqsNbFez", "S1z1Zf9xM", "B1B65zqef", "HkvXDF2XM", "HyaCBK3XG", "B1ebHYnXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Experimental results have shown that deep networks (many hidden layers) can approximate more complicated functions with less neurons compared to shallow (single hidden layer) networks. \nThis paper gives an explicit proof when the function in question is a sparse polynomial, ie: a polynomial in n variables, which equals a sum J of monomials of degree at most c. \nIn this setup, Theorem 4.3 says that a shallow network need at least ~ (1 + c/n)^n many neurons, while the optimal deep network (whose depth is optimized to approximate this particular input polynomial) needs at most ~ J*n, that is, linear in the number of terms and the number of variables. The paper also has bounds for neural networks of a specified depth k (Theorem 5.1), and the authors conjecture this bound to be tight (Conjecture 5.2). \n\nThis is an interesting result, and is an improvement over Lin 2017 (where a similar bound is presented for monomial approximation). \nOverall, I like the paper.\n\nPros: new and interesting result, theoretically sound. \nCons: nothing major.\nComments and clarifications:\n* What about the ability of a single neural network to approximate a class of functions (instead of a single p), where the topology is fixed but the network weights are allowed to vary? Could you comment on this problem?\n* Is the assumption that \\sigma has Taylor expansion to order d tight? (That is, are there counter examples for relaxations of this assumption?) \n* As noted, the assumptions of your theorems 4.1-4.3 do not apply to ReLUs, but ReLUs network perform well in practice. Could you provide some further comments on this?\n\n", "The paper investigates the representation of polynomials by neural networks up to a certain degree and implied uniform approximations. It shows exponential gaps between the width of shallow and deep networks required for approximating a given sparse polynomial. \n\nBy focusing on polynomials, the paper is able to use of a variety of tools (e.g. linear algebra) to investigate the representation question. Results such as Proposition 3.3 relate the representation of a polynomial up to a certain degree, to the approximation question. Here it would be good to be more specific about the domain, however, as approximating the low order terms certainly does not guarantee a global uniform approximation. \n\nTheorem 3.4 makes an interesting claim, that a finite network size is sufficient to achieve the best possible approximation of a polynomial (the proof building on previous results, e.g. by Lin et al that I did not verify). The idea being to construct a superposition of Taylor approximations of the individual monomials. Here it would be good to be more specific about the domain. Also, in the discussion of Taylor series, it would be good to mention the point around which the series is developed, e.g. the origin. \n\nThe paper mentions that ``the theorem is false for rectified linear units (ReLUs), which are piecewise linear and do not admit a Taylor series''. However, a ReLU can also be approximated by a smooth function and a Taylor series. \n\nTheorem 4.1 seems to be implied by Theorem 4.2. Similarly, parts of Section 4.2 seem to follow directly from the previous discussion. \n\nIn page 1 ```existence proofs' without explicit constructions'' This is not true, with numerous papers providing explicit constructions of functions that are representable by neural networks with specific types of activation functions. 
\n\n", "Summary and significance: The authors prove that for expressing simple multivariate monomials over n variables, networks of depth 1 require exp(n) many neurons, whereas networks of depth n can represent these monomials using only O(n) neurons. \nThe paper provides a simple and clear explanation for the important problem of theoretically explaining the power of deep networks, and quantifying the improvement provided by depth.\n\n+ves:\nExplaining the power of depth in NNs is fundamental to an understanding of deep learning. The paper is very easy to follow. and the proofs are clearly written. The theorems provide exponential gaps for very simple polynomial functions.\n\n-ves:\n1. My main concern with the paper is the novelty of the contribution to the techniques. The results in the paper are more general than that of Lin et al., but the proofs are basically the same, and it's difficult to see the contribution of this paper in terms of the contributing fundamentally new ideas. \n2. The second concern is that the results apply only to non-linear activation functions with sufficiently many non-zero derivatives (same requirements as for the results of Lin et al.).\n3. Finally, in prop 3.3, reducing from uniform approximations to Taylor approximations, the inequality |E(δx)| <= δ^(d+1) |N(x) - p(x)| does not follow from the definition of a Taylor approximation.\n\nDespite these criticisms, I contend that the significance of the problem, and the clean and understandable results in the paper make it a decent paper for ICLR.", "Thank you for this thoughtful feedback. To respond to the particular comments raised:\n\n- This is a very interesting question. In this work, we have supposed that connections between layers of a network are dense. In this case, the topology is given simply by the number of neurons in each layer, and this architecture is relatively versatile. Architectures of the form described in the proof of Thm. 5.1 (where the sizes of the hidden layers follow a decreasing geometric progression) should be especially flexible, able to learn a wide range of monomials and sums of monomials. Intuitively, this network architecture learns well because the initial large hidden layers capture many lower order correlations between input variables, which are then used to calculate higher-order correlations deeper within the network.\n\n- The conditions on the activation function appear to be at least largely tight. As we mention in the text, Thm. 3.4 fails for ReLU activation (where the Taylor series is not even defined), implying that all subsequent theorems also fail for ReLUs. More interestingly, it is possible to multiply d inputs with (slightly) fewer than 2^d neurons if the constant term in the Taylor series for the activation function is zero. We had previously proven that a less elegant exponential bound still holds as long as the dth Taylor coefficient itself is nonzero (without any assumptions on the other coefficients), and we have included this in our revision.\n\n- In practice, we are rarely concerned with uniform approximation for epsilon truly arbitrarily small. ReLUs can be (imperfectly) approximated by Taylor-approximable functions, and the behavior diverges as the desired epsilon decreases. 
In running our experiments, we observed similar behavior with ReLUs as with Taylor-approximable activation functions, even though the full power of our theoretical results is indeed not applicable.", "We are very grateful for this helpful feedback, and have responded below to individual issues raised.\n\nThank you for the suggestion that we make clearer the domain under which Prop. 3.3 and Thm. 3.4 hold. We have made explicit in our revision that these results hold for any (fixed) domain (-R, R)^n, and that Taylor series are constructed around the origin.\n\nWhile it is indeed true that a ReLU can be approximated by a smooth function with a well-defined Taylor series, any particular choice of such a function would fail our strict requirement of uniform approximation for arbitrarily small \\epsilon. Since we have assumed that the choice of nonlinear function \\sigma is fixed, we cannot use progressively better approximations to ReLUs. Another way of thinking about this is to note that a neural network with ReLUs is ultimately piecewise linear. For a fixed budget of neurons, the number of linear pieces is bounded. Given a fixed number of linear pieces and a general polynomial to approximate, the approximation cannot be better than some fixed \\epsilon (depending on the polynomial), whereas we would like \\epsilon to be arbitrarily small.\n\nTheorems 4.1 and 4.2 are in fact independent, with neither implying the other. This is because it is possible for a polynomial to admit a compact uniform approximation without admitting a compact Taylor approximation. We have made this clearer in the text.\n\nWe have rephrased our discussion of prior literature to emphasize that “existence proofs” are a feature only of *some* of the prior work. There are indeed excellent papers that provide explicit constructions.", "We are very grateful for this close reading and constructive comments. Detailed responses follow:\n\n1. We believe that in addition to presenting more general results than in the literature, we also contribute techniques that are significantly stronger than those in Lin et al. In particular, tighter proof techniques are required in order to prove lower bounds on the number of neurons required for a uniform approximation. One of the more interesting methodological insights resulting from our approach is that even though uniform approximation does not imply Taylor approximation, we can still use the lack of a Taylor approximation as a significant step towards proving the lack of a uniform approximation. To the best of our knowledge, this is the first time that Taylor approximation and uniform approximation of neural networks have been rigorously linked.\n\n2. The assumptions on the activation function can be weakened somewhat at the expense of less elegant formulations. We had previously proven that an exponential bound still holds as long as the dth Taylor coefficient itself is nonzero (without any assumptions on the other coefficients), and we have included this statement and proof in our revision. As we mention in the text, Thm. 3.4 fails for ReLU activation (where the Taylor series is not even defined), implying that all subsequent theorems also fail for ReLUs. In practice, however, we are rarely concerned with uniform approximation for epsilon truly arbitrarily small. ReLUs can be (imperfectly) approximated by Taylor-approximable functions, and the behavior diverges as the desired epsilon decreases. 
In running our experiments, we observed similar behavior with ReLUs as with Taylor-approximable activation functions, even though the full power of our theoretical results is indeed not applicable.\n\n3. In our revision, we have rewritten the proof of Prop. 3.3 to encompass all cases. Thank you for calling this to our attention." ]
[ 7, 6, 6, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SyProzZAW", "iclr_2018_SyProzZAW", "iclr_2018_SyProzZAW", "HJqsNbFez", "S1z1Zf9xM", "B1B65zqef" ]
iclr_2018_B1QgVti6Z
Empirical Risk Landscape Analysis for Understanding Deep Neural Networks
This work aims to provide a comprehensive landscape analysis of the empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an l-layer linear neural network consisting of d_i neurons in the i-th layer, we prove the gradient of its empirical risk uniformly converges to that of its population risk, at the rate of O(r^{2l} l \sqrt{max_i d_i s log(d/l)/n}). Here d is the total weight dimension, s is the number of nonzero entries of all the weights and the magnitude of the weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide a convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive the stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions. We prove similar results for the convergence behavior of their empirical risk gradients, non-degenerate stationary points as well as the empirical risk itself. To the best of our knowledge, this work is the first one theoretically characterizing the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding of how the neural network depth l, the layer width d_i, the network size d, the weight sparsity s and the parameter magnitude r determine the neural network landscape.
accepted-poster-papers
Based on the positive reviews, I recommend acceptance. The paper analyzes when the empirical risk is close to the population version, when empirical saddle points are close to the population version, and when empirical gradients are close to the population version.
train
[ "H1Wo7pKgM", "BJGc-k9xG", "r13F3TRbM", "S1T8deYMz", "S1mZuxYGf", "S14tDgFGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "This paper studies empirical risk in deep neural networks. Results are provided in Section 4 for linear networks and in Section 5 for nonlinear networks.\nResults for deep linear neural networks are puzzling. Whatever the number of layers, a deep linear NN is simply a matrix multiplication and minimizing the MSE is simply a linear regression. So results in Section 4 are just results for linear regression and I do not understand why the number of layers come into play? \nAlso this is never explicitly mentioned in the paper, I guess the authors make an assumption that the samples (x_i,y_i) are drawn i.i.d. from a given distribution D. In such a case, I am sure results on the population risk minimization can be found for linear regression and should be compare to results in Section 4.\n\n", "This paper provides the analysis of empirical risk landscape for GENERAL deep neural networks (DNNs). Assumptions are comparable to existing results for OVERSIMPLIFED shallow neural networks. The main results analyzed: 1) Correspondence of non-degenerate stationary points between empirical risk and the population counterparts. 2) Uniform convergence of the empirical risk to population risk. 3) Generalization bound based on stability. The theory is first developed for linear DNNs and then generalized to nonlinear DNNs with sigmoid activations.\n\nHere are two detailed comments:\n\n1) For deep linear networks with squared loss, Kawaguchi 2016 has shown that the global optima are the only non-degerenate stationary points. Thus, the obtained non-degerenate stationary deep linear network should be equivalent to the linear regression model Y=XW. Should the risk bound only depends on the dimensions of the matrix W?\n\n2) The comparison with Bartlett & Maass’s (BM) work is a bit unfair, because their result holds for polynomial activations while this paper handles linear activations. Thus, the authors need to refine BM's result for comparison.", "Overall, this work seems like a reasonable attempt to answer the question of how the empirical loss landscape relates to the true population loss landscape. The analysis answers:\n\n1) When empirical gradients are close to true gradients\n2) When empirical isolated saddle points are close to true isolated saddle points\n3) When the empirical risk is close to the true risk.\n\nThe answers are all of the form that if the number of training examples exceeds a quantity that grows with the number of layers, width and the exponential of the norm of the weights with respect to depth, then empirical quantities will be close to true quantities. I have not verified the proofs in this paper (given short notice to review) but the scaling laws in the upper bounds found seem reasonably correct. \n\nAnother reviewer's worry about why depth plays a role in the convergence of empirical to true values in deep linear networks is a reasonable worry, but I suspect that depth will necessarily play a role even in deep linear nets because the backpropagation of gradients in linear nets can still lead to exponential propagation of errors between empirical and true quantities due to finite training data. Moreover the loss surface of deep linear networks depends on depth even though the expressive capacity does not. An analysis of dynamics on this loss surface was presented in Saxe et. al. ICLR 2014 which could be cited to address that reviewer's concern. However, the reviewer's suggestion that the results be compared to what is known more exactly for simple linear regression is a nice one. 
\n\nOverall, I believe this paper is a nice contribution to the deep learning theory literature. However, it would be even better to help the reader with more intuitive statements about the implications of their results for practice, and the gap between their upper bounds and practice, especially given the intense interest in the generalization error problem. Because their upper bounds look similar to those based on Rademacher complexity or VC dimension (although they claim theirs are a little tighter) - they should put numbers into their upper bounds taken from trained neural networks, and see what the numerical evaluation of their upper bounds turns out to be in situations of practical interest where deep networks show good generalization performance despite having significantly less training data than the number of parameters. I suspect their upper bounds will be loose, but still - it would be an excellent contribution to the literature to quantitatively compare theory and practice with bounds that are claimed to be slightly tighter than previous bounds. Even if they are loose - identifying the degree of looseness could inspire interesting future work. \n", "(1)\tAnother reviewer's suggestion that the results be compared to what is known more exactly for simple linear regression is a nice one. \n\nReply: Thanks for the suggestion. Please refer to the reply to the second question of Reviewer 1 for our explanations. \n\n(2)\tIt would be better to help the reader with more intuitive statements about the implications of their results for practice, and the gap between their upper bounds and practice, especially given the intense interest in the generalization error problem. Because their upper bounds look similar to those based on Rademacher complexity or VC dimension (although they claim theirs are a little tighter) - they should put numbers into their upper bounds taken from trained neural networks, and see what the numerical evaluation of their upper bounds turns out to be in situations of practical interest where deep networks show good generalization performance despite having significantly less training data than the number of parameters. I suspect their upper bounds will be loose, but still - it would be an excellent contribution to the literature to quantitatively compare theory and practice with bounds that are claimed to be slightly tighter than previous bounds. Even if they are loose - identifying the degree of looseness could inspire interesting future work. \n\nReply: Thanks for your suggestion. Similar to other works which focus on the analysis of generalization error (Bartlett & Maass 2003; Neyshabur et al., COLT 2015), stability and robustness (Xu & Mannor, Machine Learning 2012), and the Rademacher complexity of networks (Sun et al., AAAI 2016; Xie et al., arXiv 2015), the derived theoretical bounds are relatively looser than the computed empirical bound estimated on a specific dataset (Kawaguchi et al., arXiv 2017). This is because the property of the computed solution is unknown when employing general optimization algorithms. Therefore, in the analysis, we have to consider the worst case (worst solution or worst function in a hypothesis class) to bound the error. In particular, we need to bound the generalization error |E_{S~D, A} (Jn(w^n)-J(w^n))| where w^n is a computed optimization solution by an algorithm A. In this case, the distance between the expected empirical loss E_{S~D} Jn(w^n) and the expected population loss E_{S~D} J(w^n) is small. 
In contrast, due to the unknown property of the solution w^n, which relies on the concrete optimization algorithm, testing dataset, etc., we have to use the worst case (namely, consider the whole solution space) to bound this error. As mentioned in the manuscript, the generalization error and stability error are directly derived from Theorems 3 and 6. So the comparison between the empirical bound and the derived bound is not meaningful. This is also why other works do not present the comparison either. In the future we will focus on a specific algorithm and utilize the property of its computed optimization solution to bound the generalization error, which may give a more practical error bound.\n\nIt is worth mentioning that the results in Theorems 3 and 6 are tighter than or comparable to the similar results in existing works (e.g. Bartlett & Maass 2003; Neyshabur et al., COLT 2015). Also, the uniform convergence of the gradient and the non-degenerate stationary points of the empirical risk are new results, which benefit the theoretical understanding of how the neural network parameters determine the neural network landscape. \n", "(1)\tFor deep linear networks with squared loss, Kawaguchi 2016 has shown that the global optima are the only non-degenerate stationary points. Thus, a deep linear network at an obtained non-degenerate stationary point should be equivalent to the linear regression model Y=XW. Should the risk bound then only depend on the dimensions of the matrix W?\n\nReply: Thanks for your comments. Some existing works (e.g., Loh and Wainwright, JMLR 2015; Negahban et al. NIPS 2011) proved that the risk bound of linear regression only depends on the dimension of the regression matrix W. However, as explained in the reply to the second question of Reviewer 1, the multi-layer linear network is not equivalent to a linear regression due to an additional rank constraint on the regression parameter and the multi-factor form in our analysis: let W’=W_1W_2…W_l and we have Y=W’X subject to rank(W’)<=min{rank(W_1),…,rank(W_l)}. Thus, relevant results on linear regression are not applicable here. \n\nOur risk bound for multi-layer linear networks involves the dimensions of each weight matrix. This is because in the proof we consider the essential multi-layer architecture of the deep linear network. In order to derive the uniform convergence sup_{W} |Jn(W)-J(W)| we need to consider the solution space of W=[W_1,W_2,...,W_l] instead of the solution space of W=W_1*W_2*…*W_l in the linear regression. We explained the reason why we do not transform the deep linear network into a linear regression in the reply to the second question of Reviewer 1. Please refer to it. The main reason is that we aim to build consistent analysis techniques for both deep linear and ReLU networks. This work is mainly devoted to analyzing the multi-layer architecture of deep linear networks, which can pave the way for analyzing deep ReLU networks. Besides, the obtained results are more consistent with the results for deep nonlinear networks (see Sec. 5). We also explained this in the updated version. Please refer to the last paragraph of the reply to the second question of Reviewer 1.\n\n(2)\tThe comparison with Bartlett & Maass’s (BM) work is a bit unfair, because their result holds for polynomial activations while this paper handles linear activations. Thus, the authors need to refine BM's result for comparison.\n\nReply: Thanks for your suggestion. 
Since the linear activation function is a special case of polynomial activations, we compared our results (Theorem 3) on the deep linear networks with Bartlett & Maass's result. Actually, our risk bound (Theorem 6) on the nonlinear networks is also tighter than theirs. Our bound is sup_w |\\hat{J}_n(w) - J(w)| <= O(\\sqrt{(l-1)[s \\log(dn/l) + \\log(1/\\varepsilon)]/n}), while in Bartlett & Maass's work, their result is |\\hat{J}_n(w) - \\inf_f J(f)| <= O(\\sqrt{[\\gamma \\log^2(n) + \\log(1/\\varepsilon)]/n}) where \\gamma is at the order of O(ld \\log(d) + l^2 d). In the updated version, we added a comparison between our results in Theorem 6 on the deep nonlinear networks and Bartlett & Maass's results at the end of the third paragraph in Sec. 5.2.\n\n“Similar to linear networks, our risk convergence rate is also tighter than the convergence rate on the networks with polynomial activation functions and one-dimensional output in (Bartlett & Maass, 2003), since ours is at the order of O(\\sqrt{(l-1)(s \\log(dn/l)+\\log(1/\\varepsilon))/n}), while the latter one is O(\\sqrt{(\\gamma \\log^2(n)+\\log(1/\\varepsilon))/n}) where \\gamma is at the order of O(ld \\log(d)+l^2 d) (Bartlett & Maass, 2003).”", "(1) Results in Sec. 4 are for linear regression (LR), so why does the number of layers come into play?\n\nReply: The issue w.r.t. the number of layers comes from the assumption that each weight matrix W_i is individually bounded, i.e., ||W_i||_F<=r, which implies that the product of the matrices is bounded by ||W_1*…*W_l||_F<=r^l. This bound is commonly used in our proof and unavoidable, since both our proof and the analysis of an LR model need such a bound of ||W_1*…*W_l||_F on the regression matrix. \n\nBesides, our analysis does not transform the multi-layer linear network into an LR model (explained in the reply to the 2nd question) in order to investigate the effect of network parameters on model performance. Thus we need to consider the number of layers. E.g., to bound the risk sup_w |Jn(w)-J(w)|, we need the Lipschitz constant of Jn(w), which is the upper bound on the norm of the gradient of Jn(w) and is at the order of O(l r^{l-1}). \n\n(2) The assumption that samples are drawn i.i.d. from distribution D is not mentioned. In such a case, results on the population risk minimization can be found for linear regression (LR) and should be compared.\n\nReply: As explained when introducing the problem Eqn. (1), the data are drawn i.i.d. from D. \n\nWe first explain the difference with the results of linear regression. First, the multi-layer linear network is not equivalent to an LR due to an additional rank constraint on the regression parameter. Let W'=W_1*W_2*…*W_l. We have Y=W'*X s.t. rank(W')<=min{rank(W_1),…,rank(W_l)}. Thus relevant results on LR are not applicable. \nBesides, our results give a convergence rate similar to the one for LR models (at least in n). \n(1) Our convergence rate for non-degenerate stationary points matches that of LR models, which are both O(1/sqrt(n)). E.g., Loh and Wainwright (JMLR 2015) proved that for sparse LR, the distance between the optimization solution and the optimum shrinks at O(sqrt(h s \\log(d)/n)). Here s and d denote the number of nonzero entries and the dimension of the parameter, respectively. h denotes the upper bound on the magnitude of the gradient of Jn(w) and is at the order of O(r^l) as explained above. Negahban et al. (NIPS 2011) provided similar results. These results accord with ours in n. 
The difference lies in d and s, as we consider the solution space W=[W_1,…,W_l] in our results rather than the solution space W=W_1*…*W_l of the linear regression (see the reply to the 1st question of Reviewer 2). But the results from Loh et al. and Negahban et al. require the restricted strong convexity (RSC) condition, which is hard to verify, and require the noise in the data to be i.i.d. standard Gaussian. In contrast, our analysis does not use the RSC condition and only assumes that the input datum x is sub-Gaussian and has bounded magnitude. \n(2) As for the convergence rate of the empirical risk, to the best of our knowledge there are no similar results. Applying the Rademacher complexity (RC) based approach (commonly used to analyze the uniform convergence rate; it usually gives tighter bounds than VC dimension) gives the RC of linear regression at the order of O(h/sqrt(n)) (Ofer Dekel's lecture notes, CSE522, 2011) and hence the convergence rate sup_f |Jn(W') - J(W')| <= O(h/sqrt(n)). Here f denotes the linear hypothesis class, and h denotes the upper bound on the F-norm of W' and is at the order of O(r^l) when each \\|W_i\\|_F is bounded by r as in our analysis. This bound accords with our provided convergence rate O(1/sqrt(n)). The extra parameters s and d are involved in our results as we consider the whole parameter space rather than the function hypothesis f, which gives more transparent explanations of the roles of various network parameters. \n(3) As for the convergence rate of the gradient, we did not find any related works or existing directly applicable techniques. We are the first to provide such results. \n\nThe most important reason why we do not transform to LR is that analyzing the LR model cannot provide useful results for further analyzing deep ReLU networks, which are the focus of this work and the most popular models in practice. So our proof considers the multi-layer architecture. Specifically,\n(1) Our analysis techniques may benefit the analysis of other properties of deep neural networks. Meanwhile, we cannot transform a deep nonlinear network (e.g. a deep ReLU network) into a linear regression model, since each layer involves the ReLU function. So resorting to the analysis of the linear regression model cannot really benefit the analysis of deep nonlinear networks, which is not our aim.\n(2) Avoiding transforming the deep linear networks to linear regression puts the analysis of deep linear networks in a framework similar to that of deep ReLU networks. Thus, we derive consistent and neat results for both deep linear and ReLU networks, which both include parameters about the multi-layer architecture (like the depth l) and the layer-wise weight matrices (like dimensions and magnitude bounds).\n\nIn the updated version, we have added the explanations at the end of Sec. 4.2." ]
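To make the reviewer's "put numbers into the bounds" suggestion concrete, here is a minimal sketch that plugs illustrative network sizes into the rate term of the risk bound discussed in the replies above. All parameter values are assumptions chosen only for scale (roughly VGG-like), and the O(.) in the bound hides an unknown constant, so the output gauges order of magnitude only.

```python
import math

# Illustrative only: evaluates the rate term
# sqrt((l-1)(s*log(d*n/l) + log(1/eps)) / n) from the replies above.
# The hidden constant in the O(.) is unknown, so this is not the bound itself.
def risk_rate(l, d, s, n, eps=0.01):
    """Rate term of the claimed risk bound, up to the hidden constant."""
    return math.sqrt((l - 1) * (s * math.log(d * n / l) + math.log(1 / eps)) / n)

# Assumed values: l layers, input dimension d, s nonzero weights, n samples.
for n in (10**4, 10**6, 10**8):
    print(f"n = {n:.0e}: rate ~ {risk_rate(l=16, d=3 * 224 * 224, s=10**8, n=n):.2f}")
```

For realistic parameter counts the rate stays far above 1 even at n = 10^8, which is consistent with the reviewer's expectation that the bounds will be numerically loose in the overparameterized regime.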
[ 3, 7, 7, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1 ]
[ "iclr_2018_B1QgVti6Z", "iclr_2018_B1QgVti6Z", "iclr_2018_B1QgVti6Z", "r13F3TRbM", "BJGc-k9xG", "H1Wo7pKgM" ]
iclr_2018_Hk9Xc_lR-
On the Discrimination-Generalization Tradeoff in GANs
Generative adversarial training can be generally understood as minimizing a certain moment-matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and the true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation of the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds light on understanding the practical performance of GANs.
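For readers skimming this record: the "moment matching loss" and "neural distance" in the abstract refer to an integral probability metric (IPM), which the reviews and rebuttals below define via a supremum over a discriminator set F. Written out in standard notation (consistent with the d_F used in the replies below):

```latex
% IPM form of the GAN objective; the "neural distance" is the special case
% where F is a set of neural networks.
\[
  d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}}
  \Big( \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)] \Big).
\]
% Discrimination then means d_F(mu, nu) = 0 iff mu = nu, which the paper
% guarantees whenever span(F) is dense in the bounded continuous functions.
```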
accepted-poster-papers
I recommend acceptance. The two positive reviews point out the theoretical contributions. The authors have responded extensively to the negative review, and I see no serious flaw of the kind it claims.
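In support of that conclusion, the inequality the negative review calls a "bug" (and which the authors defend in their reply below) can be sanity-checked numerically. The following toy script uses a made-up finite discriminator set and random discrete distributions, none of which comes from the paper; with F closed under negation, d_F is a symmetric pseudo-metric and the triangle inequality the authors prove should hold.

```python
import random

# Toy check of |d_F(mu, nu) - d_F(mu_m, nu)| <= d_F(mu, mu_m), the inequality
# defended in the author reply below. F is a small set of functions on
# {0, 1, 2}; distributions are probability vectors on those three points.
random.seed(0)
points = range(3)
F = [[random.uniform(-1, 1) for _ in points] for _ in range(5)]
F += [[-v for v in f] for f in F]  # closing F under negation keeps d_F symmetric

def d_F(p, q):
    # sup over f in F of E_p[f] - E_q[f]
    return max(sum(f[x] * (p[x] - q[x]) for x in points) for f in F)

def rand_dist():
    w = [random.random() for _ in points]
    s = sum(w)
    return [v / s for v in w]

for _ in range(1000):
    mu, mu_m, nu = rand_dist(), rand_dist(), rand_dist()
    assert abs(d_F(mu, nu) - d_F(mu_m, nu)) <= d_F(mu, mu_m) + 1e-12
print("triangle inequality held in all 1000 random trials")
```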
train
[ "HJjLXT4gM", "ByyV3Atez", "SyRq3ukMf", "HySPX0V-f", "BJJee_aXz", "Skfk1PT7z", "rkqxkP6XG", "Hyz1hBp7M", "rkwy_hVbG", "S1FLYnVbf", "S1eOo0sez", "r16zP8jlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "== Paper Summary ==\nThe paper addresses the problem of balancing the capacities of the generator and discriminator classes in generative adversarial nets (GANs) from a purely theoretical (functional-analytic and statistical-learning) perspective. In my point of view, the main *novel* contributions are: \n(a) Conditions on function classes guaranteeing that the induced IPMs are metrics and not pseudo-metrics (Theorem 2.2). I especially liked the argument explaining why ReLU activations could work better in the discriminator than tanh.\n(b) Proving that convergence in the neural distance implies weak convergence (Theorem 2.5)\n(c) Listing particular cases when the neural distance upper bounds the so-called bounded Lipschitz distance (also known as the Fortet-Mourier distance) and the symmetrized KL-divergence (Corollary 2.8 and Proposition 2.9).\n\nThe paper is well written (although with *many* typos), the topic is clearly motivated and certainly interesting. The related literature is mainly covered well, apart from several important points listed below.\n\n== Major comments ==\nIn my opinion, the authors are slightly overselling the results. Next I briefly explain why:\n\n(1) First, point (a) above is indeed novel, but not groundbreaking. A very similar result previously appeared in [1, Theorem 5]. The authors may argue that the referenced result deals only with MMDs, that is, IPMs specialized to function classes belonging to Reproducing Kernel Hilbert Spaces. However, the technique used to prove the \"sufficient\" part of the statement is literally *identical*. \n\n(2) As discussed in the paragraph right after Theorem 2.5, Theorem 10 of [2] presents the same result, which is on one hand stronger than Theorem 2.5 of the current paper because it allows for more general divergences than the neural distance, and on the other hand weaker because in [2] the authors assume a compact input space. Overall, Theorem 2.5 of course makes a novel contribution, because the compactness assumption is not required; however, conceptually it is not that novel.\n\n(3) In Section 3 the authors discuss the generalization properties of the neural network distance. One of the main messages (emphasized several times throughout the paper) is that, surprisingly, the capacity of the generator class does not enter the generalization error bound. However, this is not surprising at all, as it is a consequence of the way in which the authors define generalization. In short, the capacity of the discriminators (D) naturally enters the picture, because the generalization error accounts for the mismatch between the true data distribution mu (used for testing) and its empirical version hat{mu} (used for training). However, the authors assume the model distribution (nu) is the same during both testing and training. In practice this is not true, and during testing GANs use the empirical version of nu. If the authors were to account for this mismatch, the capacity of G would certainly pop up as well.\n\n(4) The error bounds of Section 3 are based on very standard machinery (empirical processes, Rademacher complexity) and to the best of my knowledge do not lead to any new interesting conclusions in terms of GANs.\n\n(5) Finally, I would suggest that the authors remove Section 4. I suggest this mainly because the authors admit in Remark 4.1 that the main result of this section (Theorem 4.1) is a corollary of a stronger result appearing in [2]. Also, the main part of the paper has 13 pages, while the recommended length is 8. 
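To spell out the decomposition implicit in point (3) above, here is a sketch in my own notation (not copied from the paper); the final Rademacher step is the standard bound for a bounded function class, stated here only schematically:

```latex
% Training sees the empirical hat{mu}_m while nu is treated as exact on both
% sides, so only the mu-vs-hat{mu}_m mismatch, controlled by the capacity of
% the discriminator class F, survives:
\[
  \big| d_{\mathcal{F}}(\mu, \nu) - d_{\mathcal{F}}(\hat{\mu}_m, \nu) \big|
  \;\le\; d_{\mathcal{F}}(\mu, \hat{\mu}_m)
  \;\le\; 2\,\mathcal{R}_m(\mathcal{F}) + O\!\big(m^{-1/2}\big).
\]
% If testing also used an empirical hat{nu}, an extra d_F(nu, hat{nu}) term
% would appear and the capacity of G would enter, as argued in point (3).
```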
\n\n== Minor comments ==\n\n(1) There are *MANY* typos in the paper. Only a few of them are listed below.\n(2) First paragraph of page 18, proof of Theorem 2.2. This part is of course well known and the authors may just cite Lemma 9.3.2 of Dudley's \"Real analysis and probability\" for instance.\n(3) Theorem 2.5: \"Let ...\"\n(4) Page 7, \"...we may BE interested...\"\n(5) Corollary 3.2. I doubt that in practice anyone uses a discriminator with one hidden unit. The authors may want to consider using the bound on the Rademacher complexity of DNNs recently derived in [3]. \n(6) Page 8, \"..is neural networK\"\n(7) Page 9: \"...interested IN evaluating...\"\n(8) Page 10. All most ---> almost.\n\n[1] Gretton et al., A Kernel Two-Sample Test, JMLR 2012.\n[2] Liu et al., Approximation and Convergence Properties of Generative Adversarial Learning, 2017\n[3] Bartlett et al., Spectrally-normalized margin bounds for neural networks, 2017", "In more detail, the analysis of the paper is as follows. Firstly, it primarily focuses on GAN objective functions which are \"integral probability metrics (IPMs)\"; one way to define these is by way of similarity to the W-GAN, namely IPMs replace the 1-Lipschitz functions in W-GAN with a generic set of functions F. The paper overall avoids computational issues and treats the suprema as though exactly solved by SGD or related heuristics (the results of the paper simply state the supremum, but some of the prose seems to touch on this issue).\n\nThe key arguments of the paper are as follows.\n\n1. It argues that the discriminator set should not simply be large; it should be dense in all bounded continuous functions. As a consequence of this, the IPM is 0 iff the distributions are equal (in the weak sense). Due to this assertion, it says that it suffices to use two-layer neural networks as the discriminator set (as a consequence of the \"universal approximation\" results well-known in the neural network literature).\n\n2. It argues the discriminator set should be small in order to mitigate small-sample effects. (Together, points 1 and 2 mimic a standard bias-variance tradeoff in statistics.) For this step, the paper relies upon standard Rademacher results plus a little bit of algebraic glue. Curiously, the paper chooses to argue (and takes as a key tenet, indeed in the abstract) that the size of the generator set is irrelevant for this; only the size of the discriminator matters.\n\nUnfortunately, I find significant problems with the paper, in order from most severe to least severe.\n\nA. The calculation ruling out the impact of the generator in generalization calculations in 2 above is flawed. Before pointing out the concrete bug, I note that this assertion runs completely counter to intuition, and thus should be made with more explanation (as opposed to the fortunate magic it is presented as). Moreover, I'll say that if the authors choose to \"fix\" this bug by adding a generator generalization term, the bound is still a remedial application of Rademacher complexity, so I'm not exactly blown away. Anyway, the bug is as follows. The equation which drops the role of the generator in the generalization calculation is equation (10). The proof of this inequality is at the start of appendix E. Looking at the derivation in that appendix, everything is correct up to the second-to-last display, the one with a supremum over nu in G. 
First of all, this right hand side should set off alarm bells; e.g., if we make the generator class big, we can make this right hand side essentially as big as the IPM allows even when mu = mu_m. Now the bug itself appears when going to the next display: if the definition of d_F is expanded, one obtains two suprema, each one over _its own_ optimization variable (in this case the variables are discriminator functions). When going to the next equation, the authors accidentally made the two suprema share the same variable, invoking a fortuitous but incorrect cancellation. As stated a few sentences back, one can construct trivial counterexamples to these inequalities, for instance by making mu and mu_m arbitrarily close (even exactly equal if you wish) and then making nu arbitrarily far away and the discriminator set large enough to identify this.\n\nB. The assertions in 1, regarding the sizes of discriminator sets needed to achieve the goal of the IPM being 0 iff the distributions are equal (in the weak sense), are nothing more than immediate corollaries of approximation results well-known for decades in the neural network literature. It is thus hard to consider this a serious contribution.\n\nC. I will add on a non-technical note that the paper's assertion on what a good IPM \"should be\" is arguably misled. There is not only a meaning to specific function classes (as with Lip_1 in Wasserstein_1) beyond simply \"many functions\", but moreover there is an interplay between the size of the generator set and the size of the discriminator set. If the generator set is simple, then the discriminator set can also get away with being simple (this is discussed in the Arora et al 2017 ICML paper, amongst other places). Perhaps I am the one that is misled, but even so the paper does not appear to give a good justification of its standpoint.\n\nI will conclude with typos and minor remarks. I found the paper to contain a vast number of small errors, to the point that I doubted a single proofread.\n\nAbstract, first line: \"a minimizing\"? General grammar issue in this sentence; this sort of issue occurs throughout the paper.\n\nAbstract, \"this is a mild condition\". Optimizing over a function class which is dense in all bounded measurable functions is not a mild assumption. In the particular case under discussion, the size of the network cannot be bounded (even though it has just two layers, or as the authors say is the span of single neurons).\n\nAbstract, \"...regardless of the size of the generator or hypothesis set\". This really needs explanation in the abstract; it is such a bold claim. For instance, I wrote \"no\" in the margin while reading the abstract the first time.\n\nIntro, first line: its -> their.\n\nIntro, #3 \"energy-based GANs\": 'm' clashes with the sample size.\n\nIntro, bottom of page 1, the sentence with \"irrelenvant\": I can't make any sense of this sentence.\n\nIntro, bottom of page 1, \"is a much smaller discriminator set\": no, the Lip_1 functions are in general incomparable to arbitrary sets of neural nets.\n\nFrom here on I'll comment less on typos.\n\nMiddle of page 2, point (i): this is the only place it is argued/asserted that the discriminator set should contain essentially everything? 
I think this needs a much more serious justification.\n\nSection 1.1: Lebegure -> Lebesgue.\n\nPage 4, vicinity of equation 5: there should really be a mention that none of these universal approximation results give a meaningful bound on the size of the network (the bound given by Barron's work, while nice, is still massive).\n\nStart of section 3. To be clear, while one can argue that the Lipschitz-1 constraint has a regularization effect, the reason it was originally imposed is to match the Kantorovich duality for Wasserstein_1. Moreover I'll say this is another instance of the paper treating the discriminator set as irrelevant other than how close it is to being dense in all bounded measurable functions.", "The authors provide insight into the discriminative and generalization aspects of the discriminator in GANs. They show how to enrich the discriminator set to enhance its discrimination power while reducing the generalization bound. These facts are intuitive, but the authors carry out a careful analysis of them.\n\nThe authors provide a more realistic analysis of discriminators by relaxing the constraint on the discriminator set: it needs only a rich closure of its linear span rather than being rich by itself, which is suitable for neural networks.\n\nThey analyze the weak convergence of probability measures under the neural distance and generalize it to other distances by bounding the neural distance.\n\nFor the generalization, they follow the standard generalization procedure and techniques while carefully adapting them to their setting.\n\nGenerally, I like the way the authors advertise their results, but it might be a bit oversold, especially for readers with a theory background.\n\nThe authors did a good job of clarifying what is new and what is borrowed from previous work, which makes this paper more interesting and easier to read.\n\nSince the current work is theoretical, being over 8 pages is acceptable, but since Section 4 is mostly based on previous contributions, the authors might consider moving it to the appendix.", "Thank you for your review! We can see that you went into the details of the paper, and we are grateful for that. However, you may have some misunderstandings of the main results, as we elaborate below. \n\nA.\tIn fact, our derivations and results on generalization are correct. The flaw you found in Appendix E (the proof of Equation 10) concerns this derivation: \n |d_F(u, v) - d_F(u_m, v)| <= d_F(u, u_m) := \\sup_{f \\in F} ( E_{u}[f] - E_{u_m}[f] ).\nIn fact, this is purely the triangle inequality of the pseudo-metric d_F. It can also be proved from the definition of d_F as follows: \nd_F(u, v) - d_F(u_m, v) \n:= \\sup_{f \\in F} ( E_{u}[f] - E_{v}[f] ) - \\sup_{g \\in F} ( E_{u_m}[g] - E_{v}[g] )\n\\le \\sup_{f \\in F} ( E_{u}[f] - E_{v}[f] - E_{u_m}[f] + E_{v}[f] )\n= \\sup_{f \\in F} ( E_{u}[f] - E_{u_m}[f] ) =: d_F(u, u_m)\nSymmetrically, we can prove d_F(u_m, v) - d_F(u, v) \\le d_F(u_m, u) = d_F(u, u_m) (using the symmetry of the pseudo-metric). Therefore, we have proved |d_F(u, v) - d_F(u_m, v)| <= d_F(u, u_m). \nWe can merge the two variables (i.e., f and g) into one variable (i.e., f) because $- \\sup_{g \\in F} ( E_{u_m}[g] - E_{v}[g] ) \\le - E_{u_m}[f] + E_{v}[f]$ for any $f \\in F$.\n\nB.\tOur main contributions on the discriminative power of GANs are:\n(1)\tWe give the necessary and sufficient condition for discrimination: span(F) is dense in the bounded continuous function space; see Theorem 2.2.\n(2)\tFor any metric space, minimizing GAN's objective function implies weak convergence; see Theorem 2.6. 
\nThese two theorems have nothing to do with the universal approximation property of neural networks. Therefore, it is unfair to say that they are *immediate corollaries of approximation results well-known for decades in the neural network literature*.\nCombining our Theorem 2.2 and the well-known universal approximation property of neural networks, we proved that GANs with neural networks as discriminators are discriminative; see Theorem 2.3 and Corollary 2.4.\n\nC.\tFrom the perspective of *game theory*, there is an interplay between the generator set and the discriminator set for the existence of equilibria; see Arora (2017). From the perspective of *minimizing a loss function* (e.g., neural distance/divergence), the existence of a global minimum is a straightforward result from the continuity of the loss function and the compactness of the hypothesis set. In this paper, we analyze the properties of different loss functions, regardless of the hypothesis set. The analog in supervised learning is analyzing properties of different loss functions (negative log-likelihood, mean square error, regularized or not), regardless of the hypothesis set. From this perspective of *minimizing a loss function*, the goodness of the discriminator set (which defines the loss function) can be studied independently from the generator set. \nIn this paper, except for our results on KL divergence (i.e., Proposition 2.9 and Corollary 3.5), we do not make any assumptions on the generator set. This makes our results widely applicable, regardless of the user's choice of the generator set. Of course, given the loss function, one can study the accuracy and generalization for a specific generator set, as we commented several times in our paper. \n", "We thank you for your insightful comments and your appreciation of our results. \n\nThank you for pointing out that we might oversell the results, especially for readers with a theory background. We avoid this in the revised version in two ways. First, we simplify the proofs of standard results (the proof of the sufficient part of Theorem 2.1 and the proof of Theorem 3.1) and focus more on their implications for GANs. Second, to avoid misunderstanding, we emphasize that only when evaluated with neural distance, generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. We explain that this seemingly-surprising result is reasonable because the evaluation metric (neural distance) is defined by the discriminator set and is “weak” compared to standard metrics like the BL distance and KL divergence.\n\nWe move the neural divergence section to Appendix B, only summarizing our new contributions on the discrimination properties of f-GANs in Remark 2.2. We would like to point out that the following result is our novel contribution: a neural $f$-divergence is discriminative if the linear span of its discriminators \\emph{without the output activation function} is dense in the bounded continuous function space. Both its statement and its proof are nontrivial and cannot be found elsewhere. \n\nFinally, in the revised version, we add one generalization bound for GANs with DNNs as discriminators in Appendix A.1. This bound makes use of the recent result on the Rademacher complexity of DNNs in Bartlett et al. (2017). Compared to our previous result for neural network discriminators (Corollary 3.3), the new bound gets rid of the number of parameters, which can be prohibitively large in practice. 
Moreover, the new bound can be directly applied to the spectrally normalized GANs (Anonymous, 2018), and may explain the empirical success of the spectral normalization technique.\n\n[Bartlett et al., 2017] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pp. 6241–6250, 2017. \n[Anonymous, 2018] Anonymous. Spectral normalization for generative adversarial networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.\n", "Reply to your minor comments:\n(1)\tWe are sorry for this. We carefully corrected typos in the revised version.\n(2)\tAs you suggested, we now directly cite Lemma 9.3.2 of Dudley's \"Real analysis and probability\".\n(3)\tThank you for your suggestion. We added the new bound in Appendix A.1 of our revised version. Compared to our previous result for neural network discriminators (Corollary 3.3), the new bound gets rid of the number of parameters, which can be prohibitively large in practice. Moreover, the new bound can be directly applied to the spectrally normalized GANs [4], and may explain the empirical success of the spectral normalization technique.\n\n[1] Liu et al., Approximation and Convergence Properties of Generative Adversarial Learning, 2017\n[2] Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, and Peter Dayan. Comparison of maximum likelihood and gan-based training of real nvps. arXiv preprint arXiv:1705.05263, 2017.\n[3] Aditya Grover, Manik Dhar, and Stefano Ermon. Flow-gan: Bridging implicit and prescribed learning in generative models. arXiv preprint arXiv:1705.08868, 2017.\n[4] Anonymous. Spectral normalization for generative adversarial networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.\n", "Thank you very much for your insightful review. Your comments helped us improve the paper a lot! Overall, all your comments are correct. The following are our replies to some details of your comments.\n\nMajor comments:\n(1)\tYes, for the sufficient part, the proof is standard and is the same as that of the uniqueness of weak convergence. The essential idea is in fact to make use of the moment matching effect of GANs, which is “obvious” for the neural distance but tricky for the neural divergence. Once we have moment matching, we now directly cite Lemma 9.3.2 of Dudley's \"Real analysis and probability\", as you suggested.\n\n(2)\tTheorem 2.5 and its proof have two important differences from Theorem 10 of [1]. First, as you pointed out, it gets rid of the compactness assumption. Second, its proof (in Appendix E) in fact gives the convergence rate of GAN training as the neural distance is minimized. We can see that the convergence rate depends both on the optimization error decay rate ( d_F(\\mu, \\nu_n) ) and on the representation error decay rate (\\epsilon(r) defined in Proposition 2.7). This convergence rate provides guidance for improving the training speed of GANs, by utilizing either faster optimization algorithms or a more representative discriminator set. On the contrary, the existence proof in [1] does not provide an estimate of the convergence rate.\n\n(3)\tYes, you are correct. In the revised version, we emphasize that only when evaluated with neural distance, generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. 
In this paper, we want to bound $d_F(\\mu, \\nu_m)$, i.e., the difference between the unknown target distribution \\mu and the learned distribution \\nu_m, instead of the testing error $d_F(\\hat{\\mu}, \\hat{\\nu}_m)$. If we bounded the testing error instead, the capacity of the generator set would certainly pop up, as you commented. \n\n(4)\tThe error bound under the neural distance in Section 3 (Theorem 3.1) is indeed based on very standard machinery (empirical processes, Rademacher complexity). However, we think the bound itself and the results it induces are still valuable, for two reasons. \n\nFirst, although its derivation is standard, its meaning is very different from that in supervised learning. When the evaluation metric is taken as the neural distance, the generalization error can be bounded purely by the complexity of the discriminator set. This seemingly loose bound is indeed tight *in terms of the order of the sample size m* for several common cases. For example, for GANs with neural discriminators and for MMD-GANs, the bound is O(m^{-1/2}); for the Wasserstein distance, the bound is O(m^{-1/d}); for the total variation distance, the bound is O(1). In all these cases, the bound is indeed tight *in terms of the order of the sample size m*. Of course, the bound is still very loose in other aspects, because it ignores the generator set. \n\nSecond, we use our bounds between the neural distance and other standard measures to derive generalization bounds for other evaluation metrics. Especially, when the KL divergence is used as the evaluation metric, our bound (Corollary 3.5) suggests that the generator and discriminator sets must be compatible in that the log-density ratios of the generators and the true distributions should exist and be included inside the linear span of the discriminator set. The strong condition that the log-density ratio should exist partially explains the counter-intuitive behavior of testing likelihood in flow GANs ([2,3]).\n\n(5)\tWe move the neural divergence section to Appendix B, only summarizing our new contributions on the discrimination properties of f-GANs in Remark 2.2. We would like to point out that the following result is our novel contribution: a neural $f$-divergence is discriminative if the linear span of its discriminators \\emph{without the output activation function} is dense in the bounded continuous function space. Both its statement and its proof are nontrivial and cannot be found elsewhere. ", "We thank all the reviewers for their insightful comments. Based on their feedback, we made the following changes to the paper.\n\n1. We move the neural divergence section to Appendix B, only summarizing our new contributions on the discrimination properties of f-GANs in Remark 2.2.\n\n2. We add one generalization bound for GANs with DNNs as discriminators in Appendix A.1. This bound makes use of the recent result on the Rademacher complexity of DNNs in Bartlett et al. (2017). Compared to our previous result for neural network discriminators (Corollary 3.3), the new bound gets rid of the number of parameters, which can be prohibitively large in practice. Moreover, the new bound can be directly applied to the spectrally normalized GANs (Anonymous, 2018), and may explain the empirical success of the spectral normalization technique.\n\n3. We emphasize that only when evaluated with neural distance, generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. 
We explain that this seemingly-surprising result is reasonable because the evaluation metric (neural distance) is defined by the discriminator set and is “weak” compared to standard metrics like the BL distance and KL divergence.\n\n4. We make other small changes according to the reviews and correct typos in the original draft.\n\n[Bartlett et al., 2017] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pp. 6241–6250, 2017.\n[Anonymous, 2018] Anonymous. Spectral normalization for generative adversarial networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.\n\n", "Abstract, \"...regardless of the size of the generator or hypothesis set\". This really needs explanation in the abstract; it is such a bold claim. For instance, I wrote \"no\" in the margin while reading the abstract the first time. \n\nOur reply: We hope that you are convinced by this *bold* claim now. \n\nPage 4, vicinity of equation 5: there should really be a mention that none of these universal approximation results give a meaningful bound on the size of the network (the bound given by Barron's work, while nice, is still massive). \n\nOur reply: In the revised version, we will mention that most early universal approximation results, e.g., Cybenko, 1989; Hornik et al., 1989; Hornik, 1991; Leshno et al., 1993, are qualitative results and do not give meaningful approximation rates. Barron (1993) and Bach (2017) give the approximation rates of two-layer neural networks. \n\nStart of section 3. To be clear, while one can argue that the Lipschitz-1 constraint has a regularization effect, the reason it was originally imposed is to match the Kantorovich duality for Wasserstein_1. Moreover, I'll say this is another instance of the paper treating the discriminator set as irrelevant other than how close it is to being dense in all bounded measurable functions.\n\nOur reply: We agree with you on the initial motivation of the Lipschitz-1 constraint in WGAN. We do not require that the discriminator set be dense in C_b(X). \nOn the one hand, our basic requirement is that the span of the discriminator set is dense in C_b(X). The larger the discriminator set is, the more discriminative the IPM will be. On the other hand, the smaller the discriminator set is, the smaller the generalization error will be. As our title indicates, there is a discrimination-generalization tradeoff in GANs. Fortunately, we show that several GANs *in practice* already choose their discriminator sets at the sweet spot. \n", "Abstract, \"this is a mild condition\". Optimizing over a function class which is dense in all bounded measurable functions is not a mild assumption. In the particular case under discussion, the size of the network cannot be bounded (even though it has just two layers, or as the authors say is the span of single neurons). \n\nOur reply: This is a *big misunderstanding* of our results. Our Theorem 2.2 says that it suffices to optimize over a function class F whose span is dense in all bounded measurable functions to guarantee the discriminative power of GANs. The optimization is over *F*, not over the *span of F*.\nWe emphasize the *span* several times in our paper. 
What difference does the *span* make?\n1.\tFor tanh (or sigmoid) activations: neural networks with one hidden layer and *sufficiently many neurons* can approximate any continuous function; neural networks with one hidden layer and *only one neuron* are sufficient to discriminate any two distributions. \n2.\tFor ReLU activations: neural networks with one hidden layer, *sufficiently many neurons* and *unbounded weights* can approximate any continuous function; neural networks with one hidden layer, *only one neuron* and *bounded weights* are sufficient to discriminate any two distributions.\n3.\tA simpler example: the span of the two points (1,0) and (0,1) is dense in the whole R^2 plane!\nTherefore, our condition is indeed a very mild condition. \n\nIntro, bottom of page 1, the sentence with \"irrelenvant\": I can't make any sense of this sentence. \n\nOur reply: As we listed on the first page, different GANs define their objective functions (e.g., Wasserstein distance and f-divergence) with different *non-parametric* discriminator sets, while in practice they use parametric discriminator sets as surrogates, which leads to objective functions like the neural distance and neural divergence. In this sentence, we say that the properties of the objective functions they have in mind and of the objective functions they use in practice can be fundamentally different, or even unrelated. The main goal of this paper is to close this gap, i.e., to provide discrimination and generalization properties for practical GANs, which use parametric discriminator sets. \n\nIntro, bottom of page 1, \"is a much smaller discriminator set\": no, the Lip_1 functions are in general incomparable to arbitrary sets of neural nets. \n\nOur reply: WGAN is motivated by optimizing over all Lip_1 functions. In practice, WGAN optimizes over neural networks with bounded parameters (weight clipping). It is argued in the WGAN paper that this practical discriminator set is contained in the Lip_K function class for a certain K>0. However, the set of neural networks with bounded parameters is a much smaller set than the Lip_K function class. \nWe note that different GAN variants will also use different parametric function classes. We use F_{nn} as an abstract symbol for all these parametric discriminator sets. \n\nMiddle of page 2, point (i): this is the only place it is argued/asserted that the discriminator set should contain essentially everything? \n\nOur reply: We did not argue/assert that the discriminator set should contain essentially everything. We argue that the span of the discriminator set should be dense in the bounded continuous function space. As we pointed out before, the *span* makes a big difference.\n", "Thank you for your interest, questions and comments! \n\nFirst, as far as we know, no previous work gives a sufficient and necessary condition on the discriminator set under which the neural distance is discriminative. In previous work, the discriminative power of GANs is typically justified by assuming that the discriminator set has enough capacity, such as all functions taking values in [0,1] in the vanilla GAN or all Lipschitz functions with Lipschitz constant 1 in WGAN. However, we use neural networks with bounded parameters in practice. The main goal of our results is to close this gap between previous theoretical results and practice. We'd like to mention the following points on the discriminative power. \n(1). 
[1] and [2] also noticed that GANs with restricted discriminators may not be discriminative, but this problem was addressed in neither [1] nor [2].\n(2). Previous work also uses the universal approximation property of neural networks to justify the discriminative power empirically. Our results show that the neural distance is discriminative under a much weaker condition, namely that the span of the discriminator set can approximate any continuous function. This justifies why neural networks with bounded parameters work in practice. \n(3). Our discriminative results also apply to the neural divergence (Theorem 4.1), which requires that the span of the discriminators without the nonlinear activation in the last layer is dense in the bounded continuous functions. This coincides with the implementation difference between the vanilla GAN and WGAN, where WGAN simply uses the discriminators of the vanilla GAN without the nonlinear activation in the last layer. \n\nYes, the generalization part shares similarities with supervised learning. However, the most important difference is that in supervised learning, the complexity of the hypothesis set (G) bounds the generalization error, while in GANs, the complexity of the discriminator set (F) bounds the generalization error, which can be independent of the hypothesis set. \n\nWe agree that we do not consider the impact of training on GANs, which is very important in practice. We have noticed several recent papers working in this direction, e.g., [3, 4]. We are currently working on stabilizing the training of GANs through our approach.\n\n[1] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). arXiv preprint arXiv:1703.00573, 2017.\n[2] Shuang Liu, Olivier Bousquet, and Kamalika Chaudhuri. Approximation and convergence properties\nof generative adversarial learning. arXiv preprint arXiv:1705.08991, 2017.\n[3] Jerry Li, Aleksander Madry, John Peebles, and Ludwig Schmidt. Towards understanding the dynamics of generative adversarial networks. arXiv preprint arXiv:1706.09884, 2017b.\n[4] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a nash equilibrium. arXiv preprint arXiv:1706.08500, 2017.\n\n", "I'm going over papers on theoretical results on GANs and got into this one.\n\nIn the first part of the paper, the author asks whether the neural distance is \"discriminative\" or not, that is, whether being equal in neural distance implies that the two distributions are actually identical. It is shown that the answer is affirmative for a large class of discriminators, including neural networks. Based on this property, it is shown that the learned distribution weakly converges to the target distribution. I found that the proof for this \"discriminative\" result seems like a straightforward exercise in real analysis. I wonder whether this result has been discovered before. \n\nThe generalization part seems reasonable, and shares several similarities with supervised learning. A drawback of this paper is that the authors do not consider the impact of training on GANs, which matters a lot in practice. \n" ]
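To illustrate the "one neuron suffices to discriminate" point made in the replies above, here is a toy estimate of the neural distance between two distinct 1-D distributions, using a single bounded tanh neuron as the entire discriminator class. The setup (shifted Gaussians, grid search, sample sizes) is an assumption for illustration, not anything from the paper.

```python
import math, random

# Toy illustration: a single tanh neuron f(x) = tanh(w*x + b) with bounded
# |w|, |b| <= 1 already witnesses d_F(mu, nu) > 0 for two distinct 1-D
# Gaussians, echoing the discrimination claim in the replies above.
random.seed(0)
mu = [random.gauss(0.0, 1.0) for _ in range(5000)]  # samples from mu
nu = [random.gauss(0.5, 1.0) for _ in range(5000)]  # samples from nu

def gap(w, b):
    f = lambda x: math.tanh(w * x + b)
    return sum(map(f, mu)) / len(mu) - sum(map(f, nu)) / len(nu)

grid = [i / 10 for i in range(-10, 11)]  # coarse grid over (w, b)
d_hat = max(abs(gap(w, b)) for w in grid for b in grid)
print("estimated one-neuron neural distance:", d_hat)  # clearly above noise
```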
[ 6, 3, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hk9Xc_lR-", "iclr_2018_Hk9Xc_lR-", "iclr_2018_Hk9Xc_lR-", "ByyV3Atez", "SyRq3ukMf", "HJjLXT4gM", "HJjLXT4gM", "iclr_2018_Hk9Xc_lR-", "ByyV3Atez", "ByyV3Atez", "r16zP8jlG", "iclr_2018_Hk9Xc_lR-" ]
iclr_2018_SyZI0GWCZ
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios. In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks which solely rely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet. We apply the attack on two black-box algorithms from Clarifai.com. The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available as part of Foolbox (https://github.com/bethgelab/foolbox).
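A minimal sketch of the Boundary Attack as this abstract describes it: start from a large adversarial perturbation and shrink it while the decision stays adversarial. The `is_adversarial` oracle, the fixed step sizes, and the uniform-noise initialization below are illustrative assumptions; the paper adjusts both step sizes dynamically, and the reference implementation lives in Foolbox.

```python
import numpy as np

# Sketch of the core loop: an orthogonal random step on the sphere around the
# original image, followed by a small contraction toward the original, with
# a candidate kept only if the black-box decision remains adversarial.
def boundary_attack(original, is_adversarial, steps=10000,
                    orth_step=0.01, toward_step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize with any point the model already classifies differently.
    x = rng.uniform(0, 1, size=original.shape)
    while not is_adversarial(x):
        x = rng.uniform(0, 1, size=original.shape)
    for _ in range(steps):
        diff = original - x
        # 1) Random step orthogonal to the direction toward the original,
        #    rescaled relative to the current distance.
        eta = rng.normal(size=x.shape)
        eta -= diff * np.vdot(eta, diff) / np.vdot(diff, diff)
        eta *= orth_step * np.linalg.norm(diff) / np.linalg.norm(eta)
        # Project the candidate back onto the sphere around the original.
        cand = x + eta
        d = original - cand
        cand = original - d * (np.linalg.norm(diff) / np.linalg.norm(d))
        # 2) Small contraction toward the original image.
        cand += toward_step * (original - cand)
        # 3) Keep the step only if the decision is still adversarial.
        cand = np.clip(cand, 0, 1)
        if is_adversarial(cand):
            x = cand
    return x
```

Note that only the final decision of the model is queried, never gradients or scores, which is what distinguishes this decision-based attack from gradient- and score-based ones.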
accepted-poster-papers
The reviewers all agree this is a well-written and interesting paper describing a novel black-box adversarial attack. There were missing relevant references in the original submission, but these have been added. I would suggest the authors follow the reviewers' suggestions on claims of generality beyond CNNs; although there may not be anything obvious stopping this method from working more generally, it hasn't been tested in this work. Even if you keep the title you might be more careful to frame the body in the context of CNNs.
test
[ "BkP-T1qgM", "SyWXJWqgf", "HJ3OcT3gG", "S1lew6QNf", "rJFt8lM4f", "SkqcaKqfM", "Bk06LctMz", "SJcYIcFzf", "ryCQLcYzG", "Bkw25o8GM", "r1uU2GiZM", "HkPCcEuWM", "SJvgtpHeM", "SySvBkSJf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "author", "public", "author", "author", "public" ]
[ "The authors identify a new security threat for deep learning: decision-based adversarial attacks. This new class of attacks on deep learning systems requires from an attacker only knowledge of the class labels (previous attacks required more information, e.g., access to a gradient oracle). Unsurprisingly, since the attacker has so little information, this kind of attack involves quite a lot of trial and error. The authors propose one specific attack instance out of this class of attacks. It works as follows.\n\nFirst, an initial point outside of the benign region is guessed. Then multiple steps towards the decision boundary are taken, finally reaching the boundary (I am not sure about the precise implementation, but it seems not crucial; the authors may please check whether their description of the algorithm is really reproducible). Then, in a nutshell, a random walk on a sphere centered around the original, benign point is performed, where after each step, the radius of the sphere is slightly reduced (drawing the point closer to the original point), if and only if the resulting point still is outside of the benign region.\n\nThe algorithm is evaluated on MNIST and CIFAR, and on ImageNet against the VGG19, ResNet50, and InceptionV3 models.\n\nThe paper is rather well written and structured. The text was easy to follow. I suggest that a self-contained description of the problem setting (assumptions on attacker and defender; aim?) be added to the camera-ready version (being not familiar with the area, I had to read a couple of papers to get a feeling for the setting, before reviewing this paper). As in many DL papers these days, there really isn't any math in it worth a mention; so no reason here to say anything about mathematical soundness. The authors employ a reasonable evaluation criterion in their experiments: the median squared Euclidean distance between the original and adversarially modified data point. The results show consistent improvement for most data sets. \n\nIn summary, this is an innovative paper, proposing a new class of attacks that totally makes sense in my opinion. Apart from some minor weaknesses in the presentation that can be easily fixed for the camera ready, this is a nice, fresh paper that might spur more attacks (and of course new defenses) from the new class of decision-based attacks. It is worth noting that the authors show that distillation is not a useful defense against such attacks, so we may expect follow-up work proposing useful defenses against the new attack (which BTW is shown to be about a factor of 10 more costly than the SOTA in terms of iterations).", "This is a nice paper proposing a simple but effective heuristic for generating adversarial examples from class labels with no gradient information or class probabilities. Highly relevant prior work was overlooked and there is no theoretical analysis, but I think this paper still makes a valuable contribution worth sharing with a broader audience.\n\nWhat this paper does well:\n- Suggests a type of attack that hasn't been applied to image classifiers\n- Proposes a simple heuristic method for performing this attack\n- Evaluates the attack on both benchmark neural networks and a commercial system \n\nProblems and limitations:\n\n1. No theoretical analysis. Under what conditions does the boundary attack succeed or fail? What geometry of the classification boundaries is necessary? How likely are those conditions to hold? 
Can we measure how well they hold on particular networks?\n\nSince there is no theoretical analysis, the evidence for effectiveness is entirely empirical. That weakens the paper and suggests an important area of future work, but I think the empirical evidence is sufficient to show that there's something interesting going on. Not a fatal flaw.\n\n2. Poor framing. The paper frames the problem in terms of \"machine learning models\" in general (beginning with the first line of the abstract), but it only investigates image classification. There's no particular reason to believe that all machine learning algorithms will behave like convolutional neural network image classifiers. Thus, there's an implicit claim of generality that is not supported.\n\nThis is a presentation issue that is easily fixed. I suggest changing the title to reflect this, or at least revising the abstract and introduction to make the scope clearer.\n\nA minor presentation quibble/suggestion: \"adversarial\" is used in this paper to refer to any class that differs from the true class of the instance to be disguised. But an image of a dalmatian that's labeled as a dalmatian isn't adversarial -- it's just a different image that's labeled correctly. The adversarial process is about constructing something that will be mislabeled, exploiting some kind of weakness that doesn't show up on a natural distribution of inputs. I suggest rewording some of the mentions of adversarial.\n\n3. Ignorance of prior work. Finding deceptive inputs using only the classifier output has been done by Lowd and Meek (KDD 2005) for linear classifiers and by Nelson et al. (AISTATS 2010, JMLR 2012) for convex-inducing classifiers. Both works include theoretical bounds on the number of queries required for near-optimal adversarial examples. Biggio et al. (ECML 2013) further propose training a surrogate classifier on similar training data, using the predictions of the target classifier to relabel the training data. In this way, decision information from the target model is used to help train a more similar surrogate, and then attacks can be transferred from the surrogate to the target.\n\nThus, \"decision-based attacks\" are not new, although the algorithm and experiments in this paper are. \n\n\nOverall, I think this paper makes a worthwhile contribution, but it needs to revise the claims to match what's done in the paper and what's been done before.", "In this paper, the authors propose a novel method for generating adversarial examples when the model is a black box and we only have access to its decisions (and a positive example). It iteratively takes steps along the decision boundary while trying to minimize the distance to the original positive example.\n\n\nPros:\n- Novel method that works under much stricter and more realistic assumptions.\n- Fairly thorough evaluation.\n- The paper is clearly written.\n\n\nCons:\n- Needs a fair number of calls to generate a small perturbation. Would like to see more analysis of this.\n- The attack works for making something fall outside the boundary (not X), but it is less clear how to generate an image that meets a specific classification (X). Section 3.2 attempts this by using an image from the target class, but it is less clear for something like FaceID.\n- Unclear how often the generated images look reasonable. Do different random initializations give different quality examples?\n", "Thanks for your comment. 
The attack you mention is strictly limited to linear classifiers, whereas our method is applicable to highly nonlinear classifiers like deep neural networks. That generality makes theoretical guarantees almost impossible. A close analogy is gradient descent: optimizing linear classifiers (convex) comes with convergence guarantees whereas optimizing neural networks (non-convex) comes with no guarantees at all.", "Very interesting paper!\nI found the idea quite similar to the paper https://ix.cs.uoregon.edu/~lowd/kdd05lowd.pdf\nBoth modify the malicious instance towards benign heuristically, except that the prior work has theoretical guarantees for simple models and binary features.\nGiven this, I think the novelty of the paper is reduced. Can the authors help to make further comparisons?\nThanks a lot!", "We have uploaded an updated version of the paper with the following changes:\n\n1.) The Boundary Attack has now been run to full convergence for all experiments reported in the paper. On some experiments, in particular on ImageNet, the performance considerably increased and now places the attack squarely between DeepFool and Carlini & Wagner in all setups.\n\n2.) We added a new figure showing the result of the Boundary Attack for repeated runs on the same sample. The attack converges to only two minima with similar distance to the original image (Figure 5).\n\n3.) We added a new figure showing the distance to the original image as a function of model calls (Figure 6). The figure further highlights that relatively few model calls are already sufficient to find excellent adversarial examples. A large number of calls is only necessary to fully converge.\n\n4.) We improved the introduction to be more self-contained and easier to read and added additional references to prior work.\n", "Thanks a lot for your positive review!\n\nWe appreciate your nice summary of the attack. To find the first point \"on\" (i.e. close to) the boundary, we just perform a line search between the initial point and the original. As you said, this is indeed not crucial and it would be perfectly fine to leave out this step and directly follow the actual algorithm of the Boundary Attack. We use the line search because it basically just gives us a better starting point and therefore speeds up the beginning of the attack.\n\nRegarding your first sentence, we'd like to add that the Boundary Attack does not just pose a new threat, it also makes measuring the robustness of models more reliable. In the past virtually all defence strategies proposed in the literature did not actually increase the robustness of the model but only disabled the attack strategy. A prime example is gradient masking, where the backpropagated gradient needed in gradient-based attacks is simply zero (e.g. defensive distillation). The Boundary Attack is robust to many such nuisances and will make it easier to spot whether a model is truly robust.\n\nWe improved the introduction to make the paper more self-contained and thus easier to read for people not familiar with the area. We'd appreciate your feedback if something is still unclear.\n", "Thanks a lot for your insightful comments!\n\n1. We agree that a theoretical analysis of the decision boundaries of neural networks is an important area of future work. One assumption of the attack is that the boundary is fairly smooth, as otherwise the linearity assumption in the proximity of the current adversarial wouldn't hold. 
Other than that, the success of the attack very much depends on the number and distribution of local minima on the boundary, but that's true for all attacks. So just as the success of stochastic gradient descent is first and foremost an empirical result, so is the success of the Boundary Attack, and it will be challenging (yet interesting) to get better theoretical insights in the future. \n\n2. The Boundary Attack is by no means restricted to CNNs. However, virtually all papers on adversarial attacks from recent years were evaluated on CNNs in computer vision tasks, and so we followed this setup in order to be as comparable as possible. Of course, we do not claim that the Boundary Attack will work equally well on all machine learning algorithms, but that's something no attack can legitimately claim (including gradient-based attacks). In other words, the Boundary Attack is in principle applicable to any machine learning algorithm, but how well it will perform is an empirical question that future work will highlight. We thus choose to leave the title as is in order to stimulate more progress in this direction.\n\n3. We thank you for these pointers to relevant prior work, which have prompted us to adapt the claims of the manuscript accordingly. More concretely, we now delineate our work more clearly as the first decision-based attack that scales to complex machine learning algorithms (such as DNNs) and complex data sets (such as ImageNet).\n\nWe also added more explanations as to our definition of an \"adversarial\".\n", "Thanks a lot for the positive review! We have added two new figures to address your comments (Figures 5 and 6).\n\n- It is true that the Boundary Attack needs a fair number of calls until it converges to the absolute minimal perturbation. In most practical scenarios, however, it is sufficient to generate adversarials that are perceptually indistinguishable from the original image, in which case a much smaller number of iterations is necessary. We added Figure 6 to show how fast the size of the perturbation is reduced by the Boundary Attack.\n\n- The Boundary Attack is made to find the minimal perturbation of a given image such that the perturbed image is misclassified (or classified as a certain target). More graphically, it tries to find an image that is classified as a bus but clearly looks like a cat. In the case you mentioned - Face ID - one would try to find any image that is classified as a certain target. In other words: you'd try to find an image that is classified as a bus, and it's completely fine if it also looks like a bus to humans. That's a totally different threat scenario that we do not consider in the paper.\n\n- All results shown in the paper are for a single run of the Boundary Attack; there is no cherry-picking. To make this point clearer we added Figure 5, where we show the result of ten runs of the Boundary Attack on the same image with random initialization. The final perturbation ends up in one of two minima, and both are of similar quality (the size of the perturbations varies only by a factor of 2).\n", "> The draft does not include a threat model describing what you consider to be an adversarial example. \n\nAs stated in the manuscript, we consider an adversarial example to be any image that is classified differently from the original image. Most importantly, this definition is independent of human perception and is only relative to the original image. 
The goal of the attack is to make the difference between the adversarial and the original image as small as possible.\n\n> Given that you only report the median, could you comment on the maximum L2 perturbation norm of adversarial inputs your approach produces in your experiments?\n\nFor VGG-19 on ImageNet in the untargeted setting the maximum L2 perturbation is as follows:\n\nFGSM: 2.2e-2\nDeepFool: 1.8e-4\nBoundary Attack: 7.1e-5 \nCarlini & Wagner: 4.8e-5\n\n> How do you guarantee that all inputs produced are individually adversarial?\n\nThe Boundary Attack starts from an adversarial image and stays within the adversarial region. Thus, the result of the attack is guaranteed to be adversarial.\n\n> In the literature, attack papers report the success rate in addition to the perturbation norm, which helps better evaluate their effectiveness.\n\nI find the success rate to be fairly meaningless: an attack should always be successful if it works correctly; the question is just how large the perturbation needs to be until one gets an adversarial.\n\n> producing a single adversarial [...] requires more queries than needed to train a substitute model [...] or compute the gradients in a gradient-based attack. \n\nObviously gradient-based attacks need fewer model calls, but that’s simply because the gradient yields a lot of information about the model. Training a substitute model doesn’t need any calls to the original model, but then there are no guarantees that the adversarials will transfer (they do empirically, but that might change with future architectures). Furthermore, a few thousand calls are totally sufficient for the Boundary Attack to produce adversarials that can hardly be distinguished from the original image. One only needs millions of queries if one really tries to find the absolute minimum perturbation.\n\n> Furthermore, your algorithm is initialized with an input that is already adversarial.\n\nFinding an initial adversarial is easy: just take any image from a different class. Of course, the distance to the original image is high initially, but that changes over the course of the optimisation. I think the confusion here is related to how we define adversarial images (see above). We’ll state that more clearly in the manuscript.\n\n> Could you comment on the number of queries needed to evade the Clarifai model?\n\nFor each image around 2000 - 4000 calls were needed. The attack was untargeted and we only produced the images you find in the article.\n\n> a natural defense strategy would be to reject the queries based on their large number and similarity to one another.\n\nSurely with proper engineering one has a chance to detect this attack (but also to evade the defence by properly distributing the calls over clients and time), but this argument can be used against basically all attacks other than transfer attacks (which have their own problems) and FGSM (if one has access to gradients). 
On a different note, the Boundary Attack is not only about security but also about evaluating the robustness of models, as it is not so easily fooled by things like gradient masking.\n\n> In the abstract, you state that transfer-based attacks \"need access to the training data\" but later point to a work that does not need this adversarial capability [http://doi.acm.org/10.1145/3052973.3053009].\n\nThe work you cite is not a classical transfer attack; we qualify it as a decision-based attack because it needs strictly less knowledge than transfer attacks.\n\n> In the introduction, you state that gradient-based attacks can be defended against by masking gradients. However, this is wrong as pointed out by prior work that you cited.\n\nThat prior work makes precisely the point that gradient-based attacks fail due to gradient masking (that’s basically by definition!). The papers show this by making certain interventions that remove gradient masking (after which the gradient-based attacks work again).\n\n> However, the related work you cite also includes experiments on a dataset of traffic sign images (GTSRB).\n\nWe’ve extended this sentence to include the traffic sign images. Basically, the variant of transfer attacks you are pointing to will work on any dataset for which the intra-class variability is low, but this is not true for CIFAR and ImageNet (and most other interesting datasets).\n\n> The draft claims that the attack works on all machine learning models but only evaluates on CNNs.\n\nThere is nothing in the Boundary Attack that restricts it to CNNs. CNNs are just the class of models that basically all relevant prior work has been evaluated on, and so we follow this direction.", "
How many adversarial examples did you produce that successfully attacked the Clarifai model?\n\nRelated to the attack's cost, have you considered how the attack will fare against detection by the defender: all queries made are very similar to each other so it seems like a natural defense strategy would be to reject the queries based on their large number and similarity to one-another. \n\nI also found several inconsistencies, which you might be able to clarify: \n\nIn the abstract, you state that transfer-based attacks \"need access to the training data\" but later point to a work that does not need this adversarial capability [http://doi.acm.org/10.1145/3052973.3053009].\n\nIn the introduction, you state that gradient-based attacks can be defended against by masking gradients. However, this is wrong as pointed out by prior work that you cited [http://arxiv.org/abs/1607.04311, http://doi.acm.org/10.1145/3052973.3053009, http://arxiv.org/abs/1704.01547]. \n\nWhen comparing to prior work in decision-based attacks [http://doi.acm.org/10.1145/3052973.3053009], you state that \"While this approach works well on MNIST it has yet to be shown that it scales to more complex natural datasets such as CIFAR or ImageNet.\" However, the related work you cite also includes experiments on a dataset of traffic sign images (GTRSRB). \n\nThe draft claims that the attack works on all machine learning models but only evaluates on CNNs. It might make sense to reconsider the title.", "Our implementation of the Boundary Attack is now available at http://bit.ly/2kF0JKQ.\nWe post a short URL because of the double-blind review.", "Hi,\n\nthe Reproducibility Challenge is a great idea!\nWe will make our implementation available on GitHub as soon as we have cleaned it up in a way that makes it easy to apply.\n\nWe will also post a link here and update the paper accordingly once the double-blind review period has ended.", "Hi:\nWe are students at Carnegie Mellon University participating the ICLR 2018 Reproducibility Challenge. We find this paper quite interesting and were wondering if it is possible for you to share the implementation of boundary attack.\n\nMuch appreciated!\n\n\n\n\n\n" ]
[ 7, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyZI0GWCZ", "iclr_2018_SyZI0GWCZ", "iclr_2018_SyZI0GWCZ", "rJFt8lM4f", "iclr_2018_SyZI0GWCZ", "iclr_2018_SyZI0GWCZ", "BkP-T1qgM", "SyWXJWqgf", "HJ3OcT3gG", "r1uU2GiZM", "iclr_2018_SyZI0GWCZ", "SJvgtpHeM", "SySvBkSJf", "iclr_2018_SyZI0GWCZ" ]
iclr_2018_rJQDjk-0b
Unbiased Online Recurrent Optimization
The novel \emph{Unbiased Online Recurrent Optimization} (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as \emph{Truncated Backpropagation Through Time} (truncated BPTT), a widespread algorithm for online learning of recurrent networks \cite{jaeger2002tutorial}. UORO is a modification of \emph{NoBackTrack} \cite{DBLP:journals/corr/OllivierC15} that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. On the contrary, truncated BPTT does not provide this property, leading to possible divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is very significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.
accepted-poster-papers
The reviewers agree that the proposed method is theoretically interesting, but disagree on whether it has been properly experimentally validated. My view is that the theoretical contribution is interesting enough to warrant inclusion in the conference, and so I will err on the side of accepting.
val
[ "B1Ud2yqgz", "B1QPFb5eG", "S1zt98K4z", "r11STCqxG", "rk7gF0cXG", "SyaSrR5Xf", "SJHpNAqQG", "H1thm0c7G" ]
[ "official_reviewer", "official_reviewer", "public", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors introduce a novel approach to online learning of the parameters of recurrent neural networks from long sequences that overcomes the limitation of truncated backpropagation through time (BPTT) of providing biased gradient estimates.\n\nThe idea is to use a forward computation of the gradient as in Williams and Zipser (1989) with an unbiased approximation of Delta s_t/Delta theta to reduce the memory and computational cost.\n\nThe proposed approach, called UORO, is tested on a few artificial datasets.\n\nThe approach is interesting and could potentially be very useful. However, the paper lacks in providing a substantial experimental evaluation and comparison with other methods.\nRather than with truncated BPTT with smaller truncation than required, which is easy to outperform, I would have expected a comparison with some of the other methods mentioned in the Related Work Section, such as NBT, ESNs, Decoupled Neural Interfaces, etc. Also the evaluation should be extended to other challenging tasks. \n\nI have increased the score to 6 based on the comments and revisions from the authors.", "Post-rebuttal update:\nI am happy with the rebuttal and therefore I will keep the score of 7.\n\nThis is a very interesting paper. Training RNN's in an online fashion (with no backpropagation through time) is one of those problems which are not well explored in the research community. And I think, this paper approaches this problem in a very principled manner. The authors proposes to use forward approach for the calculation of the gradients. The author proposes to modify RTRL by maintaining a rank one approximation of jacobian matrix (derivative of state w.r.t parameters) which was done in NoBackTrack Paper. The way I think this paper is different from NoBackTrack Paper is that this version can be implemented in a black box fashion and hence easy to implement using current DL libraries like Pytorch. \n\nPros.\n\n- Its an interesting paper, very easy to follow, and with proper literature survey.\n\nCons:\n\n- The results are quite preliminary. I'll note that this is a very difficult problem.\n- \"The proof of UORO’s convergence to a local optimum is soon to be published Masse & Ollivier (To appear).\" I think, paper violates the anonymity. So, I'd encourage the authors to remove this. \n\nSome Points: \n\n- I find the argument of stochastic gradient descent wrong (I could be wrong though). RNN's follow the markov property (wrt hidden states from previous time step and the current input) so from time step t to t+1, if you change the parameters, the hidden state at time t (and all the time steps before) would carry stale information unless until you're using something like eligibility traces from RL literature. I also don't know how to overcome this issue. \n\n- I'd be worried about the variance in the estimate of rank one approximation. All the experiments carried out by the authors are small scale (hidden size = 64). I'm curious if authors tried experimenting with larger networks, I'd guess it wont perform well due to the high variance in the approximation. I'd like to see an experiment with hidden size = 128/256/512/1024. My intuition is that because of high variance it would be difficult to train this network, but I could be wrong. I'm curious what the authors had to say about this. \n\n- If the variance of the approximation is indeed high, can we use something to control the dynamics of the network which can result in less variance. Have authors thought about this ? 
\n\n- I'd also like to see experiments on the copying/adding tasks (as these are the standard experiments for analyzing long-term dependencies). \n\n- I'd also like to see what effect the sequence length has on the approximation, as small errors in the approximation at each step can compound, giving rise to chaotic dynamics (small change in input => large change in output).\n\n- I'd also like to know how using UORO changes the optimization as compared to backpropagation through time, in the sense: would the two approaches reach the same local minimum, or is there a possibility that the former can reach fewer potential local minima than BPTT? \n\n\nI'm tempted to give a high score to this paper (score: 7), as it is an unexplored direction in our research community, and I think this paper makes a very useful contribution to tackling this problem in a very principled way. But I'd like some more experiments to be done (which I have mentioned above); failing those experiments, I'd be forced to reduce the score (to 5).", "I just wanted to point out (for general readers) that this paper tries to address a very interesting research problem, i.e. training RNNs in an online fashion, an important open problem that has been severely under-explored in the machine learning community. Even though the presented results are preliminary, this paper proposes a principled way to approach this problem. \n\nRegarding the paper, I think the reviewers did a good job asking the \"right\" questions, and I feel satisfied by the authors' response! I am really excited to see how we can extend this! Good work! :-)", "This paper presents a generic unbiased low-rank stochastic approximation to full-rank matrices that makes it possible to do online RNN training without the O(n^3) overhead of real-time recurrent learning (RTRL). This is an important and long-sought-after goal of connectionist learning, and this paper presents a clear and concise description of why their method is a natural way of achieving that goal, along with experiments on classic toy RNN tasks with medium-range time dependencies for which other low-memory-overhead RNN training heuristics fail. My only major complaint with the paper is that it does not extend the method to large-scale problems on real data, for instance work from the last decade on sequence generation, speech recognition or any of the other RNN success stories that have led to their wide adoption (e.g. Graves 2013, Sutskever, Martens and Hinton 2011 or Graves, Mohamed and Hinton 2013). However, if the paper does achieve what it claims to achieve, I am sure that many people will soon try out UORO to see if the results are in any way comparable.", "The paper has been revised following the reviewers' advice. Two sections focusing on the evolution of the variance of the gradient approximation, both with respect to the length of the input sequence and to the size of the network, have been added, along with corresponding experiments.", "Thank you for your comments and suggestions.\n\n1/ Regarding the comparison to other online methods such as NoBackTrack and Echo State Networks: for plain, fully connected RNNs, NoBackTrack and UORO turn out to be mathematically identical (though implemented quite differently), so they will perform the same. 
By contrast, for LSTMs, NoBackTrack is extremely difficult to implement (to our knowledge, it has never been done); this was one of the motivations for UORO, but it makes the comparison difficult.\n\nFor Echo State Networks: ESNs amount in great part to not learning the internal weights, only the output weights (together with a carefully tuned initialization). As far as we are aware, they are not known to fare particularly well on the kind of task we consider, but we may have missed relevant references.\n\n2/ We have included a few more tasks and tests, although this remains relatively small-scale.\n", "Thank you for your insights, questions and suggestions. We have tried to address your concerns in the revised version of the paper. \n\n1/ As you pointed out, the results are indeed preliminary. As pointed out in the answer to Reviewer 2, it is difficult to obtain results competitive with BPTT on large-scale benchmarks given the additional constraints on UORO (namely, no storage of past data, and good convergence properties, which is not the case for truncated backpropagation if dependencies exceed its truncation range).\n\n2/ About the variance of UORO for large networks: We have added an experiment to test this. The variance of UORO does increase with network size (probably sublinearly), and larger networks will require smaller learning rates.\n\n3/ About the effect of length on the quality of the approximation: We have added an experiment to test the evolution of UORO variance when time increases along the input sequence. The variance of UORO does not explode over time, and is stationary. A key point of UORO is the whole renormalization process (variable rho), designed precisely for this. An independent, theoretical proof for the similar case of NoBackTrack is in (Masse 2017). Thus UORO is applicable to unbounded sequences (notably, in the experiments, datasets are fed as a single sequence, containing 10^6-10^7 characters). \n\n4/ About the stochastic gradient descent argument: indeed one has to be careful. If UORO is used to process a number of finite training sequences, and gradient steps are performed at the end of each sequence only, then this is a fully standard SGD argument: UORO computes, in a streaming fashion, an unbiased estimate of the same gradient as BPTT for each training sequence. However, if the gradient steps are performed at every time step, as we do here, then you are right that an additional argument is needed. The difference between applying gradients at each step and applying gradients only at the end of each sequence is at *second order* in the learning rate: if the learning rate is small, applying gradients at each time does not change the computations too much, and the SGD argument applies up to second-order terms. This is fully formalized in (Masse 2017). If, moreover, only one infinite training sequence is provided, then an additional assumption of ergodicity (decay of correlations) is needed. But in any case unbiasedness is the central property.\n\n5/ About the optima reached by UORO vs BPTT: in the limit of very small learning rates, UORO, RTRL, and BPTT with increasing truncation lengths will all produce the same limit trajectories. The theory from (Masse 2017) proves local convergence to the *same* set of local optima for RTRL and UORO (if starting close enough to the local optimum). 
On the other hand, for large learning rates, we are not aware of theoretical results for any recurrent algorithm.\n\n6/ Regarding reference (Masse 2017): this reference is now publicly available and we provide a link in the bibliography. We were indeed aware of Masse's work a bit before it was put online, but that still covers many people, so we do not believe this breaks anonymity. Our paper is disjoint from (Masse 2017), as can be directly checked by comparing the texts.\n", "Thank you for the constructive feedback. At the moment, we haven't succeeded in scaling UORO up to the state of the art with results competitive with backpropagation on large-scale benchmarks. This may be due to the additional constraints borne by UORO, namely, both memorylessness and unbiasedness at all time scales. Such large-scale datasets (notably next-character or next-word prediction) contain difficult short-term dependencies: truncated BPTT with relatively small truncation is expected to learn those dependencies better than an algorithm like UORO, which must consider all time ranges at once.\n" ]
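The rank-one machinery discussed in this thread (the reviewer's variance worry, the authors' points 2-4) can be sketched compactly. Below is a minimal NumPy sketch of one UORO step maintaining the unbiased estimate E[s_tilde theta_tilde^T] = ds/dtheta; it assumes the model exposes a Jacobian-vector product w.r.t. the state and a vector-Jacobian product w.r.t. the parameters, the names are illustrative, and the direct dloss/dtheta term of the full algorithm is omitted for brevity.

```python
import numpy as np

def uoro_step(s_tilde, theta_tilde, jac_state_vp, jac_param_vp,
              dloss_dstate, eps=1e-7):
    """One UORO update of the rank-one factors (initialize both to zeros).

    jac_state_vp(v)  -- Jacobian-vector product (dF/ds) v         (forward mode)
    jac_param_vp(nu) -- vector-Jacobian product (dF/dtheta)^T nu  (one backprop)
    dloss_dstate     -- gradient of the current loss w.r.t. the new state
    Returns updated (s_tilde, theta_tilde) and an unbiased gradient estimate
    of the loss w.r.t. the parameters through the recurrent state.
    """
    nu = np.random.choice([-1.0, 1.0], size=s_tilde.shape)  # random signs
    s_fwd = jac_state_vp(s_tilde)        # propagate the old state-side factor
    theta_new = jac_param_vp(nu)         # new parameter-side contribution
    # Renormalization constants rho keep the variance of the estimate bounded
    # over time (the "variable rho" credited in point 3 above).
    rho0 = np.sqrt(np.linalg.norm(theta_tilde) / (np.linalg.norm(s_fwd) + eps)) + eps
    rho1 = np.sqrt(np.linalg.norm(theta_new) / (np.linalg.norm(nu) + eps)) + eps
    s_tilde = rho0 * s_fwd + rho1 * nu
    theta_tilde = theta_tilde / rho0 + theta_new / rho1
    grad_estimate = float(dloss_dstate @ s_tilde) * theta_tilde
    return s_tilde, theta_tilde, grad_estimate
```

Both factors have the size of the state and the parameters respectively, so memory stays O(n) instead of RTRL's O(n^3) compute / O(n^2) storage for the full Jacobian.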
[ 6, 7, -1, 8, -1, -1, -1, -1 ]
[ 4, 4, -1, 5, -1, -1, -1, -1 ]
[ "iclr_2018_rJQDjk-0b", "iclr_2018_rJQDjk-0b", "iclr_2018_rJQDjk-0b", "iclr_2018_rJQDjk-0b", "iclr_2018_rJQDjk-0b", "B1Ud2yqgz", "B1QPFb5eG", "r11STCqxG" ]
iclr_2018_ryup8-WCW
Measuring the Intrinsic Dimension of Objective Landscapes
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.
accepted-poster-papers
The authors make an empirical study of the "dimension" of a neural net optimization problem, where the "dimension" is defined by the minimal random linear parameter subspace dimension where a (near) solution to the problem is likely to be found. I agree with the reviewers that, in light of the authors' revisions, the results are interesting enough to be presented at the conference.
train
[ "B1IwI-2xz", "BkJsM2vgf", "BJva6gOgM", "SJohldaXz", "HkDPl_aXG", "S1e7luTQM", "SJ1yeuTmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (corresponding to a low-dimensional subspace within the original parameter), and then training the original network while restricting the projections of its parameters to lie within this subspace. Performance on this subspace is then evaluated relative to that over the full parameter space (the baseline). As an empirical standard, the authors focus on the subspace dimension that achieves a performance of 90% of the baseline. The authors then test out their measure of intrinsic dimensionality for fully-connected networks and convolutional networks, for several well-known datasets, and draw some interesting conclusions.\n\nPros:\n\n* This paper continues the recent research trend towards a better characterization of neural networks and their performance. The authors show a good awareness of the recent literature, and to the best of my knowledge, their empirical characterization of the number of latent parameters is original. \n\n* The characterization of the number of latent variables is an important one, and their measure does perform in a way that one would intuitively expect. For example, as reported by the authors, when training a fully-connected network on the MNIST image dataset, shuffling pixels does not result in a change in their intrinsic dimensionality. For a convolutional network the observed 3-fold rise in intrinsic dimension is explained by the authors as due to the need to accomplish the classification task while respecting the structural constraints of the convnet.\n\n* The proposed measures seem very practical - training on random projections uses far fewer parameters than in the original space (the baseline), and presumably the cost of determining the intrinsic dimensionality would presumably be only a fraction of the cost of this baseline training.\n\n* Except for the occasional typo or grammatical error, the paper is well-written and organized. The issues are clearly identified, for the most part (but see below...).\n\nCons:\n\n* In the main paper, the authors perform experiments and draw conclusions without taking into account the variability of performance across different random projections. Variance should be taken into account explicitly, in presenting experimental results and in the definition and analysis of the empirical intrinsic dimension itself. How often does a random projection lead to a high-quality solution, and how often does it not?\n\n* The authors are careful to point out that training in restricted subspaces cannot lead to an optimal solution for the full parameter domain unless the subspace intersects the optimal solution region (which in general cannot be guaranteed). In their experiments (FC networks of varying depths and layer widths for the MNIST dataset), between projected and original solutions achieving 90% of baseline performance, they find an order of magnitude gap in the number of parameters needed. This calls into question the validity of random projection as an empirical means of categorizing the intrinsic dimensionality of a neural network.\n\n* The authors then go on to propose that compression of the network be achieved by random projection to a subspace of dimensionality greater than or equal to the intrinsic dimension. 
However, I don't think that they make a convincing case for this approach. Again, variation is the difficulty: two different projective subspaces of the same dimensionality can lead to solutions that are extremely different in character or quality. How then can we be sure that our compressed network can be reconstituted into a solution of reasonable quality, even when its dimensionality greatly exceeds the intrinsic dimension?\n\n* The authors argue for a relationship between intrinsic dimensionality and the minimum description length (MDL) of their solution, in that the intrinsic dimensionality should serve as an upper bound on the MDL. However, they don't formally acknowledge that there is no standard relationship between the number of parameters and the actual number of bits needed to represent the model - it varies from setting to setting, with some parameters potentially requiring many more bits than others. And given this uncertain connection, and given the lack of consideration given to variation in the proposed measure of intrinsic dimensionality, it is hard to accept that \"there is some rigor behind\" their conclusion that LeNet is better than FC networks for classification on MNIST because its empirical intrinsic dimensionality score is lower.\n\n* The experimental validation of their measure of intrinsic dimension could be made more extensive. In the main paper, they use three image datasets - MNIST, CIFAR-10 and ImageNet. In the supplemental information, they report intrinsic dimensions for reinforcement learning and other training tasks on four other datasets.\n\nOverall, I think that this characterization does have the potential to give insights into the performance of neural networks, provided that variation across projections is properly taken into account. For now, more work is needed.\n\n====================================================================================================\nAddendum:\n\nThe authors have revised their paper to take into account the effect of variation across projections, with results that greatly strengthen their findings and provide a much better justification of their approach. I'm also satisfied with their explanations, and how they incorporated them into their revised version. I've adjusted my rating of the paper accordingly.\n\nOne point, however: the revisions seem somewhat rushed, due to the many typos and grammatical errors in the updated sections. I would like to encourage the authors to check their manuscript once more, very carefully, before finalizing the paper.\n====================================================================================================", "[ =============================== REVISION =========================================================]\nMy questions are answered and the paper has undergone some revision to clarify the presentation. I still maintain that it is a good paper and argue for acceptance - it provides a witty way of checking whether the network is overparameterized. 
MNIST with shuffled labels is a great example that demonstrates the value of the approach; I would, though, have moved its results into the main paper instead of the supplemental materials.\n[ ======================== END OF REVISION =========================================================]\n\nThe authors introduce random subspace training (random subspace neural nets) where, for a fixed architecture, only a subset of the parameters is trained, and the update for all the parameters is derived via a random projection which is fixed for the duration of the training. Using this type of network, the authors introduce a notion of intrinsic dimension of optimization problems - it is the minimal dimension of a subspace for which the random-subspace neural net already reaches the best (or comparable) performance.\nThe authors mention that this can be used for compressing networks - one would need to store only the seed for the random matrix and a # of params equal to the intrinsic dimension of the net. \nThey then demonstrate that the intrinsic dimension for the same problem stays the same when different architectures are chosen. Finally, they mention that neural nets with a number of params comparable to the intrinsic dimension, but which don’t use the random subspace trick, don’t achieve comparable performance. This does not always hold for CNNs.\nA model with a smaller intrinsic dimension is suggested to be better. They also suggest that intrinsic dimension might be a good approximation to the Minimum Description Length metric.\n\nMy main concern is computational efficiency. They state that if used for compressing, their method is different from post-train compression, and the authors state that they train once end-to-end. It is indeed the case that once they have found a model that performs well it is easy to compress; however, they do train a number of models (up to the intrinsic dimension) until they get to this admissible model, which I envision would be computationally very expensive.\n\nQuestions:\n- Are convnets always better on MNIST: I didn’t understand when the authors said that the intrinsic dimension of FC on shuffled data stayed the same (why) and then said that it becomes 190K - which one is correct?\n- MNIST - state the input dimension size; it is not clear how you got to that number of parameters overall\n", "While deep learning usually involves estimating a large number of variables, this paper suggests reducing this number by assuming that these variables lie in a low-dimensional subspace. In practice, this subspace is chosen randomly. Simulations show the promise of the proposed method. In particular, figure 2 shows that the number of parameters could be greatly reduced while keeping 90% of the performance; and figure 4 shows that this method outperforms the standard method. The method is clearly written and the idea looks original. \n\nA con that I have is about the comparison in figure 4. While the proposed subspace method might have the same number of parameters as the direct method, I wonder if it is a fair comparison, since the subspace method could still be more computationally expensive, due to the larger number of latent variables.\n\n", "Thanks for your kind and helpful comments! Replies below:\n\n> My main concern is computational efficiency [to compute intrinsic dimension]\n\nYou are correct that the computational cost of obtaining the intrinsic dimension can be high. 
In practice, one may find the intrinsic dimension by either (a) running many training runs at different dimensions in parallel (low wall clock time, as we used here), or (b) using binary search (low total amount of computation). In the latter case, the precision of the measured intrinsic dimension is then related to the number of iterations in binary search. Faster methods to obtain intrinsic dimension could be an interesting direction of future work. Slight inefficiency notwithstanding, we see the approach as valuable overall for the insight into network behavior that it provides, even if this insight comes at the cost of a binary search or use of a cluster.\n\n> Are convnets always better on MNIST:\n\nYes, convnets are always better on MNIST than FC networks, EXCEPT in the case where the pixel order is shuffled (see more on this below).\n\n> I didn’t understand when the authors said that the intrinsic dimension of FC on shuffled data stayed the same (why) and then said that it becomes 190K - which one is correct?\n\nSorry for this confusion, and thanks for pointing it out. The setting will make sense for those who know the Zhang et al 2017 ICLR paper in detail, but for those not familiar, the wording as submitted was indeed quite confusing. We’ve updated the text to explain the situation more completely (see Section S5.3).\n\nTo wit, there are two different shuffled MNIST datasets:\n\n(a) a \"shuffled pixel\" dataset in which the label for each example remains the same as in the normal dataset, but a random permutation of pixels is chosen once and then applied to all images in the training and test sets. FC networks solve the shuffled-pixel dataset exactly as easily as the base dataset, because there is no privileged ordering of input dimensions in FC networks -- all orderings are equivalent. Convnets suffer here because they expect local structure, but the local structure was destroyed by shuffling pixel locations.\n\n(b) a \"shuffled label\" dataset in which the images remain the same as in the base dataset, but training labels are randomly shuffled for the entire training set. Here, as in [Zhang et al, ICLR 2017], we only evaluate training accuracy, as test set accuracy remains forever at chance level (the training set X and y convey no information about test set p(y|X), because test set labels are shuffled independently of the training set).\n\nThe intrinsic dimension of FC nets on the shuffled-label training set becomes huge: 190K. This is because the network must memorize every training set label (well, 90% of them), and the capacity required to do so is large. This illustrates cleanly an important concept: while the standard dataset and the shuffled-label dataset are of exactly the same size, containing exactly the same bits, and providing exactly the same number of constraints on the learned p(y|X), the random version contains much higher entropy, a fact which we can *measure* by computing the intrinsic dimension of a neural network! In the standard dataset, the constraints imposed by the image-label pairs in the same class are very similar; thus parameters representing those constraints can be shared. The number of unique constraints is small. In contrast, the shuffled-label dataset fully randomizes the well-structured relationship among image-label pairs. Each randomized pair provides a unique constraint for the model, and the neural network has to be optimized to satisfy all the unique constraints. Hence, the number of unique constraints is very large. 
The number of unique constraints reflects the intrinsic dimension we obtained for each dataset.\n\n> MNIST - state the input dimension size; it is not clear how you got to that number of parameters overall\n\nThanks for your careful study; it was a typo! The FC network size is 784-200-200-10 (200, not 400, as we had stated). The total number of parameters (including \"+ 1\" for biases) is (784 + 1) * 200 + (200 + 1) * 200 + (200 + 1) * 10 = 199210. The draft has been updated to fix this.\n", "Thanks for your kind and helpful comments! A few replies and thoughts are below:\n\n> ...this paper suggests reducing this number by assuming that these variables lie in a low-dimensional subspace. In practice, this subspace is chosen randomly.\n\nNot to belabor minutiae, but it may be worth discussing a few subtleties here in case any were not already clear. First, the paper doesn’t assume that parameters lie in a low-dimensional subspace; instead, it asks whether they happen to (it turns out they often do) and whether, if they do, we could measure the dimensionality of that subspace using a simple approach -- by intersecting the solution space with random subspaces (this does seem to work). So there are two subspaces under consideration: the subspace of solutions, which is certainly far more structured than random, and the subspace in which we search for intersection, which we do choose to be random. For example, in the Figure 1 toy example, the solution space is 990-dimensional and highly structured, but the 10-dimensional subspace in which we find intersection is random.\n\n> Simulations show the promise of the proposed method. In particular, figure 2 shows that the number of parameters could be greatly reduced while keeping 90% of the performance; and figure 4 shows that this method outperforms the standard method. The method is clearly written and the idea looks original.\n\nIndeed, the random projection method \"outperforms the standard method\" if we consider more parsimonious models to outperform those with more parameters. However, note that while this is a fun by-product of the approach, we’d like to emphasize that the primary importance of the work is that it provides a tool that can be used to analyze and measure network behavior. By using random projections, we obtain a window into the complex, high-dimensional objective landscape that wasn’t previously reported, and we think this will be quite useful for the field!\n\nWe’ve rewritten parts of the introduction to emphasize more clearly that our paper is not primarily one proposing a better model, but a paper providing insights into network properties.\n\n> A con that I have is about the comparison in figure 4. While the proposed subspace method might have the same number of parameters as the direct method, I wonder if it is a fair comparison, since the subspace method could still be more computationally expensive, due to the larger number of latent variables.\n\nIndeed, the subspace method is always at least a little more computationally expensive, though minimally so. And, as mentioned above, the paper isn’t so much about fast tricks for better training as it is about teasing out subtleties of high-dimensional landscapes. This latter undertaking would be worth it even if computationally inconvenient; it just so happens that the approach ends up being computationally reasonable compared to the cost of ordinary training!\n", "
This calls into question the validity of random projection as an empirical means of categorizing the intrinsic dimensionality of a neural network.\n\nPerhaps we misunderstand part of this objection, but if not, we believe the approach is still quite defensible. We may see this from two directions: empirical and theoretical.\n\nEmpirical: trying many different random projections tends to produce very similar results (thanks for the suggestion to report this directly!)\n\nTheoretical: given two random hyperplanes of dimension m and n in a space of dimension D, m and n will intersect with probability 0 if m + n < D and probability 1 if m + n >= D. The transition from “almost surely will not intersect” to “almost surely will intersect” is sudden. This result is for the intersection of two random hyperplanes, not for the intersection of a structured solution set and a random hyperplane, but we expect (and above measure) similar sudden transitions for actual solution sets.\n\n> The authors argue for a relationship between intrinsic dimensionality and the minimum description length (MDL) ...However there is no standard relationship between the number of parameters and the actual number of bits\n\nWe just give a loose upper bound: one where each parameter -- native or subspace -- requires 32 bits to represent in floating point format. It’s true that some parameters may be represented using far fewer bits, but this does not make the upper bound any less valid. We’ve clarified that the upper bound refers to 32 bit floats, which were used for all experiments.\n\n> [More extensive experimental validation using other datasets]\n\nThe results presented in the paper (not including replicates re-run thanks to the above suggestions) derive from over 10,000 experimental runs from seven datasets/environments. Generally speaking we’d love to include even more, but we believe the theory and method as presented has been sufficiently validated through experiments. We certainly do believe that researchers will be able to generate great insights by applying these methods to further datasets, but at this point we feel it is defensible to delegate that to future work.\n\nNonetheless, we note extra experimentation added to the current draft after the initial submission:\n\nIn Section S8 (paragraph “The role of regularizers”), we investigate the regularization ability of subspace training vs traditional regularizers (weight decay and Dropout) and their combinations. It is shown that subspace training itself has strong regularization ability by reducing the dimensionality of solution set, and its combination with traditional regularizers can further improve validation performance. Regarding intrinsic dimension, stronger traditional regularizers generally lead to slightly larger measured intrinsic dimension, as regularizers restrict the expressiveness of the model, which must be overcome by extra dimensions.\nIn Section S10, we apply intrinsic dimension to further understand the contribution of each component in convolutional networks for image classification task: local receptive fields and weight-tying (i.e., the two aspects that a convolutional network is a special case of a FC network)\n\nAll told: again, thanks for the helpful, critical feedback! We think the paper as amended is much stronger than it was on submission and hope you will agree.\n", "Thanks for taking the time to provide such thorough comments and such detailed feedback. 
Much of the feedback alludes to elements of the work that had been critically omitted. We’ve since updated the paper to include 7 missing or modified sections and experiments, as described below.\n\n> [Summary and Pros]\n\nThanks kindly!\n\n> Cons:\n\n(We’ve slightly reordered these in response)\n\n> In the main paper, the authors perform experiments and draw conclusions without taking into account the variability of performance across different random projections. Variance should be taken into account explicitly, in presenting experimental results and in the definition and analysis of the empirical intrinsic dimension itself. How often does a random projection lead to a high-quality solution, and how often does it not?\n\nThis is an important question. Initial experiments showed that random projections of the sizes under consideration -- e.g. through a dense random matrix with shape (750 by 200,000) -- contain so many IID random elements (e.g. 150,000,000) that the difference between the luckiest random projection and the least lucky random projection among N = 10 or 20 was tiny and thus possible to ignore. However, we should definitely have described these initial experiments in the paper and otherwise justified the assumption.\n\nWe’ve rectified this omission by (a) repeating all FC MNIST experiments three times each and including error bars on each individual measurement of performance at a given dimension (see, for example, vertical error bars on each dot in the updated Fig S6), and (b) using these multiple measurements to produce a bootstrapped estimate of the error bars on measurements of intrinsic dimension (see, for example, horizontal error bars in the updated Fig S6).\n\nAs can be seen in Fig S6, the variance of performance for a given subspace dimension is very small, and the mean of performance increases monotonically (very similar to the one-run result). This indicates that luckiness in the random projection has little impact on the quality of solutions, while the subspace dimensionality has a great impact on the quality of solutions. \n\nWe further repeat our experiments three times for more network architectures and datasets, and report the mean and standard deviation (std) on bootstrapped samples. \n\nFC (depth=2, width=200) on MNIST: mean=802.25, std=67.83\nLeNet on MNIST: mean=290.0, std=0.0\nFC (depth=2, width=200) on CIFAR: mean=8277.5, std=1378.36\nLeNet on CIFAR: mean=2840.0, std=120.0\n\nThe std here is of our measurement of the intrinsic dimension based on a one-run result. Even considering the intervals reported here, the numbers can still serve as sufficient evidence for the main interest of the paper: providing insights for understanding network behavior. Hence, we can rely on a one-run result for fast computation of the intrinsic dimension, though slightly more accurate measurements can be obtained via multiple runs and a refined interval of subspace dimensionality. We’ve explained this justification in Section S5 and Fig S6 in the Supplementary Material (see updated draft). \n\nFurther, in cases where the randomness of the learning process leads to large performance variance (apart from the random projection), we have adjusted and clarified our definition of \"intrinsic dimension\" to take the variance into account. 
In reinforcement learning tasks, where the large randomness of the tasks themselves leads to very different performance in different runs, we performed the experiments multiple times and defined the intrinsic dimension as the dimension at which the mean reward crossed the threshold, where the mean reward is averaged over 30 runs for a given subspace dimension. (Details are given in Section 3.3.)\n\n> The authors then go on to propose that compression of the network... two different projective subspaces of the same dimensionality can lead to solutions that are extremely different in character or quality.\n\nAs now more carefully quantified above, the performance variation across different random projections is minimal. (Also: see the next section.)\n\n> How then can we be sure that our compressed network can be reconstituted into a solution of reasonable quality, even when its dimensionality greatly exceeds the intrinsic dimension?\n\nWe’re not training full networks, compressing them, and then reconstituting them into networks we hope will still perform well (such an approach could indeed fail). Instead, networks are trained directly in the compressed space (\"end to end\") and evaluated during training and validation exactly as they would be in production. So training and validation accuracies may be interpreted as faithfully as one would in a normal training scenario." ]
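The subspace-training recipe at the center of this exchange is compact enough to sketch end to end. Below is a minimal PyTorch sketch that trains only the d subspace coordinates of theta = theta0 + P theta_d, using a softmax classifier and a dense random basis for clarity; all names and sizes are illustrative, and the paper's larger runs would need sparse or Fastfood-style projections rather than a dense P.

```python
import torch

def subspace_train_demo(X, y, D_out, d, steps=500, lr=0.1):
    """Train a softmax classifier in a random d-dim subspace (sketch).

    The native parameters (weights + bias, flattened to length D) are never
    trained directly; we optimize theta_d and lift it through a fixed random
    projection:  theta = theta0 + P @ theta_d.
    X: float tensor (N, D_in);  y: long tensor (N,) of class indices.
    """
    D_in = X.shape[1]
    D = D_in * D_out + D_out                      # total native parameters
    theta0 = 0.01 * torch.randn(D)                # frozen random init
    P = torch.randn(D, d) / D ** 0.5              # fixed random basis
    theta_d = torch.zeros(d, requires_grad=True)  # the only trainable tensor

    opt = torch.optim.SGD([theta_d], lr=lr)
    for _ in range(steps):
        theta = theta0 + P @ theta_d              # lift to native space
        W = theta[: D_in * D_out].view(D_in, D_out)
        b = theta[D_in * D_out :]
        loss = torch.nn.functional.cross_entropy(X @ W + b, y)
        opt.zero_grad()
        loss.backward()                           # gradients flow to theta_d
        opt.step()
    return theta_d, loss.item()
```

Sweeping d upward and taking the first value whose validation performance reaches 90% of the directly trained baseline reproduces the intrinsic-dimension measurement; storing only the seed of P plus the d coordinates yields the compression scheme summarized in the second review.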
[ 7, 7, 6, -1, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1, -1 ]
[ "iclr_2018_ryup8-WCW", "iclr_2018_ryup8-WCW", "iclr_2018_ryup8-WCW", "BkJsM2vgf", "BJva6gOgM", "SJ1yeuTmM", "B1IwI-2xz" ]
iclr_2018_rkO3uTkAZ
Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks
We propose an approach to address two issues that commonly occur during training of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of data, they often do not correctly handle the structural discontinuity between disparate classes in a latent space. Second, discriminators of GANs easily forget about past samples generated by generators, incurring instability during adversarial training. We argue that these two infamous problems of unsupervised GAN training can be largely alleviated by a learnable memory network which both generators and discriminators can access. Generators can effectively learn representations of training samples to understand underlying cluster distributions of data, which eases the structural discontinuity problem. At the same time, discriminators can better memorize clusters of previously generated samples, which mitigates the forgetting problem. We propose a novel end-to-end GAN model named memoryGAN, which involves a memory network that is unsupervisedly trainable and integrable into many existing GAN models. With evaluations on multiple datasets such as Fashion-MNIST, CelebA, CIFAR10, and Chairs, we show that our model is probabilistically interpretable, and generates realistic image samples of high visual fidelity. The memoryGAN also achieves the state-of-the-art inception scores over unsupervised GAN models on the CIFAR10 dataset, without any optimization tricks and weaker divergences.
accepted-poster-papers
I am going to recommend acceptance of this paper despite being worried about the issues raised by reviewer 1. In particular: (1) the best possible inception score would be obtained by copying the training dataset; (2) the highest visual quality samples would be obtained by copying the training dataset; (3) perturbations (in the hidden space of a convnet) of training data might not be perturbations in l2, and so one might not find a close nearest neighbor with an l2 search; (4) it has been demonstrated in other works that perturbations of convnet features of training data (e.g. trained as auto-encoders) can make convincing "new samples"; or more generally, paths between nearby samples in the hidden space of a convnet can be convincing new samples. These together suggest the possibility that the method presented is not necessarily doing a great job as a generative model or as a density model (it may be, we just can't tell...), but it is doing a good job at hacking the metrics (inception score, visual quality). This is not an issue with only this paper, and I do not want to punish the authors of this paper for the failings of the field; but this work, especially because of its explicit use of training examples in the memory, nicely exposes the deficiencies in our community's methodology for evaluating GANs and other generative models.
train
[ "Bko3dzDlG", "SyzkuzYxG", "S1ck4rYxM", "rkQUyPd7z", "ryl76gumM", "BJMceY9Mf", "ryYDlFcfG", "S16Vet5Mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "In summary, the paper introduces a memory module to the GANs to address two existing problems: (1) no discrete latent structures and (2) the forgetting problem. The memory provides extra information for both the generation and the discrimination, compared with vanilla GANs. Based on my knowledge, the idea is novel and the Inception Score results are excellent. However, there are several major comments should be addressed, detailed as follows:\n\n1. The probabilistic interpretation seems not correct.\n\nAccording to Eqn (1), the authors define the likelihood of a sample x given a slot index c as p(x|c=i) = N(q; K_i, sigma^2), where q is the normalized output of a network mu given x. It seems that this is not a well defined probability distribution because the Gaussian distribution is defined over the whole space while the support of q is restricted within a simplex due to the normalization. Then, the integral over x should be not equal to 1 and hence all of the probabilistic interpretation including the equations in the Section 3. and results in the Section 4.1. are not reliable. I'm not sure whether there is anything misunderstood because the writing of the Section 3 is not so clear. \n\n2. The writing of the Section 3 should be improved.\n\nCurrently, the Section 3 is not easy to follow for me due to the following reasons. First, there lacks a coherent description of the notations. For instance, what's the difference between x and x', used in Section 3.1.1 and 3.1.2 respectively? According to the paper, both denote a sample. Second, the setting is somewhat unclear. For example, it is not natural to discuss the posterior without the clear definition of the likelihood in Eqn (1). Third, a lot of details and comparison with other methods should be moved to other parts and the summary of the each part should be stated explicitly and clearly before going into details.\n\n3. Does the large memory hurt the generalization ability of the GANs?\n\nFirst of all, I notice that the random noise is much lower dimensional than the memory, e.g. 2 v.s. 256 on affine-MNIST. Does such large memory hurt the generalization ability of GANs? I suspect that most of the information are stored in the memory and only small change of the training data is allowed. I found that the samples in Figure 1 and Figure 5 are very similar and the interpolation only shows a very small local subspace near by a training data, which cannot show the generalization ability. Also note that the high Inception Score cannot show the generalization ability as well because memorizing the training data will obtain the highest score. I know it's hard to evaluate a GAN model but I think the authors can at least show the nearest neighbors in the training dataset and the training data that maximizes the activation of the corresponding memory slot together with the generated samples to see the difference.\n\nBesides, personally speaking, Figure 1 is not so fair because a MemoryGAN only shows a very small local subspace near by a training data while the vanilla GAN shows a large subspace, making the quality of the generation different. 
The MemoryGAN also has failure samples in the whole latent space, as shown in Figure 4.\n\nOverall, I think this paper is interesting, but currently it does not reach the acceptance threshold.\n\nI change the rating to 6 based on the revised version, in which most of the issues are addressed.", "MemoryGAN is proposed to handle structural discontinuity (avoiding unrealistic samples) for the generator, and the forgetting behavior of the discriminator. The idea of incorporating a memory mechanism into GANs is interesting, and the authors give a nice interpretation of why this is needed, and clearly demonstrate which component helps (including the connections to previous methods). \n\nMy major concerns:\n\nFigure 1 is questionable in demonstrating the advantage of the proposed MemoryGAN. My understanding is that the four z's used in DCGAN and MemoryGAN are \"randomly sampled\" and fixed, interpolation is done in latent space, and the results are propagated to x to show the samples. Take MNIST for example: it can be seen that the DCGAN has to (1) transit among digits in different classes, while MemoryGAN only has to (2) transit among digits in the same class. Task 1 is significantly harder than task 2, so it is no surprise that DCGAN generates unrealistic images. A better experiment is to fix four digits from different classes first, find their corresponding latent codes, do the interpolation, and propagate back to sample space to visualize the results. If the proposed technique can truly handle structural discontinuity, it will \"jump\" over the sample manifold from one class to another, and thus avoid unrealistic samples. Also, the current illustration indicates that the samples generated by MemoryGAN are not diverse.\n\nIt seems the memory mechanism can bring major computational overhead; is it possible to provide a comparison of running time?\n\nTo what degree can the MemoryGAN handle structural discontinuity? It can be seen from Table 2 that a larger improvement is observed when tested on a more diverse dataset. For example, the improvement gap from MNIST to CIFAR is larger. If the MemoryGAN can truly deal with structural discontinuity, results on generating a wide range of different images for ImageNet may endow the paper with higher impact.\n\nThe authors should consider making their code reproducible and public. \n\n\nMinor comments:\n\nIn Section 4.3, please fix \"Results in 2\" as \"Results in Table 2\".\n\n\n", "[Overview]\n\nIn this paper, the authors proposed a novel model called MemoryGAN, which integrates a memory network with a GAN. As claimed by the authors, MemoryGAN is aimed at addressing two problems of GAN training: 1) the difficulty of modeling the structural discontinuity between disparate classes in the latent space; and 2) the catastrophic forgetting problem of the discriminator about past samples synthesized by the generator during training. It exploits the life-long memory network and adapts it to GANs. It consists of two parts, the discriminative memory network (DMN) and the Memory Conditional Generative Network (MCGN). The DMN is used for discriminating input samples by integrating the memory learnt in the memory network, and the MCGN is used for generating images based on a random vector and the sampled memory from the memory network. In the experiments, the authors evaluated memoryGAN on three datasets, CIFAR-10, affine-MNIST and Fashion-MNIST, and demonstrated its superiority to previous models. Through an ablation study, the authors further showed the effects of the separate components in memoryGAN. \n\n[Strengths]\n\n1. This paper is well-written. 
All modules in the proposed model and the experiments were explained clearly. I enjoyed reading the paper very much.\n\n2. The paper presents a novel method called MemoryGAN for GAN training. To address the two infamous problems mentioned in the paper, the authors proposed to integrate a memory network into the GAN. Through the memory network, MemoryGAN can explicitly learn the data distribution of real images and fake images. I think this is a very promising and meaningful extension to the original GAN. \n\n3. With MemoryGAN, the authors achieved the best Inception Score on CIFAR-10. Through an ablation study, the authors demonstrated that each part of the model helps to improve the final performance.\n\n[Comments]\n\nMy comments are mainly about the experiment part:\n\n1. In Table 2, the authors show the Inception Score of images generated by DCGAN in the last row. On CIFAR-10, it is ~5.35. As the authors mentioned, removing EM, MCGN and Memory will result in a conventional DCGAN. However, as far as I know, DCGAN can achieve an Inception Score > 6.5 in general. I am wondering what makes such a big difference between the numbers reported in this paper and in other papers?\n\n2. In the experiments, the authors set N = 16,384 and M = 512, and z has dimension 16. I did not understand why the memory size is so large. Take CIFAR-10 as an example: its training set contains 50k images. Using such a large memory size, each memory slot will merely account for several samples. Is a large memory size necessary to make MemoryGAN work? If not, the authors should also show an ablation study on the effect of different memory sizes; if it is, please explain why that is. Also, the authors should mention the training time compared with DCGAN. Updating a memory of such a large size seems very time-consuming.\n\n3. Still on the memory size in this model: I am curious about the results if the size is decreased to a number the same as, or comparable to, the number of image categories in the training set. As the authors claimed, if the memory network could learn to cluster the training data into different categories, we should be able to see some interesting results by sampling the keys and generating categorical images.\n\n4. The paper should be compared with InfoGAN (Chen et al. 2016), and the authors should explain the differences between the two models in the related work. Similar to MemoryGAN, InfoGAN also did not need any data annotations, but could learn the latent code flexibly.\n\n[Summary]\n\nThis paper proposed a new model called MemoryGAN for image generation. It combined a memory network with a GAN, and achieved state-of-the-art performance on CIFAR-10. The arguments that MemoryGAN could solve the two infamous problems make sense. As I mentioned above, I did not understand why the authors used such a large memory size. More explanations and experiments should be conducted to justify this setting. Overall, I think MemoryGAN opened a new direction for GANs and is worth exploring further.\n\n", "We'd like to thank Reviewer 1 again for the constructive comments, which are greatly helpful in making our paper better. We are very glad that our rebuttal clarifies Reviewer 1's concerns.", "Thanks for the detailed rebuttal. I'm glad to see most of the issues are addressed in the revision and I'd like to change the rating to 6.", "We thank Reviewer 1 for the positive and constructive reviews. Below, we respond to each comment in detail. Please see the blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. 
The probabilistic interpretation.\nThanks for pointing out the lack of clarity in our formulation. First of all, the normalizing constant does not affect the model formulation, because it is a common denominator in the posterior. However, as Reviewer 1 pointed out, we use distributions on a unit sphere, and thus they should be Von Mises-Fisher (vMF) distributions with a concentration constant k=1, instead of Gaussian distributions. Without changing any fundamentals of MemoryGAN, we change the Gaussian mixtures to Von Mises-Fisher mixtures in the draft. We thank Reviewer 1 for the correction.\n\n2. Writing improvement of Section 3.\n(1) The difference between x and x' in Sections 3.1.1-3.1.2.\nWe used x to denote samples for updating the discriminator parameters, and x' for updating the memory module. Since every training sample goes through these two updating operations, there is no need to use both, and we unify them to x.\n(2) Discussing the posterior without a clear definition of the likelihood in Eq.(1). \nThe likelihood for Eq.(1) is identical to that of the standard vMF mixture model. Thus, we omitted it and directly introduced the posterior equation. We will clarify this.\n(3) Overall organization \nWe will re-organize the draft so that the key ideas in each part are explicitly summarized before the details.\n\n3. Generalization ability.\nAs Reviewer 1 suggested, we add an additional result to Figure 5, where for each sample produced by MemoryGAN (in the left-most column), the seven nearest images in the training set are shown in the following columns. Evidently, our MemoryGAN generates novel images rather than merely memorizing and retrieving the images in the training set.\nThe memory is used to represent not only positive samples but also possible fake samples. Thus, the memory size is rather large (n=16384) for the CIFAR10 dataset. That is, the more diverse the dataset is, the larger the memory size required to represent both kinds of variability. In our experiments, we set the memory size based on the performance on the validation set. \n\n4. Fig.1.\nInitially, we fixed the discrete latent variable c for MemoryGAN, because it is a memory index, and thus it is meaningless to interpolate over c. However, we follow the Reviewer’s rationale and update Fig.1 in the new draft. Please check it.\nIn the new Fig.1.(b,d), we first randomly sample both (z,c), shown at the four corners in blue boxes. We then generate 64 images by interpolating both (z,c). However, since interpolation over c is meaningless, we take the key values K_c of the four randomly sampled c’s, and then perform interpolation over their K_c’s. Then, for each interpolated K_c’, we find the memory slot c = argmax p(c|K_c’), i.e. the memory index whose posterior is the highest with respect to K_c’.\nAs shown in Fig.1.(b,d), different classes are shown at the four corners, and the other samples gradually change, but no structural discontinuity occurs. We hope the modified Fig.1 delivers the merits of MemoryGAN more intuitively.\n\n5. Failure cases of Fig.4.\nAs Reviewer 1 pointed out, MemoryGAN also has failure samples in the whole latent space, as shown in Figure 4. Since our approach is completely unsupervised, sometimes a single memory slot may include similar images from different classes. This causes failure cases. Nevertheless, a significant proportion of MemoryGAN's memory slots contain similarly shaped images from a single class, which leads to much better performance than existing unsupervised GAN models. 
\n", "We thank Reviewer 2 for positive and constructive reviews. Below, we respond each comment in details. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Fig.1.\nInitially, we fixed the discrete latent variable c for MemoryGAN, because it is a memory index, and thus it is meaningless to interpolate over c. However, we follow Reviewer’s rationale and update Fig.1 in the new draft. Please check it.\nIn new Fig.1.(b,d), we first randomly sample both (z,c) shown at the four corners in blue boxes. We then generate 64 images by interpolating both (z,c). However, since the interpolation over c is meaningless, we take key values K_c of the four randomly sampled c’s, and then perform interpolation over their K_c’s. Then, for each interpolated K_c’, we find the memory slot c = argmax p(c|K_c’), i.e. the memory index whose posterior is the highest with respect to K_c’.\nAs shown in Fig.1.(b,d), different classes are shown at the four corners, and other samples gradually change, but no structural discontinuity occurs. We hope the modified Fig.1 delivers the merits of MemoryGAN more intuitively.\n\n2. Computation overhead.\nAs we replied to Reviewer 2, we measure the training time per epoch for MemoryGAN (4,128K parameters) and DCGAN (2,522K parameters), which are 135 sec and 124 sec, respectively.\nIt means MemoryGAN is only 8.9% slower than DCGAN for training, even with a scalable memory module. At test time, since only generator is used, there is no time difference between MemoryGAN and DCGAN. \n\n3. ImageNet experiments.\nWe observed that the memory module significantly helps improve the performance when using highly diverse datasets. For example, inception scores are higher for CIFAR10 than for FashionMNIST. Thus, as Reviewer 2 suggested, we can easily expect that the our MemoryGAN works better for the ImageNet dataset. We did not test with ImageNet, mainly because of too long training time (more than two weeks by our estimation). However, we will do it as a future work.\n\n4. Source code and typos.\nWe plan to make public the source code. \nThank you for correct typos!\n", "We thank Reviewer 3 for positive and constructive reviews. Below, we respond each comment in details. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. DCGAN inception scores.\nThanks for a correction. As R3 pointed out, the DCGAN inception score of the original paper is 6.54+-0.67. The value 5.35 that we reported previously was the score of “MemoryGAN without memory”, which is identical to the DCGAN in terms of model structure. That was the reason why we named it as DCGAN. However, the “MemoryGAN without memory” had different details from the DCGAN, including the ELU activation (instead of ReLU and Leaky ReLU) and layer-normalization (instead of batch normalization). To resolve the confusion, we change the values of Table 1 to 6.54+-0.67 (the numbers reported in the original DCGAN paper).\n\n2. The memory size of MemoryGAN.\nIn our experiments, we set the memory size based on the performance on the validation set. The memory is used to represent not only positive samples but also possible fake samples. Thus, the memory size is rather large (n=16384), for the CIFAR10 dataset whose size is 50,000. That is, the more diverse the dataset is, the larger memory size is required to represent both variability. When we used a half-size memory (n=8192), the inception score for CIFAR10 decreased from 8.04 to 6.71. 
\nAs Reviewer 3 suggested, we tested decreasing the memory size to n=16, which is similar to the number of classes, on the Fashion-MNIST and CIFAR10 datasets. We obtain an inception score of 6.14 for Fashion-MNIST with n=16, which is slightly lower than the reported score of 6.39 with n=4096. On the other hand, for CIFAR10, the inception score significantly decreases from 8.04 with n=16384 to 3.06 with n=16. These results indicate that the intra-class variability of Fashion-MNIST is small, while that of CIFAR10 is very high.\n\n3. Training/Test time.\nThe training times per epoch for MemoryGAN (4,128K parameters) and DCGAN (2,522K parameters) are 135 sec and 124 sec, respectively. This means MemoryGAN is only 8.9% slower than DCGAN for training, even with a scalable memory module. At test time, since only the generator is used, there is no time difference between MemoryGAN and DCGAN.\n\n4. Comparison with InfoGAN.\nThere are two key differences between InfoGAN and MemoryGAN. First, InfoGAN implicitly encodes the latent cluster information of the data into its model parameters, while MemoryGAN explicitly maintains the information about the whole training set using a life-long memory network. Thus, MemoryGAN keeps track of the current cluster information stably and flexibly without suffering from forgetting old samples. Second, MemoryGAN explicitly offers various distributions, like the prior distribution p(c), the conditional likelihood p(x|c) and the marginal likelihood p(x), unlike InfoGAN. Such interpretability is useful for designing or training the models.\n
[ 6, 6, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkO3uTkAZ", "iclr_2018_rkO3uTkAZ", "iclr_2018_rkO3uTkAZ", "ryl76gumM", "BJMceY9Mf", "Bko3dzDlG", "SyzkuzYxG", "S1ck4rYxM" ]
iclr_2018_H1uR4GZRZ
Stochastic Activation Pruning for Robust Adversarial Defense
Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration.
accepted-poster-papers
This is a borderline paper. The reviewers are happy with the simplicity of the proposed method and the fact that it can be applied after training, but are concerned by the lack of theory explaining the results. I will recommend accepting, but I would ask the authors to add the additional experiments they have promised, and would also suggest experiments on ImageNet.
train
[ "ryrXQ4wyz", "SJFnpOYxM", "ry5D1Z5xf", "HJvA3yQQG", "rkk5517Qf", "B1DQ5J77z", "HJRkw1X7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper investigates a new approach to prevent a given classifier from adversarial examples. The most important contribution is that the proposed algorithm can be applied post-hoc to already trained networks. Hence, the proposed algorithm (Stochastic Activation Pruning) can be combined with algorithms which prevent from adversarial examples during the training.\n\nThe proposed algorithm is clearly described. However there are issues in the presentation.\n\nIn section 2-3, the problem setting is not suitably introduced.\nIn particular one sentence that can be misleading:\n“Given a classifier, one common way to generate an adversarial example is to perturb the input in direction of the gradient…”\nYou should explain that given a classifier with stochastic output, the optimal way to generate an adversarial example is to perturb the input proportionally to the gradient. The practical way in which the adversarial examples are generated is not known to the player. An adversary could choose any policy. The only thing the player knows is the best adversarial policy.\n\nIn section 4, I do not understand why the adversary uses only the sign and not also the value of the estimated gradient. Does it come from a high variance? If it is the case, you should explain that the optimal policy of the adversary is approximated by “fast gradient sign method”. \n\nIn comparison to dropout algorithm, SAP shows improvements of accuracy against adversarial examples. SAP does not perform as well as adversarial training, but SAP could be used with a trained network. \n\nOverall, this paper presents a practical method to prevent a classifier from adversarial examples, which can be applied in addition to adversarial training. The presentation could be improved.\n", "This paper propose a simple method for guarding trained models against adversarial attacks. The method is to prune the network’s activations at each layer and renormalize the outputs. It’s a simple method that can be applied post-training and seems to be effective.\n\nThe paper is well written and easily to follow. Method description is clear. The analyses are interesting and done well. I am not familiar with the recent work in this area so can not judge if they compare against SOTA methods but they do compare against various other methods.\n\nCould you elaborate more on the findings from Fig 1.c Seems that the DENSE model perform best against randomly perturbed images. Would be good to know if the authors have any intuition why is that the case.\n\nThere are some interesting analysis in the appendix against some other methods, it would be good to briefly refer to them in the main text.\n\nI would be interested to know more about the intuition behind the proposed method. It will make the paper stronger if there were more content arguing analyzing the intuition and insight that lead to the proposed method.\n\nAlso would like to see some notes about computation complexity of sampling multiple times from a larger multinomial.\n\nAgain I am not familiar about different kind of existing adversarial attacks, the paper seem to be mainly focus on those from Goodfellow et al 2014. Would be good to see the performance against other forms of adversarial attacks as well if they exist.", "The authors propose to improve the robustness of trained neural networks against adversarial examples by randomly zeroing out weights/activations. 
Empirically the authors demonstrate, on two different task domains, that one can trade off some accuracy for a little robustness -- qualitatively speaking.\n\nOn one hand, the approach is simple to implement and has minimal impact computationally on pre-trained networks. On the other hand, I find it lacking in terms of theoretical support, other than the fact that the added stochasticity induces a certain amount of robustness. For example, how does this compare to random perturbation (say, zero-mean) of the weights? This adds stochasticity as well, so why would or wouldn't this work? The authors do not give any insight in this regard.\n\nOverall, I still recommend acceptance (weakly) since the empirical results may be valuable to a general practitioner. The paper could be strengthened by addressing the issues above as well as including more empirical results (if nothing else).\n\n", "We thank Reviewer 2 for the thorough comments. We are glad that the reviewer appreciated both our exposition and the value of an adversarial defense technique that can be applied post-hoc. We reply to specific points below:\n\n1. You are correct, the defender does not know what policy any actual adversary will use, only what the optimal adversary might do, thus the objective of minimizing the worst-case performance. We are improving the draft to be clearer in this regard.\n\n2. Regarding: “In section 4, I do not understand why the adversary uses only the sign and not also the value of the estimated gradient.”: The reason why we are considering only the sign is because we cap the infinity norm of the adversarial perturbation. This leads to taking a step of equal size in each input dimension, and thus the gradient magnitude does not come into play. This approach is standard in the recent academic study of adversarial examples and follows work by Goodfellow et al. (2014), which showed that imperceptible adversarial examples could be produced efficiently in this manner. \n\nOne motivation for considering the infinity norm (vs L2 or L1) for constraining the size of an adversarial perturbation is that it accords more closely with perceptual similarity. For example, it’s possible to devise a perturbation with small L2 norm that is perceptually obvious because it moves a small group of pixels a large amount. \n\nNaturally, a stronger adversary might pursue an iterative approach rather than making one large perturbation. To this end, we are currently running experiments with iterative attacks and the initial results are promising - SAP continues to significantly outperform the dense model. We will add these results to the paper when they are ready.\n\n3. We are grateful for the reviewer’s suggestions for improving the exposition and are currently working to revise the draft in accordance with these recommendations. To start, we have improved some of the (previously) confusing language that might have failed to distinguish between the optimal adversary and some arbitrary adversary which may not apply the optimal perturbation.\n", "Thanks for your clear review of our paper. We are glad that you appreciated both the method and the clarity of exposition. \n\n1. Regarding Fig 1.c: While dense models are susceptible to adversarial attack, they are actually quite robust to random noise. The purpose of reporting the results of this experiment is to provide context for the other results. Because dense models are not especially vulnerable to random noise, we are not surprised that they perform well here. \n\n2. 
Thanks for the suggestion that the analysis in the appendix should be summarized within the body of the paper. Per your request, we have added an additional subsection (5.3) in the current draft that briefly describes the baselines, and we have included a corresponding figure that shows the quantitative results for each.\n\n3. While we are reluctant to present an explanation for a phenomenon that we do not fully understand, we are happy to share the intuitions that guided us in developing the algorithm: \n\nOriginally we were looking at sparsifying the weights and/or activations of the network. We were encouraged by results, e.g. https://arxiv.org/abs/1510.00149, showing high accuracy with sparsified weights (as by pruning). We thought that by sparsifying a network, we might maintain high accuracy while lowering the Lipschitz constant and thus conferring some robustness against small perturbations. We later drew some inspiration from randomized algorithms that sparsify matrices by randomly dropping entries according to their weights and scaling up the survivors to produce a sparse matrix with similar spectral properties to the original.\n\n4. Sampling from the multinomial is fast. Without getting into detail about how many random bits are needed, given uniform samples, we can convert to a sample from a multinomial by performing a binary search. So it’s roughly k log(n) where k is the number of samples and n is the number of activations. As a practical concern, sampling from the multinomial in our algorithms does not comprise a significant computational obstacle.\n\n5. As you correctly point out, in our experiments we adopt the approach from Goodfellow et al. of evaluating with adversarial perturbations produced by taking a single step with capped infinity norm. However, we generate these attacks differently for each model. Against our stochastic models, the adversary produces the attack by estimating the gradient with MC samples. \n\n6. Per your suggestions we have compared against a stronger mode of attack, namely an iterative update where we take multiple small updates, each of capped infinity norm. In these experiments, SAP continues to outperform the dense model significantly. We are currently compiling these results and will add them to the draft when ready. \n", "Thanks for the thoughtful review of our paper. We are glad that you recognize the empirical strength of the result and the simplicity of the method. We also share your desire for greater theoretical understanding.\n\nRegarding: “how does this compare to random perturbation (say, zero-mean) of the weights?”.\nWe ran this experiment, and found that it did not help. Additionally, for a more direct comparison, we compared against zero-mean Gaussian noise applied to the activations. We call this method Random Noisy Activations (RNA). It was previously described only in Appendix B, but we have now added a brief description to section 5 and reported the quantitative results in Figure 5.\n\nDespite extensive empirical study, precisely why our method works but random noise on the activations does not remains unclear. While we can imagine some ways of spinning a theoretical story post-hoc, the honest answer is that we do not yet possess a solid theoretical explanation. We share your desire for a greater understanding and plan to investigate this direction further in future work.\n\n***TL;DR: Per your suggestions, we have improved the draft by running additional experiments. 
Please find in Figure 5 results for 0-mean Gaussian noise applied to weights with sigma values {.01, .02, …, .05}, as well as results for several other sensible baselines and greater detail in Appendix B.***\n", "We would like to thank the reviewers for their thoughtful responses to our paper. We are glad to see that there is a consensus among the reviewers to accept and are grateful to each of the reviewers for critical suggestions that will help us to improve the work. Please find individual replies to each of the reviews in the respective threads." ]
[ 6, 7, 6, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1uR4GZRZ", "iclr_2018_H1uR4GZRZ", "iclr_2018_H1uR4GZRZ", "ryrXQ4wyz", "SJFnpOYxM", "ry5D1Z5xf", "iclr_2018_H1uR4GZRZ" ]
iclr_2018_HkxF5RgC-
Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip
Recurrent Neural Networks (RNNs) are powerful tools for solving sequence-based problems, but their efficacy and execution time are dependent on the size of the network. Following recent work in simplifying these networks with model pruning and a novel mapping of work onto GPUs, we design an efficient implementation for sparse RNNs. We investigate several optimizations and tradeoffs: Lamport timestamps, wide memory loads, and a bank-aware weight layout. With these optimizations, we achieve speedups of over 6x over the next best algorithm for a hidden layer of size 2304, batch size of 4, and a density of 30%. Further, our technique allows for models of over 5x the size to fit on a GPU for a speedup of 2x, enabling larger networks to help advance the state-of-the-art. We perform case studies on NMT and speech recognition tasks in the appendix, accelerating their recurrent layers by up to 3x.
accepted-poster-papers
The reviewers find the work interesting and well made, but are concerned that ICLR is not the right venue for the work. I will recommend that the paper be accepted, but ask the authors to add the NMT results to the main paper (any other non-synthetic applications they could add would be helpful).
train
[ "rkoKvifef", "BJ6cxWFlM", "H1PcMAKeG", "SkzfXdpmf", "SJ5iMUt7f", "BkwIocfQz", "S1gFF5fmM", "HJ4pB5M7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "The paper devises a sparse kernel for RNNs which is urgently needed because current GPU deep learning libraries (e.g., CuDNN) cannot exploit sparsity when it is presented and because a number of works have proposed to sparsify/prune RNNs so as to be able to run on devices with limited compute power (e.g., smartphones). Unfortunately, due to the low-level and GPU specific nature of the work, I would think that this work will be better critiqued in a more GPU-centric conference. Another concern is that while experiments are provided to demonstrate the speedups achieved by exploiting sparsity, these are not contrasted by presenting the loss in accuracy caused by introducing sparsity (in the main portion of the paper). It may be the case by reducing density to 1% we can speedup by N fold but this observation may not have any value if the accuracy becomes abysmal.\n\nPros:\n- Addresses an urgent and timely issue of devising sparse kernels for RNNs on GPUs\n- Experiments show that the kernel can effectively exploit sparsity while utilizing GPU resources well\n\nCons:\n- This work may be better reviewed at a more GPU-centric conference\n- Experiments (in main paper) only show speedups and do not show loss of accuracy due to sparsity", "This paper introduces sparse persistent RNNs, a mechanism to add pruning to the existing work of stashing RNN weights on a chip. The paper describes the use additional mechanisms for synchronization and memory loading. \n\nThe evaluation in the main paper is largely on synthetic workloads (i.e. large layers with artificial sparsity). With evaluation largely over layers instead of applications, I was left wondering whether there is an actual benefit on real workloads. Furthermore, the benefit over dense persistent RNNs for OpenNMT application (of absolute 0.3-0.5s over dense persistent rnns?) did not appear significant unless you can convince me otherwise. \n\nStoring weights persistent on chip should give a sharp benefit when all weights fit on the chip. One suggestion I have to strengthen the paper is to claim that due to pruning, now you can support a larger number of methods or method configurations and to provide examples of those.\n\nTo summarize, the paper adds the ability to support pruning over persistent RNNs. However, Narang et. al., 2017 already explore this idea, although briefly. Furthermore, the gains from the sparsity appear rather limited over real applications. I would encourage the authors to put the NMT evaluation in the main paper (and perhaps add other workloads). Furthermore, a host of techniques are discussed (Lamport timestamps, memory layouts) and implementing them on GPUs is not trivial. However, these are well known and the novelty or even the experience of implementing these on GPUs should be emphasized.", "The paper proposes improving performance of large RNNs by combing techniques of model pruning and persistent kernels. The authors further propose model-pruning optimizations which are aware of the persistent implementation.\n\nIt's not clear if the paper is relevant to the ICLR audience due to its emphasize on low-level optimization which has little insight in learning representations. The exposition in the paper is also not well-suited for people without a systems background, although I'll admit I'm mostly using myself as a proxy for the average machine learning researcher here. 
For instance, the authors could do more to explain Lamport Timestamps than provide a 1974 citation.\n\nModulo problems of relevance and expected audience, the paper is well-written and presents useful improvements in the performance of large RNNs, and the work has potential for impact in industrial applications of RNNs. The work is clearly novel, and the contributions are clear and well-justified using experiments and ablations.", "Thanks to the feedback of the reviewers, we have updated our submission. The key differences are these:\n- All performance numbers are now gathered on a V100 GPU\n- We added more information about Lamport timestamps in the text to clarify their behavior and benefit\n- We add Deep Speech 2 to the case study in the appendix (up to a 3x speedup for baseline accuracy)\n- We mention the speedups from the case study in the main text to make concrete layer speedups on real tasks clear to readers\n- We make the \"side effect\" of our algorithm more clear: it is now worthwhile to prune recurrent layers, whereas persistent kernels before would regularly outperform sparse GEMMs for a target accuracy. We see this point as a key contribution.", "When I read the paper (and not the Appendix), I was left wondering how much this benefits real applications as opposed to synthetic workloads. Figure 3 is in the right direction. But can you connect the dots for the reader and describe some applications which especially benefit from large layers of the specific sizes you have mentioned?\n\nI agree that the optimizations are non-trivial, but if they can be made interesting to the larger ICLR community, it will be great! \n\nI have upgraded my score. I still find the paper a little bit weak on novelty, but I am confident that you will fix the other issues/clarifications raised in my review in the final revision.", "Thank you for your comments and observations. Let us first address the critical importance of network accuracy after pruning. We completely agree that large speed improvements are a moot point if the accuracy does not hold up. However, an exhaustive study of the sparsity/accuracy tradeoff is out of the scope of this paper. Instead, we refer to several other published results that show good accuracy results for recurrent networks around the 10% density point [Han et al. 2016(a,b), Narang et al. 2017, See et al. 2016, Anonymous 2018]. So, we centered our experiments around this density and swept from 1% to 30% to cover a wider range. After submission, densities down to 3% have been used to achieve state-of-the-art results on some workloads (https://blog.openai.com/block-sparse-gpu-kernels/), and we show good speedups for higher densities, especially when the layer size is too large for a dense persistent kernel. Finally, we provided more accuracy vs. sparsity vs. speed results in the appendix to show why our technique is important. We'll gladly move this analysis into the main paper if extending beyond 8 pages is preferable to merely including references to other works with accuracy results.\n\nWe feel that this work is relevant to the ICLR audience. As you noted, sparsity is not regularly accelerated by deep learning libraries. More importantly, as we show in our appendix, some recurrent layers are actually better off staying dense and using a persistent approach if possible (without our sparse persistent technique). Simply increasing accuracy for the same number of effective parameters is not sufficient to claim success; the network's speed may not increase over a dense network! 
Thus, one of the fundamental benefits of sparsity is tempered in some cases. Our main contribution shifts this balance back in favor of pruning for recurrent layers.", "Thank you for your comments and suggestions. It is fair to wonder about the performance on real workloads; we decided to show the performance of our technique over a wide range of synthetic workloads so that practitioners can look to see where their application lives in the space and judge the relative performance accordingly. Our appendix shows the performance of the recurrent layers of one particular application.\n\nWith respect to the speedup over the dense persistent LSTMs in the OpenNMT network, 0.3-0.5s (looking at layers of the same size) is not the proper comparison. Instead, we think that the comparison should be between networks of the same accuracy. In this case, the improvement is up to 0.7ms (from 1.26ms for a BLEU score of 24.62 to 0.55ms for a BLEU score of 24.60). Also, this is a per-layer improvement; a full network will be composed of several such layers, leading to a larger absolute improvement for the network as a whole. More important than the absolute speedup for a single iteration, however, is the potential speedup for training networks. This absolute 0.7ms reduces the run time to 44% of the previous time, roughly halving the time needed to train the network to a given accuracy. We'll make this clear in the final text. Finally, it's worth noting that without our contributions, the benefit of sparsity would be negative: existing sparse methods are worse than persistent kernels for a given accuracy or speed target on the workloads we studied.\n\nWe have a question about your suggestion to claim support for a larger number of methods. We do claim this: Figure 3 shows that we can support larger layers in a persistent approach than existing methods can. Please let us know if we've misunderstood; we welcome opportunities to strengthen this paper!\n\nWe will certainly move the NMT evaluation into the main paper if the reviewers think it warrants the extra space. We agree that it naturally belongs there.\n\nWe're also willing to emphasize the non-trivial aspects of the optimizations we used, as opposed to the brief mention in the past work you point out. It is exactly these optimizations which take up the bulk of the main paper; was there something in particular you suggest adding?", "Thank you for your time and comments. With respect to showing the relevance to ICLR, we think the results of our work are very important. Let us try to clarify this relevance by presenting the results of the appendix _without_ the context of the main paper: \"For the recurrent layers of the network we studied, there's no need to prune the weights. A dense persistent implementation of the network is faster for the same accuracy as a pruned network, or more accurate at a given speed target.\" \n\nThere has been significant interest in model pruning, mostly for the purposes of increasing performance. However, realizing increased performance often requires some type of structured pruning, such as pruning filters or channels from convolutional networks, or leaving dense blocks in recurrent networks. (As Narang et al. noted in their 2017 work at ICLR, cuSPARSE achieves limited speedup for unstructured sparsity, especially for large batch sizes.) However, imposing structure on the sparsity reduces the degrees of freedom; unstructured sparsity can represent a proper superset of the patterns that any structured sparse layer can represent. 
Therefore, it is preferable from a model's point of view to use sparsity without any structure (if sparsity is to be used at all, and second-order regularization effects of imposed structure notwithstanding). So, we are motivated to find an efficient method, presented in the main section, to accelerate recurrent layers with unstructured sparsity.\n\nHowever, presenting an efficient method is only half the story; to start filling in the pieces, we included our appendix (as an appendix, in order to stay within the suggested page limit). We show how both accuracy and speed change with sparsity. In particular, without our method, unstructured sparsity (preferred by the model) is inferior to a dense network. Dense persistent kernels are faster and more accurate than their pruned cuSPARSE counterparts for the model we studied. We will make these points more clear in the next version of the text -- as well as spending some more space on describing Lamport Timestamps!" ]
[ 6, 6, 6, -1, -1, -1, -1, -1 ]
[ 2, 4, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkxF5RgC-", "iclr_2018_HkxF5RgC-", "iclr_2018_HkxF5RgC-", "iclr_2018_HkxF5RgC-", "S1gFF5fmM", "rkoKvifef", "BJ6cxWFlM", "H1PcMAKeG" ]
iclr_2018_ByKWUeWA-
GANITE: Estimation of Individualized Treatment Effects using Generative Adversarial Nets
Estimating individualized treatment effects (ITE) is a challenging task due to the need for an individual's potential outcomes to be learned from biased data and without having access to the counterfactuals. We propose a novel method for inferring ITE based on the Generative Adversarial Nets (GANs) framework. Our method, termed Generative Adversarial Nets for inference of Individualized Treatment Effects (GANITE), is motivated by the possibility that we can capture the uncertainty in the counterfactual distributions by attempting to learn them using a GAN. We generate proxies of the counterfactual outcomes using a counterfactual generator, G, and then pass these proxies to an ITE generator, I, in order to train it. By modeling both of these using the GAN framework, we are able to infer based on the factual data, while still accounting for the unseen counterfactuals. We test our method on three real-world datasets (with both binary and multiple treatments) and show that GANITE outperforms state-of-the-art methods.
accepted-poster-papers
The reviewers agree that the method is original and mostly well communicated, but have some doubts about the significance of the work.
val
[ "rk3S-gKez", "ryaoluFgG", "SyIFK-9lG", "HJAYMyyfz", "rkl8zJyfM", "S1VTby1fG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Summary:\nThis paper proposes to estimate the individual treatment effects (ITE) through\ntraining two separate conditional generative adversarial networks (GANs). \n\nFirst, a counterfactual GAN is trained to estimate the conditional distribution \nof the potential outcome vector, which consists of factual outcome and all \nother counterfactual outcomes, given 1) the feature vector, 2) the treatment \nvariable, and 3) the factual outcome. After training the counterfactual GAN, \nthe complete dataset containing the observed potential outcome vector can be \ngenerated by sampling from its generator.\n\nSecond, a ITE GAN is trained to estimate the conditional distribution of the\npotential outcome vector given only the feature vector. In this way, for any\ntest sample, its potential outcomes can be estimated using the generator of the\ntrained ITE GAN. Given its pontential outcome vector, its ITE can be estimated\nas well.\n\nExperimental results on the synthetic data shows the proposed approach, called\nGANITE, is more robust to the existence of selection bias, which is defined as\nthe mismatch between the treated and controlled distributions, compared to\nits competing alternatives. Experiments on three real world datasets show\nGANITE achieves the best performance on two datasets, including Twins and Jobs.\nIt does not perform very well on the IHDP dataset. The authors also run\nexperiments on the Twins dataset to show the proposed approach can estimate the\nmultiple treatment effects with better performance.\n\nComments\n1) This paper is well written. The background and related works are well\norganized. \n\n2) To the best of my knowledge, this is the first work that applies \nGAN to ITE estimation.\n\n3) Experiments on the synthetic data and the real-world data demonstrate the\nadvantage of the proposed approach.\n\n4) The authors directly present the formulation without providing sufficient \nmotivations. Could the authors provide more details or intuitions on why GAN \nwould improve the performance of ITE estimation compared to approaches that\nlearn representations to minimize the distance between the distributions of\ndifferent treatment groups, such as CFR_WASS?\n\n5) As is pointed out by the authors, the proposed approach does not perform\nwell when the dataset is small, such as the IHDP data. However, in practice, a\nlot of real-world datasets might have small sample size, such as the LaLonde\ndataset. Did the authors plan to extend the model to handle those small-sized \ndata sets without completely changing the model.\n\n6) When training the ITE GAN, the objective is to learn the conditional\ndistribution of the potential outcome vector given the feature vector. Did the\nauthors try the option of replacing ITE GAN with multi-task regression? Will\nthe performance become worse using multi-task regression? I think this \ncomparison would be a sanity check on the utility of using GAN instead of \nregression models for ITE estimation.", "This paper presents GANITE, a Generative Adversarial Network (GAN) approach for estimating Individualized Treatment Effects (ITE). This is achieved by utilising a GAN to impute the `missing` counterfactuals, i.e. the outcomes of the treatments that were not observed in the training (i.e. factual) sample, and then using another GAN to estimate the ITE based on this `complete` dataset. 
The authors then proceed to combine the two GAN objectives with extra supervised losses to better account for the observed data; the GAN loss for the `G` network has an extra term for the `G` network to better predict the factual outcome `y_f` (which should be easy to do given the fact that y_f is an input to the network) and the GAN loss for the `I` network has an extra term w.r.t. the corresponding performance metric used for evaluation, i.e. PEHE for binary treatment and MSE for multiple treatments. This model is then evaluated in extensive experiments.\n\nThe paper is reasonably well-written with clear background and diagrams for the overall architecture. The idea is novel and seems to be relatively effective in practice, although I do believe that it has a lot of moving parts and introduces a considerable number of hyperparameters (which generally are problematic to tune in causal inference tasks). Other than that, I have the following questions and remarks:\n- I might have misunderstood the motivation but the GAN objective for the `G` network is a bit weird; why is it a good idea to push the counterfactual outcomes close to the factual outcomes (which is what the GAN objective is aiming for)? Intuitively, I would expect that different treatments should have different outcomes and the distribution of the factual and counterfactual `y` should differ.\n- According to which metric did you perform hyper-parameter optimization on all of the experiments? \n- From the first toy experiment that highlights the importance of each of the losses, it seems that the addition of the supervised loss greatly boosts the performance, compared to just using the GAN objectives. What was the relative weighting on those losses in general? \n- From what I understand the `I` network is necessary for out-of-sample predictions where you don’t have the treatment assignment, but for within-sample prediction you can also use the `G` network. What is the performance gap between the `I` and `G` networks on the within-sample set? Furthermore, have you experimented with constructing `G` in a way that can represent `I` by just zeroing the contribution of `y_f` and `t`? In this way you can tie the parameters and avoid the two-step process (since `G` and `I` represent similar things).\n- For figure 2, what were the hyperparameters for CFR? CFR includes a specific knob to account for the larger mismatches between treated and control distributions. Did you do hyper-parameter tuning for all of the methods in this task?\n- I would also suggest not using “between” when referring to the KL-divergence as it is not a symmetric quantity.\n\nAlso it should be pointed out that for IHDP the standard evaluation protocol is 1000 replications (rather than 100), so there might be some discrepancy in the scores due to that.", "This paper introduces a generative adversarial network (GAN) for estimating individualized treatment effects (ITEs) by (1) learning a generator that tries to fool a discriminator with (feature, treatment, potential outcome) vectors, and (2) learning a GAN for the treatment effect. In my view, the counterfactual component is the interesting and original component, and the results show that the ITE GAN component further improves performance (marginally but not significantly). The analysis is conducted on semi-synthetic data sets created to match real data distributions with synthetically introduced selection bias, and includes extensive experimentation. 
While the results show worse performance compared to the existing literature in the experiment with small data sizes, the work does show improvements on larger data sets. However, Table 5 in the appendix suggests these results are not significant when considering average treatment effect estimation (eATE).\n\nQuality: good. Clarity: acceptable. Originality: original. Significance: marginal.\n\nThe ITE GAN does not significantly outperform the counterfactual GAN alone (in the S and GAN loss regime), and in my understanding the counterfactual GAN is the particularly innovative component here, i.e., can the algorithm generate counterfactual outcomes from x and noise effectively enough to be indistinguishable? I wonder if the paper should focus on this in isolation to better understand and characterize this contribution.\n\nWhat is the significance of bold in the tables? I'd remove it if it's just to highlight which method is yours.\n\nThe Discussion section should be called \"Conclusion\" and, space permitting, a Discussion section should be written.\nE.g. exploration of the form of the loss when k>2, or when k is exponential, e.g. a {0,1}^c hypercube for c potentially related treatment options in an order set. \nE.g. implications of underperformance in settings with small data sets. We have lots of large data sets where ground truth is unknown, and relatively more small data sets where we can identify ground truth at some cost.\nE.g. discussion of Table 2 (ITEs), where GANITE is outperforming the methods (at least on large data sets), and Table 5 (ATEs), which does not show the same result, is warranted. Why might we expect this to be the case?", "Answer 1: Inherent to the approach of learning a balanced representation is that the representation must trade off between predictive accuracy and bias. This is because it will often be the case that information that is biased is also highly predictive (in fact, in the medical setting this is precisely why it is biased - because the doctors will assign treatments based on predictive features). GANITE, on the other hand, is not forced to make this bias trade-off and so, as shown in Figure 2, is able to outperform methods such as CFR_WASS, particularly when the bias is high.\n\nA further advantage of GANITE is that PEHE can be estimated from the generated counterfactuals of the “G” network. Therefore, we can directly optimize the hyperparameters that minimize the estimated PEHE, which is the performance metric for ITE estimation. On the other hand, existing work such as CFR_WASS [1] cannot directly optimize PEHE.\n\n[1] Shalit, Uri, Fredrik Johansson, and David Sontag. \"Estimating individual treatment effect: generalization bounds and algorithms.\" ICML, 2016.\n\nAnswer 2: In the revised manuscript we will show improved results for GANITE on small datasets by searching for hyper-parameters in a larger space.\n\nAnswer 3: The ITE GAN can be replaced by any regression method. We show in Table 1 that the ITE GAN outperforms the alternative of replacing it with a multi-layer perceptron (MLP) - this corresponds to Row 1 and Column 3 of Table 1 (S loss only). However, this is not the only reason that we use the ITE GAN instead of other regression models. The ITE GAN allows us to estimate the distribution of the potential outcomes, rather than just the expectation, which gives us access to, for example, the variance, capturing the underlying variability of the potential outcomes. 
We believe this is very important information when a decision about treatment assignments needs to be made [1]. We will try to highlight this more in the revised manuscript.", "Answer 1: We acknowledge that, due to the lack of ground truth, it is often difficult in causal inference tasks to optimize the hyper-parameters. This is because we never have access to the true loss function (in our case PEHE or MSE) that we are trying to minimize, and so the difficulty arises of what metric to tune hyperparameters with respect to, in the absence of the target loss. One of the advantages of GANITE is that our target loss (PEHE or MSE) can be estimated from the generated counterfactuals, unlike other methods such as in [1]. Therefore, we can directly optimize the hyperparameters that minimize this estimated PEHE/MSE - exact details of our hyper-parameter optimization are given in the Appendix. In the revised manuscript, we will state the optimal values for the hyper-parameters that we found for each dataset using greedy search. \n\n[1] Shalit, Uri, Fredrik Johansson, and David Sontag. \"Estimating individual treatment effect: generalization bounds and algorithms.\" ICML, 2016.\n\nAnswer 2: We think that our explanation in the manuscript may not have been easy to understand. We will break section 4 into two subsections that explain the “G” and “I” networks separately. We agree that different treatments will have different outcomes; however, we think the misunderstanding has come from confusing “factual/counterfactual” with different treatment assignments. It should be noted that “factual” and “counterfactual” do not correspond to specific treatments – for any given sample, it is possible that any treatment is the factual one. It therefore makes sense to try and push counterfactual outcomes from one sample toward the factual outcomes from other (similar) samples for the same treatments. We achieve this by making the objective of “G” to generate counterfactuals in a way that, given the whole vector (factuals and counterfactuals), the discriminator cannot distinguish which element is factual.\n\nWe think there may also be some confusion around the “S loss” used for “G”. Due to the structure of “G”, it outputs a full vector of potential outcomes, and so it not only outputs counterfactuals, but also gives a value for the one factual that was used as input. We account for this by using the “S loss” to force the generated outcome for the factual treatment assignment to be close to the factual outcome actually observed. This is because, conditional on observing y_f, the component of y corresponding to y_f should clearly be equal to y_f.\n\nAnswer 3: As can be seen in Answer 1, GANITE generates proxies for the counterfactuals and this allows us to estimate the PEHE directly. We minimized the estimated PEHE over the hyper-parameter space. We will clarify this in the revised paper.\n\nAnswer 4: In the revised manuscript, we will include the optimal hyper-parameters (alpha and beta) that we found (using greedy search) for each dataset.\n\nAnswer 5: The tasks that “G” and “I” perform are fundamentally different. “G” predicts outcomes conditional on the features, treatment assignment and outcome of the chosen treatment, (x, t, y_f). “I” predicts outcomes conditional only on the features, x, and so it would be expected for “G” to perform better than “I” when the task is predicting based on (x, t, y_f). This is the only comparison we can make, since “G” is not capable of predicting with only x. 
Furthermore, “G” and “I” are not at all independent: “I” is trained on a dataset generated by “G”, and so “I” is being forced to “fit” to the outcomes “G” has generated. This means that any errors “G” has made in-sample will be pushed forward to “I”.\n\nHowever, the performance gap between the two will indicate how well “I” is able to learn from “G”, which we believe is an interesting question, and so we will add results for this comparison in the Appendix of the revised manuscript. \n\nAnswer 6: We did think about this idea. In order to zero out the contribution of “y_f” and “t”, we need to marginalise out the distributions of “y_f” and “t” conditional on “x”, i.e. P(y_f, t | x), because P(y | x) = ∫ P(y | x, y_f, t) P(y_f, t | x). Therefore, in order to zero out “y_f” and “t” we would need to learn P(y_f, t | x). This requires learning a model, and we do not believe this would be any simpler than our proposed structure - in both cases we would still have 2 learning stages.\n\nWe will try this idea (zeroing out the contribution of “y_f” and “t”) and report the results in the Appendix of the revised manuscript.\n\nAnswer 7: We follow the code published in https://github.com/clinicalml/cfrnet. We followed the parameter search process suggested in the GitHub repository using cfr_param_search.py. We will clarify this in the revised manuscript.\n\nAnswer 8: We will revise it. Thanks.\n\nAnswer 9: For IHDP, we indeed do 1000 replications and report the results. For the other experiments (Twins and Jobs), we do 100 replications and report the results. We will clarify this in the revised manuscript.", "Answer 1: We agree with the reviewer’s comment that the most innovative part of our GANITE structure is the counterfactual GAN component. However, as shown in Table 1, using the ITE GAN does improve the PEHE over using just the counterfactual GAN (see Row 1, Column 3 of Table 1). On top of this, we believe that there is a further novelty in the ITE GAN - using it allows us to estimate the conditional distribution of the potential outcomes, rather than just the expectation. This allows us to estimate the uncertainty in the true distribution of the outcomes, which is important to know when deciding which treatments to assign. We will highlight this point in the revised manuscript.\n\nAnswer 2: Bold is just to highlight the results of our model. In the revised manuscript, we will use * to highlight statistically significant improvement(s) and remove the bold.\n\nAnswer 3: We will do this in the revised manuscript. \n\nAnswer 3-1: We agree with the reviewer that this is an interesting discussion point and will add it to the discussion at the end of the revised manuscript.\n\nAnswer 3-2: First, we would like to highlight that inherent to this problem is the fact that ground truth is not available, and so in the small datasets in which we can identify the ground truth, GANITE is not needed. We therefore believe that what is important is its performance on the large datasets where ground truth is often impossible to identify. We will, however, show improvements for GANITE on smaller datasets by searching for hyper-parameters in a larger space.\n\nAnswer 3-3: The problem we address with GANITE is to estimate the ITE. 
We used the ATE performance as a sanity check for our method - and believe it passes the sanity check, being competitive with most other methods - but do not believe that it is an important metric for distinguishing models where the task is predicting treatment effects on an individual level (and so we only included these results in the Appendix). To highlight why ATE is not a good metric for comparison of ITE methods, we give a simple example. Consider the model that, for each treatment, simply predicts the population mean of the observed outcomes. Then this will often be a highly underfitted model for the task of ITE prediction, since there will be many samples that deviate significantly from this mean; however, the ATE will be close to optimal (in this case bias hasn’t actually been accounted for)." ]
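To make the ATE-versus-PEHE point above concrete, here is a minimal, self-contained Python sketch; the data-generating process is our own illustrative assumption, not taken from the GANITE paper. A predictor that outputs the per-treatment population mean scores almost perfectly on ATE while scoring badly on PEHE, because individual effects deviate strongly from the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)  # a single covariate

# Hypothetical ground-truth potential outcomes: the individual
# treatment effect (y1 - y0) flips sign with the covariate x.
y0 = x
y1 = x + 2.0 * np.sign(x)  # ITE is +2 for x > 0, -2 for x < 0

# "Population mean" predictor: one constant per treatment arm.
y0_hat = np.full(n, y0.mean())
y1_hat = np.full(n, y1.mean())

true_ite = y1 - y0
pred_ite = y1_hat - y0_hat

ate_error = abs(pred_ite.mean() - true_ite.mean())  # ~0 by construction
pehe = np.mean((pred_ite - true_ite) ** 2)          # ~4: wrong for almost every individual

print(f"ATE error: {ate_error:.4f}")
print(f"PEHE:      {pehe:.4f}")
```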
[ 6, 6, 6, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1 ]
[ "iclr_2018_ByKWUeWA-", "iclr_2018_ByKWUeWA-", "iclr_2018_ByKWUeWA-", "rk3S-gKez", "ryaoluFgG", "SyIFK-9lG" ]
iclr_2018_S18Su--CW
Thermometer Encoding: One Hot Way To Resist Adversarial Examples
It is well known that it is possible to construct "adversarial examples" for neural networks: inputs which are misclassified by the network yet indistinguishable from true data. We propose a simple modification to standard neural network architectures, thermometer encoding, which significantly increases the robustness of the network to adversarial examples. We demonstrate this robustness with experiments on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets, and show that models with thermometer-encoded inputs consistently have higher accuracy on adversarial examples, without decreasing generalization. State-of-the-art accuracy under the strongest known white-box attack was increased from 93.20% to 94.30% on MNIST and 50.00% to 79.16% on CIFAR-10. We explore the properties of these networks, providing evidence that thermometer encodings help neural networks to find more-non-linear decision boundaries.
accepted-poster-papers
This paper is borderline. The reviewers agree that the method is novel and interesting, but have concerns about scalability and weakness to attacks with larger epsilon. I will recommend accepting, but I think the paper would be well served by ImageNet experiments, and I hope the authors are able to include these in the final version.
val
[ "ByzXBMDxf", "HJDuim3lM", "Bk9IXvzWf", "H16--t2Xz", "HkYs0dnXz", "r1fzgYnQM", "ryPcrG9xG", "B19zAPflM", "HkALrvckf", "SJJbNyK0Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "author", "public" ]
[ "This paper studies input discretization and white-box attacks on it to make deep networks robust to adversarial examples. They propose one-hot and thermometer encodings as input discretization and \nalso propose DGA and LS-PGA as white-box attacks on it.\nRobustness to adversarial examples for thermometer encoding is demonstrated through experiments.\n\nThe empirical fact that thermometer encoding is more robust to adversarial examples than one-hot encoding,\nis interesting. The reason why thermometer performs better than one-hot should be pursued more.\n\n[Strong points]\n* Propose a new type of input discretization called thermometer encodings.\n* Propose new white-box attacks on discretized inputs.\n* Deep networks with thermometer encoded inputs empirically have higher accuracy on adversarial examples.\n\n[Weak points]\n* No theoretical guarantee for thermometer encoding inputs.\n* The reason why thermometer performs better than one-hot has not unveiled yet.\n\n[Detailed comments]\nThermometer encodings do not preserve pairwise distance information.\nConsider the case with b_1=0.1, b_2=0.2, b_3=0.3, b_4=0.4 and x_i=0.09, x_j=0.21 and x_k=0.39.\nThen, 0.12=|x_j-x_i|<|x_k-x_j|=0.18 but ||tau(b(x_i))-tau(n(x_j))||_2=sqrt(2)>1=||tau(b(x_k))-tau(n(x_j))||_2.", "This is a beautiful work that introduces both (1) a novel way of defending against adversarial examples generated in a black-box or white-box setting, and (2) a principled attack to test the robustness of defenses based on discretized input domains. Using a binary encoding of the input to reduce the attack surface is a brilliant idea. Even though the dimensionality of the input space is increased, the intrinsic dimensionality of the data is drastically reduced. The direct relationship between robustness to adversarial examples and intrinsic dimensionality is well known (paper by Fawzi.). This article exploits this property nicely by designing an encoding that preserves pairwise distances by construction. It is well written overall, and the experiments support the claims of the authors. \n\nThis work has a crucial limitation: scalability.\nThe proposed method scales the input space dimension linearly with the number of discretization steps. Consequently, it has a significant impact on the number of parameters of the model when the dimensionality of the inputs is large. All the experiments in the paper report use relatively small dimensional datasets. For larger input spaces such as Imagenet, the picture could be entirely different:\n\n\t- How would thermometer encoding impact the performance on clean examples for larger dimensionality data (e.g., Imagenet)?\n\t- Would the proposed method be significantly different from bit depth reduction in such setting? \n\t- What would be the impact of the hyper-parameter k in such configuration?\n\t- Would the proposed method still be robust to white box attack?\n\t- The DGA and LS-PGA attacks look at all buckets that are\nwithin ε of the actual value, at every step. Would this be feasible in a large dimensional setting? More generally, would the resulting adversarial training technique be practically possible?\n\nWhile positive results on Imagenet would make this work a home run, negative results would not affect the beauty of the proposal and would shed critical light on the settings in which thermometer encoding is applicable. 
I lean on the accept side, and I am willing to increase the score greatly if the above questions are answered.", "The authors present an in-depth study of discretizing / quantizing the input as a defense against adversarial examples. The idea is that the threshold effects of discretization make it harder to find adversarial examples that only make small alterations of the image, but also that it introduces more non-linearities, which might increase robustness. In addition, discretization has little negative impact on the performance on clean data. The authors also propose a version of single-step or multi-step attacks against models that use discretized inputs, and present extensive experiments on MNIST, CIFAR-10, CIFAR-100 and SVHN, against standard baselines and, on MNIST and CIFAR-10, against a version of quantization in which the values are represented by a small number of bits.\n\nThe merit of the paper is that the study is rather comprehensive: a large number of datasets were used, two types of discretization were tried, and the authors propose a better attack mechanism that seems reasonable considering the defense they study. The two main claims of the paper, namely that discretization doesn't hurt performance on natural test examples and that better robustness (in the author's experimental setup) is achieved through the discretized encoding, are properly backed up by the experiments.\n\nYet, the applicability of the method in practice is still to be demonstrated. The threshold effects might imply that small perturbations of the input (in the l_infty sense) will not have a large effect on their discretized version, but it may also go the other way: an opponent might be able to greatly change the discretized input without drastically changing the input. Figure 8 in the appendix is a bit worrisome on that point, as the performance of the discretized version drops rapidly to 0 when the opponent gets a bit stronger. Did the authors observe the same kind of behavior on other datasets? What would the authors propose to mitigate this issue? To what extent are the good results exhibited in the paper valid over a wide range of opponent strengths?\n\nminor comment:\n- the experiments on CIFAR-100 in Appendix E are carried out by mixing adversarial / clean examples while training, whereas those on SVHN in Appendix F use adversarial examples only.\n", "Thank you for your feedback! The finding that thermometer encodings perform better than one-hot encodings was primarily an empirical finding, and further exploration is certainly needed. The goal of this work was primarily to establish the strength of discretized encodings, and thermometer encodings in particular. We currently believe that the primary reason that thermometer encodings perform better and exhibit smoother convergence than one-hot encodings in the adversarial defense regime is the superior inductive bias; nearby pixel values typically have similar semantic content.\n\nYou are correct that thermometer encodings do not preserve pairwise distance information; our statement was true only in the case where the number of discretization levels is equal to the number of possible pixel values, as in the PixelRNN paper. 
Since this is not realistic in most settings, we have weakened our claim to instead state that thermometer encodings maintain ordering information, which is true in all cases.", "Thank you for your feedback!\n\n> ...the performance of the discretized version drops rapidly to 0 when the opponent gets a bit stronger.\n\nWe observe this behavior on the datasets we tested on, which were MNIST and CIFAR. (CIFAR does not drop all the way to 0, but does drop sharply.) We believe that this is an expected result, stemming from the intuition that the relationship between the input and the loss is highly nonlinear. When presented with an input which it has never been exposed to (i.e. a pixel has been moved into a bucket that is beyond the adversarial training threshold), the effect on the loss is highly random. Many of these perturbed inputs will increase the loss, and it is therefore easy to find an adversarial example.\n\nControlling for the wide range of opponent’s strengths is an important issue, one which is endemic to adversarial defenses in general. The “standard setting” for the adversarial example problem (in which we constrain the L-infinity norm of the perturbed image to an epsilon ball around the original image) was designed to ensure that any adversarially-perturbed image is still recognizable as its original image by a human. However, this artificial constraint excludes many other potential attacks that also result in human-recognizable images. State-of-the-art defenses in the standard setting can still be easily defeated by non-standard attacks; for recent examples of this, see ICLR submission “Adversarial Spheres” (appendix A), as well as “Adversarial Patch” by Brown et al. (https://arxiv.org/abs/1712.09665).\n\nWith this in mind, we believe that the fact that the performance of thermometer-encoded models degrades more quickly than that of vanilla models beyond the training epsilon is a weakness, but no worse in practice than other defenses. A “larger epsilon” attack is just one special case of a “non-standard” attack; there is an enormous number of other non-standard attacks, some of which are more effective against vanilla models, some of which are more effective against thermometer encodings, and some of which are devastating to both. If we permit non-standard attacks, a fair comparison would show that all current approaches are easily breakable. There is nothing special about the “larger epsilon” attack that makes a vulnerability to this non-standard attack in particular more problematic than vulnerabilities to other non-standard attacks, in practice.\n\nAdditionally, on the CIFAR dataset, we found that even though discretized inputs are impacted much more severely by examples perturbed by more than the training threshold, the discretized models are sufficiently strong to begin with that they still outperform real-valued models even after this vulnerability has been exploited. (See updated Figure 8b.) CIFAR is more reflective of real-world datasets, so even with this weakness, thermometer-encoded models may outperform real-valued models in practice.\n\nBased on your feedback, we updated our submission to include this discussion in Appendix G, and added Figure 8b showing the CIFAR results. Also, we discovered a bug which caused the unquantized attack in Figure 8 to be too weak; essentially, we were using a fixed step size of 0.01 for 40 steps, which caused the perturbation to never hit the boundary for epsilon > 0.4. We have updated the figure to reflect the correct values. 
(The fixed results are qualitatively equivalent, so this change does not affect the conclusions.)\n", "Thank you for your feedback! We agree that learning more about the scaling properties would be enormously useful, but unfortunately, running adversarial training on full-size ImageNet simply proved too challenging. In our first attempt, we were forced to reduce our batch size in order to fit everything into memory, but this led to poor convergence. We have now scaled up our resources in order to run it properly, and we hope to have results within the next few weeks; however, these experiments are still ongoing, and we have no results to report at this time. Unfortunately, since most of your concerns are empirical, this means that we cannot properly address them.\n\nOne note is that when setting up these experiments, the primary bottleneck was the memory consumption of the model itself, especially under multiple steps of attack. It’s true that increasing the number of discretization levels causes a linear increase in memory consumption proportional to the size of the input, but this is negligible compared to the memory usage of the actual model: in our Wide ResNet implementation, we estimate the memory used by the first layer (the only part whose size is multiplied by the number of discretization levels) to be only 1-2% of the overall memory used. Both vanilla and 16-level-thermometer-encoded inputs were subject to approximately the same constraints on batch sizes, steps of attack, etc., when using a wide ResNet with width of 4 and depth of 16.\n\nIn addition to having a relatively small proportional memory increase, input discretization requires relatively few additional model parameters: 0.03% extra parameters for MNIST, 0.08% for CIFAR-10 and CIFAR-100, and 2.3% for SVHN, as described in Section 5. This indicates that the model size will not be a bottleneck in scaling discretized models.
We aren't sure whether you mean that adversarial training gives limited improvement to thermometer models or that thermometer models give limited improvement to adversarial training. Neither of these is supported by Table 14. Adversarial training causes thermometer models to become state of the art in all three categories in Table 14. Likewise, thermometer coding causes adversarial training to become state of the art in all three categories. If you're concerned that the difference caused by thermometer coding on black-box adversarial examples is small enough that it might be statistically insignificant, we can add error bars showing the 95% confidence interval. We can tell you ahead of time that these error bars do not overlap. SVHN has over 26,000 test examples, so the standard error of the test accuracy is smaller than on datasets with smaller test sets like MNIST and CIFAR.\n\nAgain, thank you for your interest.", "As I understand these results, I think that the SVHN results in Table 14 are very curious, and would like to see more analysis there. The discrepancy between white-box and black-box is quite odd, as are the limited gains of the PGD-trained thermometer target model." ]
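As a companion to the pairwise-distance counterexample in the first review above, the sketch below (our own illustration; the encoding convention is an assumption, chosen to match the reviewer's notation) implements thermometer encoding and reproduces the reviewer's numbers, showing that the encoding preserves ordering but not pairwise distances.

```python
import numpy as np

def thermometer(x, thresholds):
    """Thermometer-encode a scalar x: one indicator per threshold,
    set to 1 for every threshold that x meets or exceeds."""
    return (x >= np.asarray(thresholds)).astype(float)

thresholds = [0.1, 0.2, 0.3, 0.4]  # b_1 .. b_4 from the review
xi, xj, xk = 0.09, 0.21, 0.39

ti, tj, tk = (thermometer(v, thresholds) for v in (xi, xj, xk))
# xi -> [0,0,0,0], xj -> [1,1,0,0], xk -> [1,1,1,0]

print(abs(xj - xi), abs(xk - xj))  # 0.12 < 0.18 in input space
print(np.linalg.norm(tj - ti),     # sqrt(2) ~ 1.414 ...
      np.linalg.norm(tk - tj))     # ... > 1 in encoding space
```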
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S18Su--CW", "iclr_2018_S18Su--CW", "iclr_2018_S18Su--CW", "ByzXBMDxf", "Bk9IXvzWf", "HJDuim3lM", "HkALrvckf", "iclr_2018_S18Su--CW", "SJJbNyK0Z", "iclr_2018_S18Su--CW" ]
iclr_2018_HyrCWeWCb
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.
accepted-poster-papers
This paper adapts (Nachum et al 2017) to continuous control via TRPO. The work is incremental (not in the dirty sense of the word popular amongst researchers, but rather in the sense of "building atop a closely related work"), nontrivial, and shows empirical promise. The reviewers would like more exploration of the sensitivity of the hyper-parameters.
train
[ "rJ-4JL_Vf", "ByDPYkUxG", "H11zfWQZf", "B1tQ10rVG", "H1ccXfmeG", "HkF_6L6Qz", "BJ--aUT7M", "Hk772U6XM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "These comments continue to reveal some fundamental misunderstandings we should clarify.\n\nR2: \"Our paper does not present a policy gradient method\" <- This is obviously untrue.\n\n- To correct such a misunderstanding, one first needs to realize that policy gradient algorithms update model parameters along the gradient that maximizes expected reward (or possibly a regularized variant thereof). By contrast, the Trust-PCL updates are not designed to follow any gradient that maximizes (even regularized) expected reward. Instead, they are designed to minimize a temporal consistency error, much like Q-learning algorithms.\n\n- If attempting to suggest that any algorithm that updates a parametric policy representation is automatically a “policy gradient” method, that would not be consistent with the standard terminology used in RL.\n\n\nR2: \"Unfortunately, TRPO is restricted to the use of on-policy data\" <- There is no such restrictions. Actually, the proposed Trust-PCL does NOT deal with off-policy data, but only that the authors believe that it can handle off-policy data and thus feed it with such data. This is the same for TRPO. Off-policy data can also be fed to TRPO to see how well TRPO works, which is a must baseline.\n\n- It is again helpful to understand that Trust-PCL is off-policy in the same sense that Q-learning is off-policy: both use off-policy data to train models to satisfy temporal consistencies. Models learned in this way then induce a policy that is optimal iff all the temporal consistencies are satisfied.\n\n- The consistencies expressed in this paper (Eq. 21) are off-policy consistencies, in the sense that they do not rely on a specific sampling distribution of actions. Note that the expectations are only with respect to environment stochasticity (which is covered by a replay buffer). If the environment were deterministic, the consistency equations would still be well posed and contain no expectations.\n\n- Other work has also used similar ideas to derive off-policy algorithms:\n- -- Nachum, et al. 2017: “Bridging the Gap Between Value and Policy Based Reinforcement Learning”\n- -- Peters, et al. 2010: “Relative Entropy Policy Search”\n\n- TRPO updates involve importance weights of the form [current policy action probability] / [behavior policy action probability]. For this reason, off-policy training becomes highly unstable if the current policy deviates too far from the behavior policy. We are not aware of any work that trains TRPO in an off-policy manner using a replay buffer. The naive suggestion given above obviously leads to wild instability. If the reviewer is aware of any successful such attempts, a reference would certainly be appreciated, if any exists.\n\n- One could perhaps attempt to interpret the learning rate of TRPO (\\epsilon, the max divergence) as a way to tune the degree of “on-policy” behavior. Under such an interpretation one could then attempt to choose \\epsilon to be as large as possible without incurring instability. However, our experiments have already demonstrated the results of using the best \\epsilon after exhaustive tuning, so there would not be any additional information to be gained through such an interpretation.\n\n\nR2: \"We included 6 standard benchmarks in the paper\" <- They are the simplest ones.\n\n- The tasks we included cover a wide range of difficulties. The results show significant advantages on harder tasks such as Hopper, Walker2d, and Ant. 
These tasks are by no means “simple”, as can be deduced by comparing our results to those in other papers, including many submitted to this year’s ICLR:\n- -- https://openreview.net/forum?id=H1tSsb-AW\n- -- https://openreview.net/forum?id=BkUp6GZRW\n- -- https://openreview.net/forum?id=HJjvxl-Cb\n- -- https://openreview.net/forum?id=B1nLkl-0Z \n", "Clarity \nThe paper is well-written and clear. \n\nOriginality\nThe paper proposes a path consistency learning method with a new combination of entropy regularization and relative entropy. The paper leverages a novel method in determining the coefficient of relative entropy. \n\nSignificance\n- Trust-PCL is overall competitive with state-of-the-art external implementations.\n- Trust-PCL (off-policy) significantly outperforms TRPO in terms of data efficiency and final performance. \n- Even though the paper claims Trust-PCL (on-policy) is close to TRPO, the initial performance of TRPO looks better in HalfCheetah, Hopper, Walker2d and Ant. \n- Some ablation studies (e.g., on entropy regularization and relative entropy) and sensitivity analysis on parameters (e.g. \alpha and update frequency on \phi) would be helpful. \n\nPros:\n- The paper is well-written and clear. \n- Competitive with state-of-the-art external implementations.\n- Significant empirical advantage over TRPO.\n- Open-source code.\n\nCons:\n- No ablation studies. \n", "This paper presents a policy gradient method that employs entropy regularization and entropy constraint at the same time. The entropy regularization on action probability is to encourage the exploration of the policy, while the entropy constraint is to stabilize the gradient.\n\nThe major weakness of this paper is the unclear presentation. For example, the algorithm is never fully described, though a handful of variants are discussed. How the off-policy version is implemented is missing.\n\nIn the experiments, why is the off-policy version of TRPO not compared? Comparing the on-policy results, PCL does not show a significant advantage over TRPO. Moreover, the curves of TRPO are so unstable, which is a bit uncommon. \n\nWhat is the exploration strategy in the experiments? I guess it was softmax probability. However, in many cases, softmax does not perform a good exploration, even if the entropy regularization is added.\n\nAnother issue is the discussion of the entropy regularization in the objective function. This regularization, while helping exploration, does change the original objective. When a policy is required to pass through a very narrow tunnel of states, the regularization that forces a wide action distribution may not yield good performance. Thus it would be more interesting to see experiments on more complex benchmark problems like humanoids.", "The revised paper has made improvements. I thus raise my score a bit. However, there are still some issues:\n\n\"Our paper does not present a policy gradient method\" <- This is obviously untrue.\n\n\"Unfortunately, TRPO is restricted to the use of on-policy data\" <- There are no such restrictions. Actually, the proposed Trust-PCL does NOT deal with off-policy data; it is only that the authors believe that it can handle off-policy data and thus feed it such data. This is the same for TRPO. Off-policy data can also be fed to TRPO to see how well TRPO works, which is a must-have baseline.\n\n\"We included 6 standard benchmarks in the paper\" <- They are the simplest ones. 
\n\nThe major concern is the unclear relationship between the methodology of entropy regularization and entropy constraint and the goal of off-policy learning.\n", "The paper extends softmax consistency by adding in a relative entropy term to the entropy regularization and applying trust region policy optimization instead of gradient descent. I am not an expert in this area. It is hard to judge the significance of this extension.\n\nThe paper largely follows the work of Nachum et al. 2017. The differences (i.e., the claimed novelty) from that work are the relative entropy and the trust region method for training. However, the relative entropy term added seems like a marginal modification. The authors claimed that it satisfies the multi-step path consistency, but the derivation is missing.\n\nI am a bit confused about the way the trust region method is used in the paper. Initially, the problem is written as a constrained optimization problem (12). It is then converted into a penalty form for softmax consistency. Finally, the Lagrange parameter is estimated from the trust region method. In addition, how do you get the Lagrange parameter from epsilon?\n\nThe pseudo code of the algorithm is missing. It would be much clearer if a detailed description of the algorithmic procedure is given.\n\nHow is the performance of Trust-PCL compared to PCL? ", "R3: \"The paper largely follows the work of Nachum et al. 2017. The differences (i.e., the claimed novelty) from that work are the relative entropy and the trust region method for training. However, the relative entropy term added seems like a marginal modification.\"\n\nThe extension of the work of Nachum et al. by including relative entropy is novel and significant because it enables applying softmax consistency to difficult continuous control tasks. Nachum et al. (2017) only evaluated PCL on simple discrete control tasks, and without including the additional trust region term, we were not able to obtain promising results. Our results achieve state-of-the-art in continuous control by substantially outperforming TRPO. Other than the introduction of relative entropy as an implicit trust region constraint, the technique described in Section 4.3 is novel and plays a key role in the success of Trust-PCL.\n\nR3: \"The authors claimed that it satisfies the multi-step path consistency, but the derivation is missing.\"\n\nWe apologize for the lack of clarity. We have updated the paper to expand the derivation of the multi-step consistency over several equations (see Eqs. 16-21).\n\nR3: \"I am a bit confused about the way the trust region method is used in the paper. Initially, the problem is written as a constrained optimization problem (12). It is then converted into a penalty form for softmax consistency. Finally, the Lagrange parameter is estimated from the trust region method. In addition, how do you get the Lagrange parameter from epsilon?\"\n\nTrust-PCL trains towards a trust region objective (Eq. 12 or equivalently Eq. 14) implicitly by training a policy and a value function to satisfy a set of path-wise consistencies on off-policy data (Eq. 21). The Lagrange multiplier \\lambda is easier to work with to formulate the path-wise consistencies, but \\lambda is not constant for a fixed \\epsilon, and \\epsilon is easier and more intuitive to tune. Hence, we describe a technique in Section 4.3 to adjust \\lambda for a given \\epsilon, and in the paper we switch between the constraint and Lagrangian form.\n\nR3: \"The pseudo code of the algorithm is missing. 
It would be much clearer if a detailed description of the algorithmic procedure is given.\"\n\nGood suggestion. We have updated the paper to include a pseudo code of the algorithm in Appendix C. The link to the source code will become available after the blind review as well (footnote 1).\n\nR3: \"How is the performance of Trust-PCL compared to PCL?\"\n\nPCL is equivalent to Trust-PCL with \\epsilon = infinity or \\lambda = 0. Section 5.2.1 shows the effect of different values of \\epsilon on the results of Trust-PCL. It is clear that as \\epsilon increases, the solution quality of Trust-PCL quickly degrades. We found that PCL (corresponding to an even larger \\epsilon) is largely ineffective on the difficult continuous control tasks considered in the paper. This shows the significance of the new technique over the original PCL.\n", "R2: \"This paper presents a policy gradient method that employs entropy regularization and entropy constraint at the same time… \"\n\nOur paper does not present a policy gradient method. Rather, we show that the optimal policy for an expected reward objective regularized with entropy and relative entropy satisfies a set of path-wise consistencies. Then, we propose an off-policy algorithm to implicitly train towards this objective.\n\nR2: \"The major weakness of this paper is the unclear presentation. For example, the algorithm is never fully described, though a handful of variants are discussed. How the off-policy version is implemented is missing.\"\n\nTo improve the clarity of the presentation, we have updated the paper and included a pseudo-code in Appendix C. Moreover, we included the implementation details in Appendix B, and we have released an open-source package with all of the variants of the algorithm for completeness (see footnote 1; the link will become available after the blind review).\n\nR2: \"In the experiments, why is the off-policy version of TRPO not compared?\"\n\nUnfortunately, TRPO is restricted to the use of on-policy data. This is the major limitation of TRPO. We address this limitation by introducing Trust-PCL, which optimizes a trust region objective using off-policy data. This is the major contribution of the paper.\n\nR2: \"Comparing the on-policy results, PCL does not show a significant advantage over TRPO.\"\n\nThe results of Trust-PCL (off-policy) are the key takeaway of the paper, showing that we obtain both stability and sample-efficiency in a single algorithm, significantly outperforming TRPO. We present the results of Trust-PCL (on-policy) for completeness, to give a curious reader a sense of the performance loss when only on-policy data is used. We expect practitioners to only use off-policy Trust-PCL.\n\nR2: \"the curves of TRPO are so unstable, which is a bit uncommon.\"\n\nOur TRPO implementation obtains similar performance compared with other implementations by J. Schulman and rllab. In Table 1, we compare against a number of externally available implementations. We also find the stability of our TRPO curves to be qualitatively similar to those appearing externally.\n\nR2: \"What is the exploration strategy in the experiments?\"\n\nAll of the algorithms in the paper have a model of the policy \\pi_\\theta during training (parameterized as a unimodal Gaussian, as is standard for continuous control). Accordingly, this policy is used to sample actions. Thus, there is no additional exploration injected. This is standard for continuous control RL algorithms like TRPO.\n\nR2: \"I guess it was softmax probability. 
However, in many cases, softmax does not perform a good exploration, even if the entropy regularization is added.\"\n\nPlease note that the multinomial distribution (so-called softmax probability) is standard in *discrete* control to parametrize the policy, but we are mostly considering continuous control problems in this paper. Our policy is parameterized by a unimodal Gaussian, as is standard in the continuous control benchmarks we evaluate.\n\nR2: \"Another issue is the discussion of the entropy regularization in the objective function. This regularization, while helping exploration, does change the original objective. When a policy is required to pass through a very narrow tunnel of states, the regularization that forces a wide action distribution may not yield good performance.\"\n\nAugmenting the expected reward objective using entropy regularization is standard in reinforcement learning. Often the multiplier of entropy is annealed to zero by the end of training to enable learning concentrated policies.\n\nR2: \"Thus it would be more interesting to see experiments on more complex benchmark problems like humanoids.\"\n\nWe included 6 standard benchmarks in the paper: Acrobot, Half Cheetah, Swimmer, Hopper, Walker2D, and Ant. On all of the environments our Trust-PCL (off-policy) algorithm outperforms TRPO in both final reward and sample efficiency. We believe these experiments are enough to demonstrate the promise of the approach.\n", "We thank the reviewer for carefully reading the details of the paper; we greatly appreciate it.\n\nR1: \"Even though the paper claims Trust-PCL (on-policy) is close to TRPO, the initial performance of TRPO looks better in HalfCheetah, Hopper, Walker2d and Ant.\"\n\nTrust-PCL (on-policy) achieves equal or better final reward compared to TRPO, but TRPO has a better initial performance. The results of Trust-PCL (off-policy) are the main point of the paper, showing that we can get both stability and sample-efficiency at the same time in a single algorithm. The presentation of the results for Trust-PCL (on-policy) is to convey the advantage of using off-policy data.\n\nR1: \"Some ablation studies (e.g., on entropy regularization and relative entropy) and sensitivity analysis on parameters (e.g. \\alpha and update frequency on \\phi) would be helpful.\"\n\nSection 5.2.1 of the paper shows the effect of changing \\epsilon on the performance. As discussed in Section 4.3, the value of \\epsilon directly determines \\lambda, the coefficient of relative entropy. The main contribution of the paper is stabilizing off-policy training via a suitable trust region constraint, and hence \\epsilon and \\lambda are the key hyper-parameters. However, we have expanded Section 5.2.1 to include anecdotal experience regarding the values of \\tau and the degree of off/on-policy (determined by \\beta, \\alpha, P).\n" ]
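The exchange above repeatedly refers to the Section 4.3 technique for adjusting the Lagrange multiplier \lambda to match a target constraint \epsilon without spelling it out. The sketch below is only a generic adaptive-penalty scheme of the kind commonly used for this purpose; the update rule and all constants are illustrative assumptions, not the authors' exact method.

```python
def update_lagrange_multiplier(lam, observed_kl, epsilon,
                               factor=1.5, lam_min=1e-6, lam_max=1e3):
    """Multiplicatively adjust the relative-entropy coefficient so that
    the measured divergence from the prior policy tracks the target
    epsilon: strengthen the penalty when the policy moved too far,
    relax it when the policy is overly constrained."""
    if observed_kl > 1.5 * epsilon:
        lam = min(lam * factor, lam_max)
    elif observed_kl < epsilon / 1.5:
        lam = max(lam / factor, lam_min)
    return lam

# Example: the divergence overshoots the trust region, so lambda grows.
lam = update_lagrange_multiplier(lam=0.1, observed_kl=0.3, epsilon=0.1)
print(lam)  # 0.15...
```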
[ -1, 6, 5, -1, 5, -1, -1, -1 ]
[ -1, 4, 4, -1, 1, -1, -1, -1 ]
[ "B1tQ10rVG", "iclr_2018_HyrCWeWCb", "iclr_2018_HyrCWeWCb", "BJ--aUT7M", "iclr_2018_HyrCWeWCb", "H1ccXfmeG", "H11zfWQZf", "ByDPYkUxG" ]
iclr_2018_rk49Mg-CW
Stochastic Variational Video Prediction
Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world video. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.
accepted-poster-papers
Not quite enough for an oral but a very solid poster.
train
[ "SJHVI1WSG", "S1gH28vgM", "r17bOI8yG", "S1riI7OxM", "H1ajkrZEf", "S1W4e5U7f", "HJtZzEW7z", "Hk7A25bmM", "r1ULac-Qf", "rkLWT5Z7f", "HJ1NygMgM", "H1i2t1egM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "author", "author", "public" ]
[ "\nFor comparison with VPN, we did NOT train any model. Instead, the authors of Reed et al. 2017 provided their trained model which we used for evaluation.\n\nIn terms of numbers, the model from Reed et al. 2017 has 119,538,432 while our model has 8,378,497. Hopefully this helps to get a better understanding of the generalizations.", "1) Summary\nThis paper proposed a new method for predicting multiple future frames in videos. A new formulation is proposed where the frames’ inherent noise is modeled separate from the uncertainty of the future. This separation allows for directly modeling the stochasticity in the sequence through a random variable z ~ p(z) where the posterior q(z | past and future frames) is approximated by a neural network, and as a result, sampling of a random future is possible through sampling from the prior p(z) during testing. The random variable z can be modeled in a time-variant and time-invariant way. Additionally, this paper proposes a training procedure to prevent their method from ignoring the stochastic phenomena modeled by z. In the experimental section, the authors highlight the advantages of their method in 1) a synthetic dataset of shapes meant to clearly show the stochasticity in the prediction, 2) two robotic arm datasets for video prediction given and not given actions, and 3) A challenging human action dataset in which they perform future prediction only given previous frames.\n\n\n\n2) Pros:\n+ Novel/Sound future frame prediction formulation and training for modeling the stochasticity of future prediction.\n+ Experiments on the synthetic shapes and robotic arm datasets highlight the proposed method’s power of multiple future frame prediction possible.\n+ Good analysis on the number of samples improving the chance of outputting the correct future, the modeling power of the posterior for reconstructing the future, and a wide variety of qualitative examples.\n+ Work is significant for the problem of modeling the stochastic nature of future frame prediction in videos.\n\n\n\n\n3) Cons:\nApproximate posterior in non-synthetic datasets:\nThe variable z seems to not be modeling the future very well. In the robot arm qualitative experiments, the robot motion is well modeled, however, the background is not. Given that for the approximate posterior computation the entire sequence is given (e.g. reconstruction is performed), I would expect the background motion to also be modeled well. This issue is more evident in the Human 3.6M experiments, as it seems to output blurriness regardless of the true future being observed. This problem may mean the method is failing to model a large variety of objects and clearly works for the robotic arm because a very similar large shape (e.g. robot arm) is seen in the training data. Do you have any comments on this?\n\n\n\nFinn et al 2016 PNSR performance on Human 3.6M:\nIs the same exact data, pre-processing, training, and architecture being utilized? In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38.\n\n\n\nAdditional evaluation on Human 3.6M:\nPSNR is not a good evaluation metric for frame prediction as it is biased towards blurriness, and also SSIM does not give us an objective evaluation in the sense of semantic quality of predicted frames. It would be good if the authors present additional quantitative evaluation to show that the predicted frames contain useful semantic information [1, 2, 3, 4]. 
For example, evaluating the predicted frames for the Human 3.6M dataset to see if the human is still detectable in the image or if the expected action is being predicted could be useful to verify that the predicted frames contain the expected meaningful information compared to the baselines.\n\n\n\nAdditional comments:\nAre all 15 actions being used for the Human 3.6M experiments? If so, the fact that the time-invariant model performs better than the time-variant one may not be due to a consistent action being performed (last sentence of 5.2). The motion performed by the actors in each action highly overlaps (the talking-on-the-phone action may go from sitting to walking a little to sitting again, and so on). Unless only actions such as walking and discussion were used, it is unlikely that the time-invariant z performs better because of a consistent action. Do you have any comments on this?\n\n\n\n4) Conclusion\nThis paper proposes an interesting novel approach for predicting multiple futures in videos; however, the results are not fully convincing in all datasets. If the authors can provide additional quantitative evaluation besides PSNR and SSIM (e.g. evaluation on semantic quality), and also address the comments above, the current score will improve.\n\n\n\nReferences:\n[1] Emily Denton and Vighnesh Birodkar. Unsupervised Learning of Disentangled Representations from Video. In NIPS, 2017.\n[2] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to Generate Long-term Future via Hierarchical Prediction. In ICML, 2017.\n[3] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv preprint arXiv:1710.10196, 2017.\n[4] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In NIPS, 2017.\n\n\nRevised review:\nGiven the authors' thorough answers to my concerns, I have decided to change my score. I would like to thank the authors for a very nice paper that will definitely help the community towards developing better video prediction algorithms that can now predict multiple futures.", "Quality: above threshold\nClarity: above threshold, but experiment details are missing.\nOriginality: slightly above threshold.\nSignificance: above threshold\n\nPros:\n\nThis paper proposes a stochastic variational video prediction model. It can be used for prediction whether or not external actions are available. The inference network is a convolutional net and the generative network uses a previously proposed structure with minor modifications. The results show its ability to sample future frames, and it outperforms other methods in qualitative and quantitative metrics.\n\nCons:\n\n1. It is a nice idea and it seems to perform well in practice, but are there careful experiments justifying the 3-stage training scheme? For example, comparisons with other schemes, like alternating between the 3 stages or dynamically soft-weighting the terms. \n\n2. It is briefly mentioned in the context, but has there been any attempt to incorporate previous-frame context for z, instead of sampling from the prior? This piece seems very important in the scenarios which this paper covers.\n\n3. No details about training (training data size, batches, optimization) are provided in the relevant section, which greatly reduces the reproducibility and understanding of the proposed method. 
For example, it is not clear whether the model can generate samples that are not previously seen in the training set. It is strongly suggested that training details be provided. \n\n4. Minor: if I understand correctly, in the equation in the last paragraph above 3.1, it should be z instead of z_t. \n", "The submission presents a method for video prediction from single (or multiple) frames, which is capable of producing stochastic predictions by means of training a variational encoder-decoder model. Stochastic video prediction is a (still) somewhat under-researched direction, due to its inherent difficulty.\n\nThe method can take on several variants: time-invariant [latent variable] vs. time-variant, or action-conditioned vs unconditioned. The generative part of the method is mostly borrowed from Finn et al. (2016). Figure 1 clearly motivates the problem. The method itself is fairly clearly described in Section 3; in particular, it is clear why conditioning on all frames during training is helpful. As a small remark, however, it remains unclear what the action vector a_t is comprised of, also in the experiments.\n\nThe experimental results are good-looking, especially when looking at the provided web site images. \nThe main goal of the quantitative comparison results (Section 5.2) is to determine whether the true future is among the generated futures. While this is important, a question that remains un-discussed is whether all generated stochastic samples are from realistic futures. The employed metrics (best PSNR/SSIM among multiple samples) can only capture the former, and are also pixel-based, not perceptual.\n\nThe quantitative comparisons are mostly convincing, but Figure 6 needs some further clarification. It is mentioned in the text that \"time-varying latent sampling is more stable beyond the time horizon used during training\". While true for Figure 6b), this statement is contradicted by both Figure 6a) and 6c), and Figure 6d) seems to be missing the time-invariant version completely (or it overlaps exactly, which would also need explanation). As such, I'm not completely clear on whether the time-variant or -invariant version is the stronger performer.\n\nThe qualitative comparisons (Section 5.3) are difficult to assess in the printed material, or even on-screen. The animated images on the web site provide a much better impression of the true capabilities, and I find them convincing.\n\nThe experiments only compare to Reed et al. (2017)/Kalchbrenner et al. (2017), with Finn et al. (2016) as a non-stochastic baseline, but no comparisons to, e.g., Vondrick et al. (2016) are given. Stochastic prediction with generative adversarial networks is a bit dismissed in Section 2 with a mention of the mode-collapse problem.\n\nOverall the submission makes a significant enough contribution by demonstrating a (mostly) working stochastic prediction method on real data.", "Interesting work. A couple of questions about the comparison with Video Pixel Networks (VPN):\n\n- Did you tune the parameters in the VPN model for your specific datasets? Did you try a similar number of hyperparameter combinations for VPN and SV2P?\n\n- Do you have any metrics to suggest that the comparisons between SV2P and VPN are fair? For example, did you see better test set generalization (given the same training error) in both models? Or how many parameters are in the VPN architecture vs SV2P? I guess it's natural to ask if VPN could achieve the same test error if we simply scale up the model.\n\nIt's fine if you don't, I'm just wondering. 
Thanks!", "Thank you for the great comment. We address your old version (since it had some great questions) as well as the updated version. Please let us know if we missed a question and/or if you have more questions/comments.\n\n- Why does it help to sample latents multiple times if the inference procedure is identical at all time steps? Is it simply because you get extra bits of stochasticity?\n\nThank you for the great question. Please note that we are not claiming the time-invariant latent predicts higher quality images. The claim is that it is more stable beyond training time horizon (look at Figure 6b). It is best if we answer this questions intuitively with an example. Think about a simple shape which moves randomly and changes its direction in each time step (e.g. brownian motion). A time-invariant latent should encode the info about *all* of the time steps and the generative model should learn how to decode all of this information, step by step, and therefore it runs out of *information* after all the time-steps which causes the collapse after the training time horizon. However, a time-variant latent only includes information about the *current* time frames and stays stable after any time horizon. However, this pushes the complexity to posterior approximation since it should *find* a distribution aligned with what is happening at training time. That is why the result are not that different (other plots of Figure 6). This can be improved by conditioning the prior or posterior on input (which we are currently working on) or other techniques such as backproping through the best out of multiple samples (e.g. look at Fragkiadaki et al. (2017)).\n\n- The plot in figure 4a shows the KL loss going to 0. This seems odd to me, because the KL term in a VAE usually roughly corresponds with the diversity of the samples. If it's close to 0, then the information passing through the posterior is close to 0, isn't it?\n\nIt does not go to 0 but converges to a small number, usually 3 to 5 (please note that y-axis has a very large scale). Intuitively, the key is to keep the divergence small enough so sampling from prior at test time still makes sense, and big enough so there is enough information to train the generative network. In our experiments we found this magical number to be around 3 to 5. \n\n- In the third phase, where you increase the KL, do you just increase the KL to 1? Or, like in Higgins et al (2016), do you tune the multiplicative constant for the KL term? Did you try any (informal) experiments with other types of pre-training. What did/did not seem to work well?\n\nPlease check (the newly added) Appendix. In the current setting we do not increase KL to 1 but increase it to 1e-3. Regarding the training we also included more information in the updated Figure 4. We tried variations of the proposed training mechanism to see how it affects the training. Besides that (informally) we tried different approaches for KL annealing. Since the explained training mechanism is practical enough, we stopped exploring more. \n\n- In your case, I don't think the latent will capture the type of objects in the scene. Otherwise, the latents could \"conflict\" with the context. \n\nGreat question! We indeed observed the conflict that you mentioned while we were developing the model. e.g. a green circle was being morphed into a red triangle! However, there are is a key remark which prevents this from happening in the current architecture and it’s the reconstruction loss. 
Intuitively, at training time, the posterior should encode the information into a distribution in which *any* sample from it results in a correct answer that minimizes the loss. Therefore, first, it avoids adding any unnecessary information which is accessible during generation (e.g. the shape and color). Second, it encodes all the required info for a correct prediction (e.g. movement). Therefore, as you mentioned, the latent values include only the *movement* info and not the context. This contains more information compared to random bits though. \n\n- In the inference network (top), could you please be more specific about how you transform the Tx64x64x3 tensor into a 32x32x32 tensor (combining the dimensions across time)? Thanks!\n\nYes, we combine the time dimension and use stride-2 downsampling to 32x32.\n", "Nice work! A couple of questions about the architecture and training procedure.\n\n- For the time-variant latent variable case, the paper says that the inference model is q_phi(z_t | x_{0:T}). I want to make sure I'm understanding the time-variant latent setup right - is the inference process exactly the same at all time steps t? This seems a bit puzzling to me. Why does it help to sample latents multiple times if the inference procedure is identical at all time steps? Is it simply because you get extra bits of stochasticity?\n\n- The plot in Figure 4a shows the KL loss going to 0. This seems odd to me, because the KL term in a VAE usually roughly corresponds with the diversity of the samples. If it's close to 0, then the information passing through the posterior is close to 0, isn't it? Furthermore, in the third phase, where you increase the multiplicative constant associated with the KL (beta), it seems surprising that you only need to increase it to 0.001. If beta is only 0.001, shouldn't the KL be fairly high?\n\n- In the inference network (top), could you please be more specific about how you transform the Tx64x64x3 tensor into a 32x32x32 tensor (combining the dimensions across time)? Thanks!\n\n- SV2P uses an unconditional prior when generating samples. The top of page 4 (caption for Figure 3) gives an argument for this choice. However, I don't buy the argument for why the \"filtering process at training time\" (as far as I understand, basically conditioning the prior/posterior on the context) won't work. In particular, the \"extra\" information should come from the frames after the context/seed frames to the end of the video. Could you please explain what kind of experiments you ran (informally) to check that this doesn't work better?\n", "Thank you for your insightful comments and constructive criticism. We updated the paper to address all of your comments. Please let us know if you have any more suggestions or comments. Thanks!\n\n- \"As a small remark, however, it remains unclear what the action vector a_t is comprised of, also in the experiments.\"\n\nThank you for the good point. We’ve updated the paper to clarify what the actions are for each dataset. Please check 5.1 for more clarification. \n\n\n- \"a question that remains un-discussed is whether all generated stochastic samples are from realistic futures\"\n\nWe’ve updated the paper to clarify this issue. Please look at the 2nd paragraph of Section 4 for updates. In short, as we observed empirically from the predicted videos, the output videos are within the realistic possibilities. However, in some cases, the predicted frames are not realistic and are averaging more than one future (e.g. 
first random sample in Figure 1-C).\n\n\n- \"The employed metrics (best PSNR/SSIM among multiple samples) can only capture the former, and are also pixel-based, not perceptual.\"\n\nThank you for the great comment. We’ve updated the paper (please look at Figure 7 and the 6th paragraph of 5.3) to address your comment. In order to investigate the quality difference between SV2P predicted frames and “Finn et al (2016)”, we performed a new experiment in which we used the open-sourced version of the object detector from “Huang et al. (2016)”:\nhttps://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v1_feature_extractor.py\nto detect the humans inside the predicted frames. We used the confidence of this detection as an additional metric to evaluate the difference between different methods. The results of this comparison, which show higher quality for SV2P, can be found in the (newly added) Figure 7. \n\n\n- \"time-varying latent sampling is more stable beyond the time horizon used during training\". While true for Figure 6b), this statement is contradicted by both Figure 6a) and 6c).\n\nThank you for the great question. We’ve updated the paper (last two paragraphs of 5.2) to include your observation. Please note that our original claim was that the time-variant latent seems to be more “stable” beyond the time horizon used during training (which is highly evident in Figure 6b). And we are NOT claiming that the time-variant latent generates “higher quality” results. However, we agree that this stability is not always the case, as is more evident in the late frames of Figure 6a.\n\n\n", "Thank you for your great comments and constructive criticism. We updated the paper to address all of your comments. Please let us know if you have any more suggestions or comments. Thanks!\n\n\n- “No details about training (training data size, batches, optimization) are provided in the relevant section” \n\nThank you for your great comment. To further investigate the effect of our proposed training method, we conducted more experiments by alternating between different steps of the suggested method. The updated Figure 4c reflects the results of these experiments. As can be seen in this graph, the suggested steps help with both stability and convergence of the model.\n\nWe also provided details of the training method in Appendix A to address your comment regarding using soft terms as well as reproducibility. We will also release the code after acceptance. \n\n\n- “It is briefly mentioned in the context, but has there been any attempt to incorporate previous-frame context for z, instead of sampling from the prior? This piece seems very important in the scenarios which this paper covers.”\n\nThis is one of the future work directions mentioned in the conclusion section. We’ve expanded this discussion in the conclusion a bit to address this better.\n\n\n- “Minor: if I understand correctly, in the equation in the last paragraph above 3.1, it should be z instead of z_t”\n\nThank you for the detailed comment. We fixed the typo.\n", "Thank you for your insightful comments and suggestions. We have addressed most of your concerns. Please see our responses below and let us know if you have any further comments on the paper. Thanks!\n\n- \"Additional evaluation on Human 3.6M: PSNR is not a good evaluation metric for frame prediction\"\n\nThank you for this suggestion. We’ve updated the paper (please look at Figure 7 and the 6th paragraph of 5.3) to address your comment. 
In order to investigate the quality difference between SV2P predicted frames and “Finn et al (2016)”, we performed a new experiment in which we used the open-sourced version of the object detector from “Huang et al. (2016)”:\nhttps://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v1_feature_extractor.py\nto detect the humans inside the predicted frames. We used the confidence of this detection as an additional metric to evaluate the difference between different methods. The results of this comparison, which show higher quality for SV2P, can be found in the (newly added) Figure 7. \n\n\n- \"Are all 15 actions being used for the Human 3.6M experiments?\"\n\nWe’ve updated the 2nd bullet point in 5.1 to clear this up in the paper. Yes, we are using all the actions. In regard to changing actions: since the videos are relatively short (about 20 frames), there aren't any videos where the actor changes the behavior in the middle. That said, the identity of the behavior is not the only source of stochasticity, since even within a single action (e.g., walking), the actor might choose to walk at different speeds and in different directions.\n\n\n- \"I would expect the background motion to also be modeled well.”\n\nWe've added a discussion of this in Section 5.3 (paragraph 4). Note that the approximate posterior over z is still trained with the ELBO, which means that it must compress the information in future events. Perfect reconstruction of high-quality images from posterior distributions over latent states is an open problem, and the results in our experiments compare favorably to those typically observed even in single-image VAEs (e.g. see Xue et al. (2016)).\n\n\n- \"Finn et al 2016 PSNR performance on Human 3.6M: In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38\"\n\nFor “Finn et al. (2016)”, we used the open-source version of the code here:\nhttps://github.com/tensorflow/models/tree/master/research/video_prediction\nwhich is a reimplementation of the models used in the Finn et al. ‘16 paper. We are not exactly sure where the discrepancy is coming from. However, we would like to point out that whatever issue resulted in slightly lower PSNR for the deterministic model would have affected our model as well, since we used the same code for the base model. Hence, the comparison is still valid.\n
Furthermore, we show that a CVAE trained from scratch does not work consistently, and propose a pre-training scheme which, in our experiments, consistently finds a good solution.\n\n[1] https://arxiv.org/abs/1705.01352\n[2] https://arxiv.org/abs/1612.01925\n[3] https://arxiv.org/abs/1604.01827", "The authors make the following claim:\n\n\"We believe, our approach is the first latent variable model to successfully demonstrate stochastic multi-frame video prediction on real world datasets.\"\n\nHowever, variational methods have been used before to forecast multiple frames from static images (An Uncertain Future: Forecasting from Static Images using Variational Autoencoders, Walker et al., ECCV 2016). In this ECCV paper, the output space is dense pixel trajectories instead of direct pixels, but the model is trained on realistic videos of human activities. What distinguishes the proposed approach from this prior work? The paper has been cited in the references of other papers cited by the authors.\n\n" ]
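The Tx64x64x3-to-32x32x32 reshaping discussed in the exchange above can be made concrete with a small sketch. This is a minimal illustration and not the authors' code: the frame count, the layer width, and the choice of folding time into the channel axis (the class name FoldTimeAndDownsample included) are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' implementation): fold a T x 64 x 64 x 3
# clip into the channel axis, then use a stride-2 convolution to reach 32x32.
import torch
import torch.nn as nn

class FoldTimeAndDownsample(nn.Module):
    def __init__(self, num_frames: int, out_channels: int = 32):
        super().__init__()
        # After folding time into channels, the input has num_frames * 3 channels.
        self.conv = nn.Conv2d(num_frames * 3, out_channels,
                              kernel_size=3, stride=2, padding=1)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, T, 64, 64, 3) -> (batch, T * 3, 64, 64)
        b, t, h, w, c = video.shape
        x = video.permute(0, 1, 4, 2, 3).reshape(b, t * c, h, w)
        # The stride-2 convolution halves the spatial size: 64x64 -> 32x32.
        return torch.relu(self.conv(x))

net = FoldTimeAndDownsample(num_frames=10)
clip = torch.rand(4, 10, 64, 64, 3)   # a batch of 10-frame clips
features = net(clip)                  # shape: (4, 32, 32, 32)
```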
[ -1, 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1ajkrZEf", "iclr_2018_rk49Mg-CW", "iclr_2018_rk49Mg-CW", "iclr_2018_rk49Mg-CW", "iclr_2018_rk49Mg-CW", "HJtZzEW7z", "iclr_2018_rk49Mg-CW", "S1riI7OxM", "r17bOI8yG", "S1gH28vgM", "H1i2t1egM", "iclr_2018_rk49Mg-CW" ]
iclr_2018_HkXWCMbRW
Towards Image Understanding from Deep Compression Without Decoding
Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods. Since the encoders and decoders in DNN-based compression methods are neural networks with feature-maps as internal representations of the images, we directly integrate these with architectures for image understanding. This bypasses decoding of the compressed representation into RGB space and reduces computational cost. Our study shows that accuracies comparable to networks that operate on compressed RGB images can be achieved while reducing the computational complexity up to 2×. Furthermore, we show that synergies are obtained by jointly training compression networks with classification networks on the compressed representations, improving image quality, classification accuracy, and segmentation performance. We find that inference from compressed representations is particularly advantageous compared to inference from compressed RGB images for aggressive compression rates.
accepted-poster-papers
Some reviewers seem to assign novelty to the compression and classification formulation; however, semi-supervised autoencoders have been used for a long time. Taking the compression task more seriously as is done in this paper is less explored. The paper provides some extensive experimental evaluation and was edited to make the paper more concise at the request of reviewers. One reviewer had a particularly strong positive rating, due to the quality of the presentation, experiments and discussion. I think the community would like this work and it should be accepted.
train
[ "SkE6QMtlG", "r1A9XDwgG", "rJx_tnFeM", "BkCWUB2zM", "HJB3rBhzG", "HkvLrShzM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for addressing most of the issues. I changed my given score from 3 to 6.\n\nSummary:\nThis work explores the use of learned compressed image representation for solving 2 computer vision tasks without employing a decoding step. \n\nThe paper claims to be more computationally and memory efficient compared to the use of original or the decompressed images. Results are presented on 2 datasets \"Imagenet\" and \"PASCAL VOC 2012\". They also jointly train the compression and classification together and empirically shows it can improve both classification and compression together.\n\nPros:\n+ The idea of learning from a compressed representation is a very interesting and beneficial idea for large-scale image understanding tasks. \n\nCons:\n- The paper is too long (13 pages + 2 pages of references). The suggested standard number of pages is 8 pages + 1 page of references. There are many parts that are unnecessary in the paper and can be summarized. Summarizing and rewording them makes the paper more consistent and easier to read:\n ( 1. A very long introduction about the benefits of inferring from the compressed images and examples.\n 2. A large part of the intro and Related work can get merged. \n 3. Experimental setup part is long but not well-explained and is not self-contained particularly for the evaluation metrics. \n “Please briefly explain what MS-SSIM, SSIM, and PSNR stand for”. There is a reference to the Agustsson et al 2017 paper \n “scalar quantization”, which is not well explained in the paper. It is better to remove this part if it is not an important part or just briefly but clearly explain it.\n 4. Fig. 4 is not necessary. 4.3 contains extra information and could be summarized in a more consistent way.\n 5. Hyperparameters that are applied can be summarized in a small table or just explain the difference between the \n architectures that are used.)\n\n- There are parts of the papers which are confusing or not well-written. It is better to keep the sentences short and consistent:\nE.g: subsection 3.2, page 5: “To adapt the ResNet … where k is the number of … layers of the network” can be changed to 3 shorter sentences, which is easier to follow.\nThere are some typos: e.g: part 3.1, fever ---> fewer, \n\n- As it is mentioned in the paper, solving a Vision problem directly from a compressed image, is not a novel method (e.g: DCT coefficients were used for both vision and audio data to solve a task without any decompression). However, applying a deep representation for the compression and then directly solving a vision task (classification and segmentation) can be considered as a novel idea.\n\n- In the last part of the paper, both compression and classification parts are jointly trained, and it is empirically presented that both results improved by jointly training them. However, to me, it is not clear if the trained compression model on this specific dataset and for the task of classification can work well for other datasets or other tasks. \nThe experimental setup and the figures are not well explained and well written. \n\n", "Neural-net based image compression is a field which is about to get hot, and this paper asks the obvious question: can we design a neural-net based image compression algorithm such that the features it produces are useful for classification & segmentation?\n\nThe fact that it's an obvious question does not mean that it's a question that's worthless. In fact, I am glad someone asked this question and tried to answer it. 
\n\nPros:\n- Clear presentation, easy to follow.\n- A very interesting, but obvious, question is explored. \n- The paper is very clear, and uses building blocks which have been analyzed before, which leaves the authors free to explore their interactions rather than each individual building block's properties.\n- Results are shown on two tasks (classification / segmentation) rather than just one (the obvious one would have been to only discuss results on classification), and relatively intuitive results are shown (i.e., more bits = better performance). What is perhaps not obvious is how much impact doubling the bandwidth has (i.e., initially it means more, then later on it plateaus, but much earlier than expected).\n- Joint training of compression + other tasks. As far as I know this is the first paper to talk about this particular scenario.\n- I like the fact that classical codecs were not completely discarded (there's a comparison with JPEG 2K).\n- The discussion section is of particular interest, discussing openly the pros/cons of the method (I wish more papers would be as straightforward as this one).\n\nCons:\n- I would have liked to have a discussion on the effect of the encoder network. Only one architecture/variant was used.\n- For PSNR, SSIM and MS-SSIM I would like a bit more clarity on whether these were done channel-wise, or on the grayscale channel.\n- While runtime is given as a pro, it would be nice for those not familiar with the methods to provide some runtime numbers (i.e., a breakdown of how much time it takes to encode and how much time it takes to classify or segment, but in seconds, not flops). For example, Figure 6 could be augmented with actual runtime in seconds.\n- I wish the authors did a ctrl+F for \"??\" and fixed all the occurrences.\n- One of the things that would be cool to add later on, but that I wished to have been covered, is whether it's possible to learn not only to compress, but also to downscale. In particular, the input to ResNet et al for classification is fixed sized, so the question is -- would it be possible to produce a compact representation to be used for classification given arbitrary image resolutions, and if yes, would it have any benefit?\n\nGeneral comments:\n- The classification bits are all open source, which is very good. However, there are very few neural net compression methods which are open sourced. Would you be inclined to open source the code for your implementation? It would be a great service to the community if yes (and I realize that it could already be open sourced -- feel free to not answer if it may break anonymity, but please take this into consideration).\n", "This is a well-written and quite clear work about how a previous work on image compression using deep neural networks can be extended to train representations which are also valid for semantic understanding. In particular, the authors tackle the classic and well-known problems of image classification and segmentation.\n\nThe work revolves around defining a loss function which initially considers only a trade-off between reconstruction error and total bit-rate. The representations trained with the loss function, at three different operational points, are used as inputs for variations of ResNet (image classification) and DeepLab (segmentation). The results obtained are similar to a ResNet trained directly over the RGB images, and actually with a slight increase of performance in segmentation. 
The most interesting part is the joint training for both compression and image classification.\n\nPROS\nP.1 Joint training for both compression and classification. First time, to the authors' knowledge.\nP.2 Performance on the classification and segmentation tasks is very similar when compared to the non-compressed case with state-of-the-art ResNet architectures.\nP.3 The text is very clear.\nP.4 Experimentation is exhaustive and well-reported.\n\nCONS\nC1. The authors fail to provide a better background regarding the metrics MS-SSIM and SSIM (and PSNR, as well) and their relation to the MSE used for training the network. Also, I missed an explanation about whether high or low values for them are beneficial, as actually the results compared to JPEG and JPEG-2000 differ depending on the experiment.\nC2. The main problem of the work is that, while the whole argument is that in an indexing system it would not be necessary to decompress the representation coded with a DNN, in terms of computation JPEG2000 (and probably JPEG) is much lighter than coding with a DNN, even when considering both the compression and decompression. The authors already point at another work where they explore efficient compression with GPUs, but this point is the weakest one for the adoption of the proposed scheme.\nC3. The paper exceeds the recommendation of 8 pages and expands up to 13 pages, plus references. An effort of compression would be advisable, moving some of the non-core results to the appendixes.\n\nQUESTIONS\nQ1. Do you have any explanation for the big jumps on the plots of Figure 5?\nQ2. Did you try a joint training for the segmentation task as well?\nQ3. Why are the dots connected in Figure 10, but not in Figure 11?\nQ4. Actually the results in Figure 10 do not seem good... or maybe I am not understanding them properly. This is related to C1.\nQ5. There is a broken reference in Section 5.3. Please fix.", "Thank you for your positive feedback. As for your comments, we have addressed them as follows:\n1) The reason for not exploring more encoders is twofold. First, few of the state-of-the-art compression variants of neural networks are open-source, which makes the implementation of different architectures very time consuming and difficult. We picked this particular encoder since it has been used in at least 2 different works, Theis et al. and Agustsson et al. Second, the autoencoder compression methods all have a similar structure, and thus one could expect the performance to be similar for different encoders. This is nevertheless an interesting direction of research worth pursuing.\n2) These were done channel-wise (RGB).\n3) We have not yet had time to do this as the code for the segmentation, classification and compression networks has different levels of optimization (e.g., NCHW vs NHWC tensor layout), so doing a fair comparison is time consuming and involves careful engineering. We can however do this before the camera ready, if the paper is accepted. We also note that the architectures used for compression and inference are very similar, convolutional networks with residual blocks, and therefore FLOPs should be a good proxy metric.\n4) This has been fixed in the revised edition of the paper.\n5) We agree that it would be interesting to learn the downscaling as well; after all, it is just an anti-aliasing kernel (i.e. a convolution with a fixed kernel) followed by a subsampling operation and can be learned as well. 
However, processing the images in full resolution during training would quite significantly increase training times, which are already pushing our limits in terms of compute resources. Another challenge is that the hyperparameters of the ResNet architecture would likely need to be re-tuned. We chose to adhere to the standard setting of the classification literature, so that we could use the hyperparameter settings and training schedules which have already been optimized extensively.", "Thank you for your review; we have considered your comments in the revised version of the paper. Given the improved paper and the positive perspective of the other reviews, we hope you reconsider your rating.\n\nFor specific points:\n\nRegarding paper length: since this is a study paper, we felt it benefited from verbosity. However, we have managed to shorten the paper to 9.5 pages, while keeping the original story intact. We followed most of your suggestions: (1-2) We shortened the introduction and related work; (4) We made Section 4.3 (now Section 4.4) much more concise and moved Figure 4 to the appendix as it did not contain core results of our work. (3) We added a better description of the compression metrics to the experiments section. However, we also moved the compression results to the appendix, and added a more detailed explanation of the metrics there. (5) We also fixed wording in the paper as you suggested and moved hyperparameter settings and details to the appendix, as we felt these details distract from the main message of the paper. In addition to this, we refined the presentation of the joint training results using plots rather than presenting them in text.\n\nAs we mention in the paper, learning from the DCT (of JPEG) has been done before. However, our setting of using features from learned compression networks is significantly different. The DCT of JPEG is simply a linear transform over 8x8 patches, whereas the compressed representation is a feature map from a deep convolutional neural network. This opens directions such as joint learning of compression and inference (see Section 6; a toy sketch of such a joint objective appears after this review block) and warrants a full study of the problem.\n\nTo show that the improvement of joint training generalizes to another task, we added an experiment: We take the (jointly trained) classification network and finetune it for segmentation. The results are shown in Figure 7, where the resulting network significantly outperforms the separately trained network - achieving a significant performance boost of 1.1-1.8% higher mIoU depending on the compression operating point. See Figure 7 and the discussion in Section 6.2 of the revised paper for more details.\nWe emphasize that this generalization is also occurring across datasets, from ILSVRC2012 (classification) to PASCAL VOC (segmentation).\n\nFinally, we made an effort to better clarify and describe the experiments.
While falling back to classical codecs would be cheaper in terms of compute (since a classical encoder+decoder is more efficient than the learned encoder), the story is not so simple, since this would come at the expense of transmitting more data for a given target image quality. This can be crucial, since for mobile devices, data transmission (I/O) is responsible for most energy usage in common applications (Pathak et al., 2012). Since learned compression is still in its infancy, we expect the gap in terms of compression performance between learned methods and classical ones to grow. Furthermore, with the increasing availability of dedicated neural network processing units on devices, deep image compression methods could become as fast as traditional ones.\nC3 We have condensed the paper to remove redundant text and also moved some non-core results to the appendix. \n\nQ1. Yes, the standard learning rate schedule for training the ResNet classification architectures is a constant learning rate divided by a factor of 10 at fixed points in the training (every 8 epochs for our setting). At the point when the learning rate is lowered the validation accuracy increases rapidly, and our validation curves show these jumps clearly.\nThe jumps/differences between operating points are due to more detail being present in images at higher bitrates (higher bpp), and therefore doing inference on them is easier.\nQ2. Yes, we also did experiments for joint training with segmentation that are detailed in the revised version of the paper. In short, we do not train jointly on the segmentation task, but we take the jointly trained classification network (that improves classification) and use that as a starting point for segmentation, showing significant improvement for segmentation. These results are shown in Figure 7.\nQ3. We have made the style of the plots consistent, connecting the dots for both.\nQ4. Figure 10 (also Figure 10 in the revised edition) shows how the compression metrics change when training jointly compared to training only the compression network. It can be seen that training jointly improves the perceptual metrics MS-SSIM and SSIM slightly while PSNR gets slightly worse (higher is better for all metrics). Figure 10’s main point is that the image compression metrics do not get worse on two out of three metrics as we do joint training. At the same time, Figure 7 shows that the inference performance (both segmentation and classification) significantly improves. See Section 6.2 for a thorough discussion in the revised edition. As this is not a core result it was moved to the appendix.\nQ5. We have fixed this in the revised edition of the paper.\n\n(Pathak, A., Hu, Y. C., & Zhang, M. (2012, April). Where is the energy spent inside my app?: Fine grained energy accounting on smartphones with eprof. In Proceedings of the 7th ACM European Conference on Computer Systems (pp. 29-42). ACM.)" ]
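The joint compression-plus-classification training discussed in this thread can be summarized by a combined objective. The sketch below is an assumed form, not the paper's exact loss: the L1 rate proxy and the weights beta and lam are illustrative stand-ins (the paper balances reconstruction error against an actual bit-rate estimate).

```python
# Sketch of a joint objective (assumed form): distortion + rate proxy +
# classification loss computed on the compressed representation z.
import torch
import torch.nn.functional as F

def joint_loss(encoder, decoder, classifier, images, labels,
               beta: float = 0.1, lam: float = 1.0):
    z = encoder(images)                       # compressed feature map
    recon = decoder(z)                        # decoded RGB image
    logits = classifier(z)                    # inference directly on z

    distortion = F.mse_loss(recon, images)    # reconstruction error
    rate = z.abs().mean()                     # placeholder for a bit-rate term
    cls = F.cross_entropy(logits, labels)     # classification on compressed z
    return distortion + beta * rate + lam * cls
```

Backpropagating a combined loss of this kind updates the encoder, decoder, and classifier together, which is the synergy the responses above report for both compression metrics and inference accuracy.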
[ 6, 9, 6, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1 ]
[ "iclr_2018_HkXWCMbRW", "iclr_2018_HkXWCMbRW", "iclr_2018_HkXWCMbRW", "r1A9XDwgG", "SkE6QMtlG", "rJx_tnFeM" ]
iclr_2018_ByJIWUnpW
Automatically Inferring Data Quality for Spatiotemporal Forecasting
Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on. While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable. Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results. The problem could be exacerbated in black-box prediction models, such as deep neural networks. In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels. Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures. We evaluate our proposed method on forecasting temperatures in Los Angeles.
accepted-poster-papers
With an 8-6-6 rating all reviewers agreed that this paper is past the threshold for acceptance. The quality of the paper appears to have increased during the review cycle due to interactions with the reviewers. The paper addresses issues related to the quality of heterogeneous data sources. The paper does this through the framework of graph convolutional networks (GCNs). The work proposes a data quality level concept defined at each vertex in a graph based on a local variation of the vertex. The quality level is used as a regularizer constant in the objective function. Experimental work shows that this formulation is important in the context of time-series prediction. Experiments are performed on a dataset that is less prominent in the ML and ICLR community, from two commercial weather services Weather Underground and WeatherBug; however, experiments with reasonable baseline models using a "Forecasting mean absolute error (MAE)" metric seem to be well done. The biggest weakness of this work was a lack of comparison with some more traditional time-series modelling approaches. However, the authors added an auto-regressive model into the baselines used for comparison. Some more details on this model would help. I tend to agree with the author's assertion that: "there is limited work in ICLR on data quality, but it is definitely one essential hurdle for any representation learning model to work in practice. ". For these reasons I recommend a poster.
train
[ "Hk7kJzcxM", "B1GH1Kd4f", "r16AndOEf", "S1GlLvu4G", "rJDUzhtxf", "ry07x_9xG", "rJCAdVTQM", "ByIwKojXM", "rktRm4eQz", "HyChXVl7G", "S1eWfNlmz", "ry6rgEgXG" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Update:\n\nI have read the rebuttal and the revised manuscript. Paper reads better and comparison to Auto-regression was added. This work presents a novel way of utilizing GCN and I believe it would be interesting to the community. In this regard, I have updated my rating.\n\nOn the downside, I still remain uncertain about the practical impact of this work. Results in Table 1 show that proposed method is capable of forecasting next hour temperature with about 0.45C mean absolute error. As no reference to any state of the art temperature forecasting method is given (i.e. what is the MAE of a weather app on a modern smartphone), I can not judge whether 0.45C is good or bad. Additionally, it would be interesting to see how well proposed method can deal with next day temperature forecasting.\n\n---------------------------------------------\nIn this paper authors develop a notion of data quality as the function of local variation of the graph nodes. The concept of local variation only utilizes the signals of the neighboring vertices and GCN is used to take into account broader neighborhoods of the nodes. Data quality then used to weight the loss terms for training of the LSTM network to forecast temperatures at weather stations.\n\nI liked the idea of using local variations of the graph signals as quality of the signal. It was new to me, but I am not very familiar with some of the related literature. I have one methodological and few experimental questions.\n\nMethodology:\nWhy did you decide to use GCN to capture the higher order neighborhoods? GCN does so intuitively, but it is not clear what exactly is happening due to non-linearities. What if you use graph polynomial filter instead [1] (i.e. linear combination of powers of the adjacency)? It can more evidently capture the K-hop neighborhood of a vertex.\n\nExperiments:\n- Could you please formalize the forecasting problem more rigorously. It is not easy to follow what information is used for training and testing. I'm not quite certain what \"Temperature is used as a target measurement, i.e., output of LSTM, and others including Temperature are used as input signals.\" means. I would expect that forecasting of temperature tomorrow is solely performed based on today's and past information about temperature and other measurements.\n- What are the measurement units in Table 1?\n- I would like to see comparison to some classical time series forecasting techniques, e.g. Gaussian Process regression and Auto-regressive models. Also some references and comparisons are needed to state-of-the-art weather forecasting techniques. These comparisons are crucial to see if the method is indeed practical.\n\nPlease consider proofreading the draft. There are occasional typos and excessively long wordings.\n\n[1] Aliaksei Sandryhaila and José MF Moura. Discrete signal processing on graphs. IEEE transactions\non signal processing, 61(7):1644–1656, 2013.", "Thank you for clarifying. Please consider adding units to the title of Table 1.\n\nI have updated my review and rating.", "Thanks for your pointing out. \n\nYes, the unit in Table 1 is Celcius (C). As you mentioned, we normalize features and responses, and feed them into our model. 
Once we get the predicted values (which are in a normalized range) from our model, we do denormalization (inverse normalization) to recover the Celsius unit and report MAE results.\nThe predicted values are denormalized by the constants (standard deviation and mean) from the training dataset.", "Thank you for the clarifications and for improving the draft, although I am still not certain about the units in Table 1. In the Appendix you mentioned that temperature is in C; however, you did not update Table 1. Is MAE given in Celsius? For training purposes, features and responses are often normalized or pre-processed somehow, therefore I want to confirm that the results you report correspond to actual Celsius.", "The paper is an application of neural nets to data quality assessment. The authors introduce a new definition of data quality that relies on the notion of local variation defined in (Zhou and Schölkopf, 2004), and they extend it to multiple heterogeneous data sources. The data quality function is learned using a GCN as defined in (Kipf and Welling, 2016).\n \n1) How many neighbors are used in the experiments? Is this fixed, or is it defined purely by the Gaussian kernel weights as mentioned in 4.2? Setting all weights less than 0.9 to zero seems quite abrupt. Could you provide a reason for this? How many neighbors fit in this range?\n2) How many data points? What is the temporal resolution of your data (every day/hour/minute/etc.)? What is the value of N, T? \n3) The bandwidth of the Gaussian kernel (\\gamma) is quite different (0.2 and 0.6) for the two datasets from Weather Underground (WU) and Weather Bug (WB) (sect. 4.2). The kernel is computed on the features (e.g., latitude, longitude, vegetation fraction, etc.). Location, longitude, distance from coast, etc. are the same no matter the data source (WU or WB). Maybe the way they compute other features (e.g., vegetation fraction) varies slightly, but I would guess the \\gamma should be (roughly) the same. \n4) Are the features (e.g., latitude, longitude, vegetation fraction, etc.) normalized in the kernel? If they are given equal weight (which is in itself questionable) they should be normalized, otherwise some of them will always dominate the distances. If they were indeed normalized, that should be made clear in the paper.\n5) Why do you choose the summer months? How does the framework perform on other months and on meteorological signals other than temperature? The application is very nice and complex, but I find that the experiments are a little bit too limited. \n6) The notations w and W are used for different things, and that is slightly confusing. Usually one is used as a matrix notation of the other.\n7) I would tend to associate data quality with how noisy observations are at a certain node, and not heterogeneity. It would be good to add some discussion on noise in the paper. How do you define an anomaly in 5.2? Is it a spatial or temporal anomaly? Not sure I understand the difference between an anomaly and a bridge node. \n8) “A bridge node highly likely has a lower data quality level due to the heterogeneity”. I would think a bridge node is very relevant, and I don't necessarily see it as having a lower data quality. This approach seems to give more weight to very similar data, while discarding the transitions, which, in a meteorological setting, could be the most relevant ?! \n9) Update the references, some papers have updated information (e.g., Bruna et al. 
- ICLR 2014, Kipf and Welling – ICLR 2017, etc.).\n\nQuality – The experiments need more work and editing, as mentioned in the comments. \nClarity – The framework is fairly clearly presented; however, the English needs significant improvement. \nOriginality – The paper is a nice application of machine learning and neural nets to data quality assessment, and the forecasting application is relevant and challenging. However, the paper mostly provides a framework that relies on existing work.\nSignificance – While the paper could be relevant to data quality applications by introducing advanced machine learning techniques, it has limited reach outside the field. Maybe publish it in a data quality conference/journal.", "Summary of the reviews:\nPros:\n•\tA novel way to evaluate the quality of heterogeneous data sources.\n•\tAn interesting way to put the data quality measurement as a regularization term in the objective function.\nCons:\n•\tSince the data quality is a function of local variation, the advantage of the proposed data quality regularization over a simple moving-average regularization or local smoothness regularization is unclear.\n•\tThe advantage of using the multi-layer graph convolutional networks over two naïve settings for graph construction + a simple LSTM is unclear; see detailed comment D1.\n\nDetailed comments:\nD1: Compared to the proposed approaches, there are two alternative naïve ways: 1) Instead of constructing the similarity graph with a Gaussian kernel and associating each vertex with different types of time-series, we can also construct one unified similarity graph that is a weighted combination of different types of data sources and then apply a traditional LSTM; 2) During the GCN phases, one can apply a type-aware random walk as an extension of deep walk, which can only handle a single source of data. The advantage of using the multi-layer graph convolutional networks over these two naïve settings is unclear. Either some discussion or empirical comparisons would be a plus.\n", "We really appreciate your comments and have updated the following.\n\n\"The explanations added in the paper and in the authors' comments are valuable and they should be included as much as possible in future versions of the paper, especially answers to questions 7 and 8 concerning the behavior of bridge nodes and anomalies.\"\n>> We added our explanation (Sections 5.2 and 5.3) about the behavior of bridge nodes and anomalies in the discussion section (about Q7 and Q8) based on our responses to previous comments. Furthermore, we proofread our discussion to help make the explanation clear. \n\n\"The confusing sentence at Q8 has not been changed in the paper, but the authors' explanation in the comments is very useful and should be added to the paper.\"\n>> As you suggested, we updated Section 5.3 to make it clear and less confusing. We added our explanation of how the data quality level is affected by neighboring nodes and quantitatively showed that the level is inferred correctly.\n\n\"Answer to Q6: I understand the notations—my comment was a suggestion to increase readability of the paper by using different letters for different things, not just lowercase vs uppercase vs bold.\"\n>> About Q6: We changed the data quality level notation to $s_i$ instead of $w_i$ for readability. 
Bold $s$ and $s_i$ are used to point out the data quality level, and $W$ is only used for edge weights.\n\n\"Forecasting applications are extremely important and challenging, but the experimental section needs to be better explained and further developed. Some of the sentences are still confusing and this makes it hard to follow some of the arguments. Please consider having the manuscript edited by someone who is an English expert.\"\n>> We added details of our experiments (including the experimental setting and a discussion of results) and edited the manuscript carefully (especially the Abstract, Introduction, Experiments, Results, and Conclusion sections) with the help of a native English speaker to improve its readability. \n\nThank you for your comments.", "I would like to thank the authors for their effort in revising the manuscript. The paper addresses the very important issue of heterogeneity of data sources; however, I still believe the manuscript could be further improved to better make the case for this important issue. \n\nThe explanations added in the paper and in the authors' comments are valuable and they should be included as much as possible in future versions of the paper, especially answers to questions 7 and 8 concerning the behavior of bridge nodes and anomalies. The confusing sentence at Q8 has not been changed in the paper, but the authors' explanation in the comments is very useful and should be added to the paper. Answer to Q6: I understand the notations—my comment was a suggestion to increase readability of the paper by using different letters for different things, not just lowercase vs uppercase vs bold. \n\nI have updated my grading, but still think that the paper needs more work before publication. Forecasting applications are extremely important and challenging, but the experimental section needs to be better explained and further developed. Some of the sentences are still confusing and this makes it hard to follow some of the arguments. Please consider having the manuscript edited by someone who is an English expert.\n", "6) >>\nWe use the lowercase w for the data quality levels of each station and the uppercase W for edge weights. The bold w is the vector having N elements, and w_i is the data quality level of the i-th weather station. We have fixed the confusion in the updated draft. \n\n7) >>\n- In our paper, we assume that if two connected nodes (weather stations) have similar spatial features, these nodes are highly likely to observe similar meteorological measurements. Thus, if two connected nodes provide very different observations, we think that these observations are not reliable and could be noisy. Furthermore, if there is a group of connected nodes and some nodes observe significantly different measurements from the other nodes in the group, we can say that the measurements are too heterogeneous and not reliable. In other words, the data quality level is associated with noisy as well as heterogeneous observations.\n- For the anomaly detection, what we would like to propose is that the embeddings from our model can be used to visualize nodes based on their connectivity and observations. For example, in Figure 3, the red dot (v_22) is connected with the green dots (v_19, v_20, v_21, v_23, v_25, v_29). Since these nodes have similar spatial features and are connected, they are expected to have similar observations. At t=0, the distribution of the nodes looks like a cluster. However, v_25 is far away from the other green nodes and the red node at t=4. 
There are two possible reasons. First, the observations of v_25 at t=4 may be too different from those of the other green nodes and the red node. Second, the observations of a node (v_4) that is only connected to v_25 (not the other green nodes and the red node) might be too noisy. In the first case, since it violates our assumption (v_25's observations should be similar to those of the other green nodes), the observations of v_25 at t=4 might be noisy or not reliable. In the second case, the observations of v_4 at t=4 might be noisy. Thus, we can detect potentially anomalous nodes by looking at the distribution of nodes. (A bridge node is not necessarily an anomaly.)\n- The anomaly comes from temporal observations compared to neighbor observations. Thus, two aspects (spatial and temporal) are jointly considered. \n\n8) >> \nThanks for pointing this out. We agree that the sentence you quote can cause some confusion. A bridge node in our paper is considered a node connecting two (or more) clusters that consist of nodes having similar features. As a result, a bridge node is affected by two (or more) different groups of nodes simultaneously, and thus the quality level at the bridge node is more susceptible than those of other nodes. However, it does not directly mean that a bridge node necessarily has a lower data quality level.\n\n9) >>\nThanks for your suggestion; we updated the conference information in the revised version. \n\nQuality, Clarity, Originality, Significance >>\nWe do not agree with the comment on significance. Our paper proposes a novel solution to automatically infer data quality levels based on local variations of graph signals and demonstrates that the quality levels can be used to reduce forecasting error and detect potentially anomalous observations. It provides a new idea on inferring an interpretable quantity without explicit labels. We agree that there is limited work in ICLR on data quality, but it is definitely one essential hurdle for any representation learning model to work in practice. ", "We sincerely thank the reviewer for the insightful comments and suggestions. We would like to stress that assessing and mitigating heterogeneity of data quality is an extremely important but less-studied topic in machine learning. Our work aims at positioning the task and providing one possible solution to address this important problem. The novelty of the paper lies in this important contribution rather than purely in the novelty of the proposed model.\n\nBelow is our response to the questions. \n1) and 3) >>\nThanks for pointing out these potentially confusing issues. To build a symmetric adjacency matrix, we did not set a fixed number of neighbors. Instead, we defined the neighbors by the Gaussian kernel weights with the weight threshold. Since the numbers of weather stations in WeatherBug (WB) and Weather Underground (WU) are different (#WU: 42, #WB: 159), the average numbers of neighbors in the datasets are too different under the same bandwidth (\\gamma). We aim at evaluating our model under similar degree distributions (similar topology), and therefore we adjust the bandwidth to make the average numbers of adjacent neighbors in the two datasets similar under the 0.9 threshold. The average degrees are 6.0 and 7.5 for WU and WB, respectively. (A toy sketch of this construction appears after this review block.)\n\n2) >>\n- Each weather station in the data sources (WU and WB) has a different temporal resolution. 
Some weather stations record measurements every 5 minutes, while other weather stations operate their sensors every 30 minutes. So, we fix the temporal granularity at 1 hour and aggregate the observations in each hour by averaging them. \n- N is the total number of weather stations: #WU: 42, #WB: 159. \n- T is the total length of the signals, and it is 744 hours (24 hours/day * 31 days) for July and August.\n\n4) >>\nThis is a very good point. Yes, we are aware that features represented by large numbers (e.g., latitude ~ 34 degrees) can dominate features represented by small numbers (e.g., vegetation fraction < 1.0), which is exactly what we want to avoid. Thus, it is essential to normalize the features before using them for computing the distances. As you point out, equal weighting may not be perfectly correct; however, this is what we can do without any domain knowledge.\n\n5) >>\n- It is a very good point too. Interestingly, the Los Angeles area contains many microclimates, which means that the daytime temperatures can vary as much as 36°F (19°C) between inland areas such as the San Fernando Valley and the coastal Los Angeles Basin. The temperature differences between different areas are more obvious during the summer season (than other seasons). For example, the average high temperature (°F) can vary as much as 26°F for July and August across 9 different regions (Downtown LA, LAX, Santa Monica, Culver City, Long Beach, San Fernando Valley, Burbank, Santa Ana, and Anaheim). In contrast, it varies only 6°F for December and January in the same regions. Thus, it is more challenging to predict temperatures in the summer season, which is why we chose two months (7 and 8) that often show extreme fluctuations of temperatures.\n- The bigger picture of our work is investigating the urban heat island effect in the Los Angeles area. Temperature is the most important factor in the urban heat island effect and exhibits more variation (and is more difficult to predict). In contrast, other observations, such as Relative Humidity or Precipitation, are very stable in the Los Angeles area. They are very easy to predict and therefore cannot justify a complex model.\n\n", "Thank you for your comments and suggestions to improve the paper. Below are our responses to the main points of your comments: \n\nMethodology\n>> It is commonly believed that the Earth's climate system is a complex one involving many nonlinear interactions between climate factors. Neural networks have proven to be an effective solution for capturing nonlinear dependencies in many applications. To capture nonlinearity, we use a nonlinear activation function (ReLU). Furthermore, multiple layers are more effective at learning features at various levels of abstraction. \nThe graph polynomial filter (Equation (7) in [1]) has a similar form to GCN filter operations, but it does not consider nonlinearity. Since the GCN is also based on polynomials of the adjacency matrix, it equivalently handles the K-hop neighborhood of a vertex. (A small side-by-side sketch of the two filters appears after this record.)\n\nExperiments\n>> Could you please formalize the forecasting problem more rigorously.\nThanks for pointing out the confusion. For the sentence you quoted, we did intend to describe it as you suggested: in other words, future temperatures are forecasted by looking at past temperatures as well as other meteorological observations. We have updated the expression to be clearer.\n\n>> What are the measurement units in Table 1?\nWe have added the details of the climate dataset in the appendix in the updated draft. 
\n\n>> I would like to see comparison to some classical time series forecasting techniques, e.g. Gaussian Process regression and Auto-regressive models. \nThanks for the baseline suggestions. We have added results for an auto-regressive model for a more robust comparison in the updated draft. \n\nThanks for your comments; we proofread the draft and made it clearer.", "Thank you for your comments and suggestions to improve the paper. Below are our responses to the main points of your comments: \n\nCons 1.\n>> It is an interesting point. A simple moving average or local smoothness regularization may improve the forecasting performance. However, these regularizations cannot infer the data quality level, which is useful for understanding time-varying graph signals.\n\nCons 2 and Detailed comments\n>> Thanks for the suggestions of more baselines. For the former one, we do not have a specific way to get a unified similarity graph based on the different types of time-series. That is why we only consider the spatial (static) features to construct the graph structure. But it could be an interesting direction to explore. \nIt is a great idea to think about different ways to cover K-hop neighbor nodes. A random deep walk is one such way. Although we have not compared our method to these random-walk-based methods, the multi-layer GCNs with the data quality networks are more flexible in learning latent interactions between temporal observations due to their nonlinearity and compositional property. It may be possible to improve forecasting quality by using the random walk method; however, we focus on automatically inferring the data quality level, which is not easily achievable in other ways." ]
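The graph construction and data quality notion discussed in this thread can be sketched concretely: a Gaussian kernel over normalized station features with the 0.9 cutoff, and the local variation of a graph signal at each vertex. The final quality mapping exp(-local variation) is an illustrative assumption, not the paper's exact definition.

```python
# Sketch (assumed details) of the Gaussian-kernel adjacency with a 0.9 cutoff
# and the local variation of a graph signal, per Zhou & Schoelkopf (2004).
import numpy as np

def build_adjacency(features, gamma, threshold=0.9):
    # Normalize features so that large-valued ones do not dominate distances.
    f = (features - features.mean(0)) / (features.std(0) + 1e-8)
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    w = np.exp(-gamma * d2)                              # Gaussian kernel weights
    w[w < threshold] = 0.0                               # abrupt 0.9 cutoff
    np.fill_diagonal(w, 0.0)
    return w

def local_variation(w, x):
    # ||grad x||_i = sqrt(sum_j w_ij * (x_j - x_i)^2)
    return np.sqrt((w * (x[None, :] - x[:, None]) ** 2).sum(axis=1))

feats = np.random.rand(42, 8)        # e.g. 42 WU stations, 8 spatial features
temps = np.random.rand(42)           # one snapshot of temperature readings
W = build_adjacency(feats, gamma=0.2)
quality = np.exp(-local_variation(W, temps))  # assumed decreasing mapping
```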
[ 6, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByJIWUnpW", "r16AndOEf", "S1GlLvu4G", "S1eWfNlmz", "iclr_2018_ByJIWUnpW", "iclr_2018_ByJIWUnpW", "ByIwKojXM", "HyChXVl7G", "rJDUzhtxf", "rJDUzhtxf", "Hk7kJzcxM", "ry07x_9xG" ]
iclr_2018_Sy21R9JAW
Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods alongside a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.
accepted-poster-papers
With scores of 7-7-6 and the justification below the AC recommends acceptance. One of the reviewers summarizes why this is a good paper as follows: "This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances: - This gives a more unified way of understanding, and implementing the methods. - The paper points out situations when the methods are equivalent - The paper analyses the methods' sensitivity to identifying single and joint regions of sensitivity - The paper proposes a new objective function to measure joint sensitivity"
train
[ "rJUrhpYxf", "Byt56W9lM", "SymYit2xf", "H1QgktIGG", "rJoXE_UMG", "B1NvQ_Izz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances:\n- \\epsilon-LRP and DeepLIFT are formulated in a way that can be calculated using the same back-propagation as training.\n- This gives a more unified way of understanding, and implementing the methods.\n- The paper points out situations when the methods are equivalent\n- The paper analyses the methods' sensitivity to identifying single and joint regions of sensitivity\n- The paper proposes a new objective function to measure joint sensitivity\n\nOverall, I believe this paper to be a useful contribution to the literature. It both solidifies understanding of existing methods and provides new insight into quantitate ways of analysing methods. Especially the latter will be appreciated.", "The paper summarizes and compares some of the current explanation techniques for deep neural networks that rely on the redistribution of relevance / contribution values from the output to the input space.\n\nThe main contributions are the introduction of a unified framework that expresses 4 common attribution techniques (Gradient * Input, Integrated Gradient, eps-LRP and DeepLIFT) in a similar way as modified gradient functions and the definition of a new evaluation measure ('sensitivity n') that generalizes the earlier defined properties of 'completeness' and 'summation to delta'.\n\nThe unified framework is very helpful since it points out equivalences between the methods and makes the implementation of eps-LRP and DeepLIFT substantially more easy on modern frameworks. However, as correctly stated by the authors some of the unification (e.g. relation between LRP and Gradient*Input) has been already mentioned in prior work.\n\nSensitivity-n as a measure tries to tackle the difficulty of estimating the importance of features that can be seen either separately or in combination. While the measure shows interesting trends towards a linear behaviour for simpler methods, it does not persuade me as a measure of how well the relevance attribution method mimics the decision making process and does not really point out substantial differences between the different methods. Furthermore, The authors could comment on the relation between sensitivity-n and region perturbation techniques (Samek et al., IEEE TNNLS, 2017). Sensitivtiy-n seems to be an extension of the region perturbation idea to me.\n\nIt would be interesting to see the relation between the \"unified\" gradient-based explanation methods and approaches (e.g. Saliency maps, alpha-beta LRP, Deep Taylor, Deconvolution Networks, Grad-CAM, Guided Backprop ...) which do not fit into the unification framework. It's good that the author mention these works, still it would be great to see more discussion on the advantages/disadvantages, because these methods may have some nice theoretically properties (see e.g. the discussion on gradient vs. decompositiion techniques in Montavon et al., Digital Signal Processing, 2017) which can not be incorporated into the unified framework.", "The paper shows that several recently proposed interpretation techniques for neural network are performing similar processing and yield similar results. 
The authors show that these techniques can all be seen as a product of input activations and a modified gradient, where the local derivative of the activation function at each neuron is replaced by some fixed function.\n\nA second part of the paper looks at whether explanations are global or local. The authors propose a metric called sensitivity-n for that purpose, and make some observations about the optimality of some interpretation techniques with respect to this metric in the linear case. The behavior of each explanation w.r.t. these properties is then tested on multiple DNN models on real-world datasets. Results further outline the resemblance between the compared methods.\n\nIn the appendix, the last step of the proof below Eq. 7 is unclear. As far as I can see, the variable g_i^LRP wasn’t defined, and the use of Eq. 5 to achieve this last step could be better explained. There also seem to be some issues with the ordering of i,j, where these indices alternately describe the lower/higher layers, or the higher/lower layers.", "Thanks for your extensive review and useful feedback.\n\nWhile we agree that it would be interesting to compare with the other mentioned methods, the reasons we decided not to do so are various. Saliency maps and Deep Taylor Decomposition only produce positive attribution maps, and this would penalize these methods in the sensitivity-n metric given that our task inputs do contain some negative evidence (as shown in Figure 3c). Similarly, alpha-beta LRP also adds some bias towards positive attributions with the parameters suggested by the authors. Grad-CAM, Deconvolutional Networks and Guided Backpropagation can only be applied to specific network architectures and do not fit our goal to compare methods across tasks and architectures, while we believe it is important for attribution methods to be as general as possible. We reported the same arguments at the end of section 2.2.\nWe also agree with your statement that other methods have been shown to have interesting theoretical properties. We actually did not intend to claim the superiority of gradient-based methods and added a note to clarify this in Section 3.1 in our last revision.\n\nAbout the connection with the region perturbation technique (Samek et al. 2017): this is similar to what we use (and now mention explicitly) to produce Figure 3c, with the difference that we occlude one pixel at a time, we produce the curves for the negative ranking as well as for the positive, and we plot the output variation directly on the y-axis instead of the AOPC. This technique evaluates methods based on i) how \"fast\" the target activation drops or increases and ii) how \"much\" the target activation changes. However, we show in Figure 3c that these two criteria often collide if the curves for different methods intersect. This is in fact what motivated sensitivity-n.\nWith sensitivity-n, we fix a value n and remove random subsets of n features from the input (without following the ranking given by the attribution maps) and measure the Pearson correlation with the output variation (a toy sketch of this procedure appears after this review block). This shows that different methods are better at producing different explanations (influence of single features vs influence of regions), and therefore the question itself of which is the best attribution method does not make sense if a task is not further specified.", "We thank the reviewer for his/her feedback.", "Thanks for your review. \nWe reworked the last part of the proof A.1 in our last revision. 
In particular, we removed the variable g_i that was not defined and better explained the last step.\nWe did not find issues with the ordering of the subscripts i,j in the proof itself, but we did notice that it was inconsistent with the convention we used in Section 2. We have now fixed it such that, when two subscripts are present, the first one always refers to the layer closer to the output." ]
[ 7, 6, 7, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1 ]
[ "iclr_2018_Sy21R9JAW", "iclr_2018_Sy21R9JAW", "iclr_2018_Sy21R9JAW", "Byt56W9lM", "rJUrhpYxf", "SymYit2xf" ]
iclr_2018_SyJ7ClWCb
Countering Adversarial Images using Input Transformations
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods.
accepted-poster-papers
A well-written paper proposing some reasonable approaches to counter adversarial images. Proposed approaches include non-differentiable and randomized methods. Anonymous commentators pushed on and cleared up some important issues regarding white, black and gray "box" settings. The approach appears to be a plausible defence strategy. One reviewer is a holdout on acceptance, but is open to the idea. The authors responded to the points of this reviewer sufficiently. The AC recommends acceptance.
train
[ "HJLVhosrz", "SJR7osjSG", "SyYA9jsBz", "rkx25isrM", "ryi9KVKBG", "Bk1rU7YSz", "HksQPiUVM", "B1gREqS4f", "HyulgtSVz", "S1wXIrVgM", "Sk47YIYlM", "SJzYnEqef", "rk5sqWNNG", "HkKhk67Nf", "rkqFThmEM", "rJX6P09fz", "r1XVPRqzM", "SyNlDC9GM", "r1-ST6olG", "SJp9ze61G", "H1mJgiqJG", "ry4zt5Vyf", "HyOUaNzkz", "H1dO8OqC-", "HktFIuqC-", "HJkZJjhAb", "ByXm1shC-", "ryuYj65C-", "Hk1TeF9RZ", "SJ67F79R-", "BkDPS_tAb" ]
[ "public", "public", "public", "public", "author", "public", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "author", "public", "public", "public", "author", "public", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "There is a paper “Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods” (https://nicholas.carlini.com/papers/2017_aisec_breakingdetection.pdf) which explores how a stochastic model could be de-randomized and successfully attacked.\n\nThus while randomness makes an attack harder, it does not prevent a white-box adversary from conducting a successful attack.", "Due to the concerns described above, it appears that the authors’ defense unfortunately reduces to security through obscurity. When this concern was brought up in the comments, the authors claim that this is not the case because while the adversary may know the defense strategy, they will not know the exact randomized transformation. However, as described above, this is not true, and the authors don’t properly address the case when the adversary actually knows the defenses.\nThe problem here is that as soon as the paper is published, any potential adversary knows that these methods could be used as a defense -- and thus they can adjust their attacks accordingly.", "All the attacks tested in the paper are done in either “gray-box” or “black-box” settings. In both cases, the adversary does not have full knowledge about the defense being used.\nHowever, the authors’ replies to the reviewers’ comments as well as statements in the paper (e.g. “we focus on increasing the effectiveness of model-agnostic defense strategies by developing approaches that ... are still effective in settings in which the adversary has information on the defense strategy being used”) create an impression that the authors claim their defenses would work in the true “white-box” case.\n\nIt is worth noting that in the true white-box case, the adversary knows both the model and all preprocessing techniques. The authors do not study this case, and their claim of effectiveness is based on the fact that their proposed preprocessing techniques (TV and image quilting) are stochastic and non-differentiable.\n\nIt was already mentioned in the other comments that non-differentiable transformations make it hard to use common gradient-based attacks. As a note, attacking them is still possible, e.g. by training a substitute neural network to approximate the non-differentiable transformation. \n\nMoreover, an attack on a stochastic defense was successfully performed in “Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods” (https://nicholas.carlini.com/papers/2017_aisec_breakingdetection.pdf).\nThe general idea is the following. Suppose you have a stochastic model f which maps the input into the logits vector (in this case, f will include both the input transformation and the classifier). Then, you can de-randomize f by sampling multiple instances f_1, ... , f_N of this model. Then you can attack an ensemble of deterministic functions f_1, …, f_N and find adversarial examples that fool all (or most) of them. If the functions f_i happen to be non-differentiable, then you can approximate them with differentiable functions and then conduct an attack.\n\nGiven that the white-box attack against stochastic defenses proposed by Carlini et al is well known, we think it was important for the authors to evaluate their defense against this attack. The authors seem to imply this is not an immediate concern (e.g. \"We leave the investigation of attacks that are tailored to our defenses to future work\" in response to Reviewer #2), but we think that any paper that proposes a stochastic defense should be evaluated against this attack to see how well it performs in the white-box case. 
\n", "There are a few problems in the methodology which can make the results appear to be overly optimistic.\n\nThe first problem is the way the authors have chosen the size of their adversarial perturbation.\nThey claim (section 2 of the paper) that “success rate is generally measured as a function of the magnitude of the perturbations performed by the attack, using the normalized L2-dissimilarity” and provide a formula for the L2-dissimilarity in equation (1).\nThis claim is not supported by any references. Moreover, many papers which the authors cite (including the C&W, FGSM and I-FGSM attacks or any papers about adversarial training) actually use different metrics to measure the strength of an adversary. It could be either the L2-distance to the adversarial example (like in the C&W attack) or the accuracy of the defense against adversarial examples with a fixed L_{\\infty} size of perturbation (like in the FGSM and I-FGSM attacks). The authors’ L2-dissimilarity metric makes it impossible to infer the corresponding L_2 or L_{\\infty} size of the adversarial perturbation, and thus makes it almost impossible to fairly compare the results of the paper to any existing work.\n\nThe second problem is the fact that the authors seem to misunderstand the I-FGSM method. As initially described in https://arxiv.org/abs/1607.02533, I-FGSM does clipping after each step (which is called “projection” in PGD). However, according to eq (3) from the paper, the authors do not use projection or clipping. Without projection after each step, the attack may be much weaker than it should be because it will end up greatly overshooting its final epsilon boundary (and have one large clipping at the end). \n\nAlso, the authors say \"Since PGD is related to I-FGSM (the main difference between the two is the projection step after every iteration), we expect that our defenses will have similar performance against PGD.\" Actually, the most important difference between PGD and I-FGSM is not the projection step (both attacks have it), but the fact that PGD starts with random noise and also performs random restarts throughout the attack -- both of which make the attack much stronger. Because of this, it appears that the PGD accuracies from the comments may be incorrect.\n\nThe last problem is the number of steps used in iterative attacks. The authors use 10 steps for the I-FGSM attack and 5 steps for DeepFool. This might have been okay if their step size per iteration was large, but in the paper, the authors say their step size was just large enough to achieve their desired L2-dissimilarity. However, to achieve a strong enough attack, step_size_per_iteration * number_of_steps should be greater than the final epsilon boundary. \nIdeally, the authors need to explore how the number of iterations affects the strength of the adversary, assuming a fixed step size. An example of a good justification for choosing the number of iterations is figure 1 from https://arxiv.org/abs/1706.06083, which clearly shows that at least 40 - 50 iterations are needed to achieve a strong attack in their case.", "Thanks for pointing us to this work! We will update our paper to refer to it.", "Additional prior works in this area [1] have shown transformations to be effective in countering adversarial examples, including cropping and resizing. \n\nAs I did not see the paper cited, I thought it might be worth bringing them to the authors' attention, especially given the similar nature of the work.\n\n[1] https://arxiv.org/abs/1610.04256", "I believe that clarifies the statement made. 
The claim being made is not that the adversary does not have any knowledge of the defense, but just that the adversary doesn't know the exact choices of randomness that the defender makes.", "We agree that security through obscurity is inherently weak and ineffective. However, the defenses that we evaluated, in particular TV minimization and image quilting, are stochastic in nature. This allows the defense to randomize its transformation, and hence prevents the adversary from knowing the exact transformation being applied even if the defense strategy is known. The image quilting defense enjoys the additional property that the patch database used to construct the quilted images can be considered as the secret key, which obeys Kerckhoffs's principle: even if the defense strategy is known, the adversary cannot apply the same quilting transformation if he does not have access to the patch database.", "It appears that in the revised manuscript, the authors completely change their threat model. In the new draft of the paper, the authors add the sentence \"our defenses assume that part of the defense strategy (viz., the input transformation) is unknown to the adversary\".\n\nThis is a completely unreasonable assumption. Any algorithm which hopes to be secure must allow the adversary to, at the very least, understand what the defense is that's being used. Consider a world where the defense here is implemented in practice: any attacker in the world could just go look up the paper, read the description of the algorithm, and know how it works.\n\nThe authors mention Kerckhoffs's Principle in a comment below, and that's exactly what is being violated here. It is perfectly reasonable to assume the adversary does not have the model parameters or training data. But declaring the entire defense process a secret is exactly what Kerckhoffs's Principle says is *not* okay.\n\nThe current threat model of this paper appears to be arguing for security through obscurity, and that is not a reasonable defense.", "To increase robustness to adversarial attacks, the paper fundamentally proposes to transform an input image before feeding it to a convolutional network classifier. The purpose of the transformation is to erase the high-frequency signals potentially embedded by an adversarial attack.\n\nStrong points:\n\n* To my knowledge, the proposed defense strategy is novel (even if the idea of transformation has been introduced at https://arxiv.org/abs/1612.01401). \n\n* The writing is reasonably clear (up to the terminology issues discussed among the weak points), and properly introduces the adversarial attacks considered in the work.\n\n* The proposed approach really helps in a black-box scenario (Figure 4). As explained below, the presented investigation is however insufficient to assess whether the proposed defense helps in a true white-box scenario. \n\n\nWeak points:\n\n* The black-box versus white-box terminology is not appropriate, and confusing. In general, black-box means that the adversary knows nothing about the decision process. Hence, in this case, the adversary does not know about the classification model, nor about the defensive method when one is used. This corresponds to Figure 3. On the contrary, white-box means that the adversary knows everything about the classification method, including the transformation implemented to make it more robust to attacks. 
Treating the parameters of the transform as a secret key is not correct because those parameters could be inferred by presenting many image samples to the transform and looking at the outcome of the transformation (which is supposed to be available in a 'white-box' paradigm) for those samples. \n\n* Using block diagrams would definitely help in presenting the training/testing and attack/defense schemes investigated in Figures 3, 4, and 5.\n\n* The paper does not discuss the impact of the defense strategy on the classification performance in the absence of an adversary.\n\n* The paper lacks positioning with respect to recent related works, e.g. 'Adversary Resistant Deep Neural Networks with an Application to Malware Detection' in KDD 2017, or 'Building Adversary-Resistant Deep Neural Networks without\nSecurity through Obscurity' at https://arxiv.org/abs/1612.01401. \n\n* In a white-box scenario, the adversary knows about the transformation and the classification model. Hence, an effective and realistic attack should exploit this knowledge. Designing an attack in the case of a non-differentiable transformation is obviously not trivial since back-propagation cannot be used. However, since the proposed transformations primarily aim at removing the high-frequency pattern induced by the attack, one could for example design an attack that accounts for a (linear and differentiable) low-pass filter transformation. Another example of an attack that accounts for knowledge of the transformation (and would hopefully be more robust than the attacks considered in the manuscript) could be one that alternates between a conventional attack and the transformation.\n\n* If I understand correctly, the classification model considered in Figure 3 has been trained on original images, while the one in Figure 4 has been trained on transformed images. However, in the absence of an attack, they both achieve 76% accuracy. Is this correct? Does it mean that the transformation does not affect the classification accuracy at all?\n\n\nOverall, the work investigates an interesting idea, but lacks the maturity to be accepted. Therefore, I would only recommend acceptance if there is room.\n\nMinor issues:\n\nTypo on p7: to change*s*\nClarify poor formulations:\n* p1: 'enforce model-specific strategies that enforce model properties such as invariance and smoothness via the learning algorithm or regularization schemes'. \n* p1: 'too simple to remove adversarial perturbations from input images sufficiently'", "Summary: This work proposes strategies to make neural networks less sensitive to adversarial attacks. They consist in applying different transformations to the images, such as quantization, JPEG compression, total variation minimization and image quilting. Four adversarial attack strategies are considered to attack a ResNet50 model for classification of ImageNet images.\nExperiments are conducted in a black-box setting (when the model to attack is unknown by the adversary) or white-box setting (the model and defense strategy are known by the adversary).\n60% of attacks are countered in this last, most difficult setting.\nThe previous best approach for this task consists in ensemble training and is attack-specific. It is therefore pretty robust to the attack it was trained on but is largely outperformed by the authors' methods, which manage to reduce the classifier error drop below 25%. 
\n\nComments: The paper is well written, the proposed methods are well adapted to the task and lead to satisfying results.\n \nThe discussion remarks are particularly interesting: the non-differentiability of the total variation and image quilting methods seems to be the key to their best performance in practice.\nMinor: the bibliography should be made uniform.", " The paper investigates using input transformation techniques as a defence against adversarial examples. The authors evaluate a number of simple defences that are based on input transformations such as TV minimization and image quilting and compare them against previously proposed ideas of JPEG compression and decompression and random crops. The authors have evaluated their defences against four main kinds of adversarial attacks.\n\nThe main takeaways of the paper are to incorporate transformations that are non-differentiable and randomised. Both TV minimisation and image quilting have that property and show good performance in withstanding adversarial attacks in various settings. \n\nOne argument that could perhaps be used by adversarial attacks is as follows: if the defence uses image quilting for instance and obtains an image $P$ that approximates the original observation $X$, it could be possible to use a model-based approach that obtains an observation $Q$ that is close to $P$, which can be attacked using adversarial attacks. Would this observation then be vulnerable to such attacks? This could perhaps be explored in future.\n\nThe paper provides useful contributions in forming model-agnostic defences that could be further investigated. The authors show that the simple input transformations advocated work against the major kinds of attacks. The input transformations of TV minimization and image quilting share varying characteristics in terms of being sensitive to various kinds of attacks and therefore can be combined. The evaluation is carried out on the ImageNet dataset with a large number of examples.", "Thank you for your insightful comments on our work, which have been very helpful in improving the paper!\n\n* The black-box versus white-box terminology is not appropriate...\n\nAs several public comments have pointed out, the white-box terminology can be misleading. Some of our experiments are performed in a \"gray-box\" setting in which the adversary has access to the network parameters, but not to the quilting database that acts as a kind of \"secret key\". 
We believe that this gray-box setting is of practical interest because the quilting process is stochastic and because the adversary never directly observes the quilted images themselves: this makes it very difficult for the adversary to exactly reproduce the quilted images that the defender produces. Per your suggestion, we have clarified the learning-setting terminology in the revised version of the paper.\n\n* Using block diagrams would definitely help in presenting the training/testing and attack/defense schemes investigated in Figures 3, 4, and 5.\n\nPer your suggestion, we have added block diagrams clarifying the workflow of our attack/defense schemes in the revised version of the paper.\n\n* The paper does not discuss the impact of the defense strategy on the classification performance in the absence of an adversary.\n\nThe first row of Tables 1 and 2 presents the accuracy of various defenses on non-adversarial images ("no attack"). In Figures 3, 4 and 5, the y-axis value corresponding to a normalized L2-dissimilarity of 0 corresponds to the accuracy on non-adversarial images. We have emphasized this point in the table and figure captions in the revised version of the paper.\n\n* The paper lacks positioning with respect to recent related works, e.g. 'Adversary Resistant Deep Neural Networks with an Application to Malware Detection' in KDD 2017, or 'Building Adversary-Resistant Deep Neural Networks without Security through Obscurity' at https://arxiv.org/abs/1612.01401.\n\nThank you for pointing out these references, which we were unaware of at the time of submission. Both approaches are similar to our defenses in the sense that they focus on non-differentiable, stochastic transformations. Having said that, there are also substantial differences between our study and those related works. The first paper relies on LLE to represent data points as a linear combination of nearest neighbors: this approach may certainly be suitable for certain kinds of data, but is unlikely to work very well in extremely high-dimensional spaces such as the ImageNet pixel space. The second paper's approach of randomly removing blocks of pixels is related to our image-cropping defense, which is one of our baselines. We have included positioning with respect to these works in the revised version of the paper.\n\n* In a white-box scenario, the adversary knows about the transformation and the classification model. Hence, an effective and realistic attack should exploit this knowledge...\n\nIn white-box settings, it may, indeed, be possible to devise attacks that are tailored towards a particular defense. In our work, we have tried to make the development of such attacks non-trivial by making our defenses non-differentiable and stochastic. Having said that, it may certainly be possible to devise attack strategies that are successful nevertheless (such as the strategy sketched in our response to AnonReviewer3). We leave the investigation of attacks that are tailored to our defenses to future work.\n\n* If I understand correctly, the classification model considered in Figure 3 has been trained on original images, while the one in Figure 4 has been trained on transformed images. However, in the absence of an attack, they both achieve 76% accuracy. Is this correct? Does it mean that the transformation does not affect the classification accuracy at all?\n\nThe 76% accuracy is obtained by a convolutional network that is trained and tested on images on which no defense (i.e., input transformation) is applied. 
The \"no defense\" baseline is thus exactly the same in both Figures 3 and 4. For defenses such as TV minimization and quilting, the accuracy on non-adversarial images is lower (both in Figures 3 and 4), which shows that the transformations, indeed, do negatively impact classification accuracy on non-adversarial images.", "Thank you for your insightful comments, and positive evaluation of our work! \n\nRegarding model-based approaches for attacking our quilting defense: we agree this may be the most viable option for attacking our defense. As you suggest, it may be possible for the adversary to construct its own patch database, and use it to construct quilted images that may be sufficiently similar to the quilted image created using our "secret database". The remaining issue for the adversary is then to backpropagate gradients through the quilting transformation: the adversary may be able to do this by training a pixel-to-pixel network that learns to produce the quilted image given an original image, and using this network to approximate gradients. We intend to investigate such attack approaches in future work. We have updated our paragraph describing future work to reflect this.\n", "Thanks for your positive evaluation of our paper! Per your suggestion, we have updated the bibliography entries to make them uniform.", "This work considers ImageNet, which is a far more challenging dataset. After reading the other submission (which is quite interesting too), I humbly think that this work is the most sensible ICLR submission on defending against adversarial attacks.", "You might want to check out https://openreview.net/forum?id=S18Su--CW where the authors show that simple quantization (depth reduction) leads to a loss of accuracy on clean examples, but you can get around it by discretizing your input. Also, simple quantization maintains a roughly linear response of the classifier to its input (see Figure 1), which is hypothesized as the cause of non-robustness to adversarial attacks (Goodfellow, 2014). The attacks proposed are also not "grey-box" like in this paper, i.e. the attacker has knowledge of the transformations and can attack using "discretized" versions of various iterated algorithms. The defense of this paper is essentially defense by "obfuscation", under the assumption that the attacker does not know what "obfuscation" has happened.", "The authors' result is not surprising at all. It is actually one of the few works on the defense side which makes sense to me.\nPersonally, I have not seen any strong evidence suggesting that we can generally maintain a high accuracy on clean and adversarial samples simultaneously. We have to take any such idealistic results on ImageNet with a grain of salt. In most of these works, either the defense is super weak (i.e., easily attackable) or something is hidden under the carpet.", "The transformations we studied, indeed, all have some hyper-parameter that controls how lossy the transformation is, and can be used to trade off clean accuracy and adversarial accuracy. These hyper-parameters are: crop ratio for the crop-rescale transform, pixel drop rate and regularization parameter for total variation minimization, and patch size for image quilting. For instance, using a larger patch size in image quilting will remove more of the adversarial perturbation (which likely leads to higher adversarial accuracy), but also affects clean images, which deteriorates clean accuracy. 
\n\nWe selected hyper-parameters that achieve high adversarial accuracy, but one may choose to set them differently depending on the user's needs. We surmise it may be possible to achieve high clean and adversarial accuracy by ensembling predictions over multiple hyper-parameter settings, but further experimentation is needed to confirm this hypothesis.", "The several proposed transformation methods are quite effective in correctly classifying adversarial examples. For most existing works, people are trying to increase the performance on adversarial examples while maintaining the accuracy on clean examples. However, in Table 2, I found there is a huge drop in accuracy for all your methods on clean examples. \nI wonder whether you can solve this problem by tuning some hyper-parameters to balance the performance with no attack and with attack?", "Thank you for your comment! We were unaware of [1] at the time of submission. We will include it as a citation. Based on our understanding, [2] applies Carlini-Wagner's attack against a target loss that averages the prediction over multiple fixed models with random network weight dropout rather than random pixel dropout, but the two techniques are certainly related.\n\nIn regards to attacking the transformation defenses, independent of [1], we have observed that it is possible to produce adversarial examples that are invariant to crop location and scale. By randomly selecting a crop of size 135x135 each iteration to compute the loss function for CW-L2, the resulting adversarial examples can reduce the accuracy of the crop defense to 30% at an average L2-dissimilarity of 0.045. \n\nEnhancing the attack with random pixel dropping can, indeed, reduce the effectiveness of TV minimization significantly. Using a pixel dropout mask with drop probability 0.1 each iteration, CW-L2 can reduce the accuracy of the TV minimization defense to 9% at an average L2-dissimilarity of 0.06. However, we do not have a good idea of how to backpropagate through the quilting transformation, as the construction is stochastic and non-differentiable in nature. Nevertheless, section 5.5 of our paper does show that it is possible to successfully attack the quilting defense in some cases even without knowledge of this transformation; presumably because the convolutional filters reveal some information about the patch database used.\n\nThe attacks we use for the white-box setting do not have knowledge of the defense mechanism used. \n\nWe will clarify these points in the paper.", "For our experiments, we selected four attacks that we believe are representative of the large number of attacks that people have proposed. Since PGD is related to I-FGSM (the main difference between the two is the projection step after every iteration), we expect that our defenses will have similar performance against PGD. \n\nWe performed a small experiment with PGD to confirm this. We created attacks with an average L2-dissimilarity of 0.06, and find that the accuracies of our defenses against white-box I-FGSM/PGD attacks are: no defense 0%/0%, crop ensemble 44%/48%, TVM 29%/36%, and image quilting 35%/37%. These results suggest that the effectiveness of our defenses against PGD is similar to their effectiveness against I-FGSM. We will add these results to the paper.", "We agree that the terminology of white-box is ambiguous, in particular, in the context of randomized defenses (is the adversary allowed access to the random seed?). In cryptography, Kerckhoffs's principle prescribes the use of a "secret key". 
One could make the argument that the patch database used by image quilting is an implementation of such a secret key. To the best of our knowledge, there is no consensus yet in the community on what a "secret key" may and may not contain in the context of defenses against adversarial examples.\n\nIn our current paper, "white-box" refers to access to the model parameters; other parameters (random seed, patch database) are considered to be part of the secret key. We will add a section to our paper clarifying the terminology.", "1) We have performed experiments with our defenses against PGD; see our previous comment for the results of those experiments, which we will add to the paper. Our results do not suggest substantial differences in the effectiveness of our defenses between PGD and I-FGSM. \n\n2) Our proposed defenses are not intended to compete with adversarial-training-based defenses: the two defenses can be used together and are likely to be complementary. We chose ImageNet to conduct our experiment for the following two reasons:\n\n- The interest in adversarial examples mainly stems from the concern about the use of computer vision models in real-world applications such as self-driving cars and image-classification services. In these settings, the input to the model has high resolution and diverse content; ImageNet more closely resembles this scenario than MNIST or CIFAR.\n\n- Defending a model that performs classification on ImageNet is inherently more difficult than defending a MNIST or CIFAR classification model, since the model must output very diverse class labels and the model's prediction is often uncertain. Moreover, the input dimensionality for ImageNet is much higher (~150000 compared to 784 for MNIST and 3072 for CIFAR-10), which gives the attacker much more maneuverability.\n\n3) We agree that the terminology of white-box is ambiguous, in particular, in the context of randomized defenses (is the adversary allowed access to the random seed?). In cryptography, Kerckhoffs's principle prescribes the use of a "secret key". One could make the argument that the patch database used by image quilting is an implementation of such a secret key. To the best of our knowledge, there is no consensus yet in the community on what a "secret key" may and may not contain in the context of defenses against adversarial examples. In our current paper, "white-box" refers to access to the model parameters; other parameters (random seed, patch database) are considered to be part of the secret key. We will add a section to our paper clarifying the terminology.", "1) Projecting after every step would make a lot of difference if the number of iterations and the step length are high, since typically the gradient signal from outside the epsilon ball is not that useful in finding adversarial examples.\n\n2) I am wondering what kind of accuracy your defense would get on MNIST/CIFAR-10, so one could compare to vanilla adversarial training as in Madry et al. It is not clear at the moment whether it would improve on adversarial training with PGD.\n\n3) Since some of the transformations are non-differentiable, you could also have settings where the attacker knows your transformations and attacks your transformed inputs (rather than the original image) - as the other commenter says, what you study is the "grey-box" case and not truly the "white-box" case. 
I would expect that in the true "white-box" case, it is no more robust against such attacks than just adversarial training with PGD.", "Thanks for these clarifications! It might make sense to introduce a new term to characterize the threat model here, as "white-box" typically refers to full information about the defense. I think "grey-box" has been used before, although it isn't very evocative either.", "I am curious why the authors have not evaluated their defense methods against the PGD attack of Madry et al. (https://arxiv.org/abs/1706.06083), which is the strongest first-order adversary possible.", "Interesting paper! It wasn't clear to me whether you evaluated attacks (white-box or black-box) with knowledge of the defensive technique being used? For JPEG or TV-minimization for instance, this would mean back-propagating through the transformation step. Analogously, one could incorporate randomized procedures such as cropping, pixel dropout or quilting into the adversarial example generation procedure, either in a white-box setting or in a black-box setting over a locally trained model with a similar defense.\n\nSome prior works [1,2] seem to indicate that various types of transformations do actually remain vulnerable to adversaries with such tailored attacks. ([1] considered random crops and found that an attack tailored to a particular cropping mechanism was still effective. [2] looked at random pixel dropout and found that computing white-box or black-box attacks for multiple randomly chosen dropout masks generalized well to unseen random masks.)\n\n[1] Foveation-based Mechanisms Alleviate Adversarial Examples, https://arxiv.org/abs/1511.06292\n[2] Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, https://arxiv.org/abs/1705.07263" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HksQPiUVM", "SyYA9jsBz", "iclr_2018_SyJ7ClWCb", "iclr_2018_SyJ7ClWCb", "Bk1rU7YSz", "iclr_2018_SyJ7ClWCb", "B1gREqS4f", "HyulgtSVz", "iclr_2018_SyJ7ClWCb", "iclr_2018_SyJ7ClWCb", "iclr_2018_SyJ7ClWCb", "iclr_2018_SyJ7ClWCb", "HkKhk67Nf", "rkqFThmEM", "iclr_2018_SyJ7ClWCb", "S1wXIrVgM", "SJzYnEqef", "Sk47YIYlM", "SJp9ze61G", "H1mJgiqJG", "HyOUaNzkz", "HyOUaNzkz", "iclr_2018_SyJ7ClWCb", "BkDPS_tAb", "SJ67F79R-", "Hk1TeF9RZ", "ryuYj65C-", "HktFIuqC-", "H1dO8OqC-", "iclr_2018_SyJ7ClWCb", "iclr_2018_SyJ7ClWCb" ]
iclr_2018_HkwVAXyCW
Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/.
accepted-poster-papers
This paper explores what might be characterized as an adaptive form of ZoneOut. With the improvements and clarifications added to the paper during the rebuttal, the paper could be accepted.
train
[ "HkmeI2vxM", "rkKX3j7SM", "SyUH1Qjez", "HJ6Ve2MHz", "BkfgO-FgG", "BywMmCPzf", "SyThG0DMz", "HJwcGAPMG", "HJoVzADMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "UPDATE: Following the author's response, I've increased my score from 5 to 6. The revised paper includes many of the additional references that I suggested, and the author response clarified my confusion over the Charades experiments; their results are indeed close to state-of-the-art on Charades activity localization (slightly outperformed by [6]), which I had mistakenly confused with activity classification (from [5]).\n\nThe paper proposes the Skip RNN model which allows a recurrent network to selectively skip updating its hidden state for some inputs, leading to reduced computation at test time. At each timestep the model emits an update probability; if this probability is over a threshold then the next input and state update will be skipped. The use of a straight-through estimator allows the model to be trained with standard backpropagation. The number of state updates that the model learns to use can be controlled with an auxiliary loss function. Experiments are performed on a variety of tasks, demonstrating that the Skip-RNN performs as well as or better than baselines even when skipping nearly half its state updates.\n\nPros:\n- Task of reducing computation by skipping inputs is interesting\n- Model is novel and interesting\n- Experiments on multiple tasks and datasets confirm the efficacy of the method\n- Skipping behavior can be controlled via an auxiliary loss term\n- Paper is clearly written\n\nCons:\n- Missing comparison to prior work on sequential MNIST\n- Low performance on Charades dataset, no comparison to prior work\n- No comparison to prior work on IMDB Sentiment Analysis or UCF-101 activity classification\n\nThe task of reducing computation by skipping RNN inputs is interesting, and the proposed method is novel, interesting, and clearly explained. Experimental results across a variety of tasks are convincing; in all tasks the Skip-RNNs achieve their goal of performing as well as or better than equivalent non-skipping variants. The use of an auxiliary loss to control the number of state updates is interesting; since it sometimes improves performance it appears to have some regularizing effect on the model in addition to controlling the trade-off between speed and accuracy.\n\nHowever, where possible, experiments should compare directly with prior published results on these tasks; none of the experiments from the main paper or supplementary material report any numbers from any other published work.\n\nOn permuted MNIST, Table 2 could include results from [1-4]. Of particular interest is [3], which reports 98.9% accuracy with a 100-unit LSTM initialized with orthogonal and identity weight matrices; this is significantly higher than all reported results for the sequential MNIST task.\n\nFor Charades, all reported results appear significantly lower than the baseline methods reported in [5] and [6] with no explanation. All methods work on “fc7 features from the RGB stream of a two-stream CNN provided by the organizers of the [Charades] challenge”, and the best-performing method (Skip GRU) achieves 9.02 mAP. This is significantly lower than the two-stream results from [5] (11.9 mAP and 14.3 mAP) and also lower than pretrained AlexNet features averaged over 30 frames and classified with a linear SVM, which [5] reports as achieving 11.3 mAP. I don’t expect to see state-of-the-art performance on Charades; the point of the experiment is to demonstrate that Skip-RNNs perform as well as or better than their non-skipping counterparts, which it does. 
However, I am surprised at the low absolute performance of all reported results, and would appreciate it if the authors could help to clarify whether this is due to differences in experimental setup or something else.\n\nIn a similar vein, from the supplementary material, sentiment analysis on IMDB and action classification on UCF-101 are well-studied problems, but the authors do not compare with any previously published results on these tasks.\n\nThough experiments may not show state-of-the-art performance, I think that they still serve to demonstrate the utility of the Skip-RNN architecture when compared side-by-side with a similarly tuned non-skipping baseline. However, I feel that the authors should include some discussion of other published results.\n\nOn the whole, I believe that the task and method are interesting, and experiments convincingly demonstrate the utility of Skip-RNNs compared to the author’s own baselines. I will happily upgrade my rating of the paper if the authors can address my concerns over prior work in the experiments.\n\n\nReferences\n\n[1] Le et al, “A Simple Way to Initialize Recurrent Networks of Rectified Linear Units”, arXiv 2015\n[2] Arjovsky et al, “Unitary Evolution Recurrent Neural Networks”, ICML 2016\n[3] Cooijmans et al, “Recurrent Batch Normalization”, ICLR 2017\n[4] Zhang et al, “Architectural Complexity Measures of Recurrent Neural Networks”, NIPS 2016\n[5] Sigurdsson et al, “Hollywood in homes: Crowdsourcing data collection for activity understanding”, ECCV 2016\n[6] Sigurdsson et al, “Asynchronous temporal fields for action recognition”, CVPR 2017", "I appreciate the author's response and updates to the paper, and I apologize for my confusion over action classification vs action localization; this explanation makes the Charades experiments more convincing, and I've increased my score from 5 to 6 accordingly.", "The authors proposed a novel RNN model where both the input and the state update of the recurrent cells are skipped adaptively for some time steps. The proposed models are learned by imposing a soft constraint on the computational budget to encourage skipping redundant input time steps. The experiments in the paper demonstrated skip RNNs outperformed regular LSTMs and GRUs on the addition, pixel MNIST and video action recognition tasks.\n\n\n\nStrength:\n- The experimental results on the simple skip RNNs have shown a good improvement over the previous results.\n\nWeakness:\n- Although the paper shows that skip RNN worked well, I found that an appropriate baseline is lacking here. Comparable baselines, I believe, are regular LSTM/GRU whose inputs are randomly dropped out during training.\n\n- Most of the experiments in the main paper are on toy tasks with small LSTMs. I thought the main selling point of the method is the computational gain. Would it make more sense to show that on large RNNs with thousands of hidden units? After going over the additional experiments in the appendix, I find that the three results shown in the main paper seem cherry-picked, and it would be good to include more NLP tasks.", "I would like to thank the authors for their reply. The new experiments with the random dropout baseline look compelling. My only concern is that the performance of the random baseline in Table 3 is as good as the best skip GRU in terms of mAP. The latest revision has cleared up some of my concerns from the initial review. I decided to increase the review score from 5 to 6.", "This paper proposes an idea to do faster RNN inference via skipping RNN state updates. 
\nI like the idea of the paper, in particular the design which enables calculating the number of steps to skip in advance. But the experiments are not convincing enough. First, the tasks it was tested on are very simple -- 2 synthetic tasks plus 1 small-scale task. I'd like to see the idea work on larger-scale problems -- as that is where the computation/speed matters. Also, besides the number of updates reported in the tables, I think the wall-clock time for inference should also be reported, to demonstrate what the paper is trying to claim.\n\nMinor -- \nCite Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation by Yoshua Bengio, Nicholas Leonard and Aaron Courville for the straight-through estimator.", "Q: Include prior published results of the same tasks\n\nA: Following the suggestion from the reviewer, we added previously published results and their implementation context for all tasks mentioned by the reviewer: MNIST, Charades, UCF-101 and IMDB. They were added to the tables when possible, or to the discussion only otherwise (e.g. results from other works wouldn’t match the table layout for IMDB). \n\n\nQ: low performance on Charades dataset\n\nA: Regarding the performance in the temporal action localization task on Charades, we would like to highlight that we report results for the action localization task (i.e. a many-to-many task where an output is emitted for each input frame). However, the results reported in [5] belong to the action classification task (i.e. a many-to-one task where a single output is emitted for the whole video). To the best of our knowledge, the best result for a CNN+LSTM architecture like ours on the localization task is 9.60% (Table 2 in [6], results without post-processing). However, in [6] they use both streams from a Two-Stream CNN (RGB+Flow) whereas we use the RGB stream only. Although this yields a 0.58% mAP increase with respect to our best-performing model, using both streams results in approximately a 2x increase in computation (# FLOPs) and memory footprint, plus an additional pre-processing step to compute the optical flow from the input frames. We are not aware of any other work using RGB only with CNN+LSTM on this dataset and task. It is also interesting to notice that our models learn which frames need to be attended to without being given explicit motion information as input.", "Q: Wall-clock time for inference should be reported.\n\nA: Wall-clock timing is highly dependent on factors such as hardware, framework and implementation, thus making it difficult to isolate the impact of the model. This is why we originally reported the number of sequential steps (i.e. state updates) performed by each model. As an alternative to wall-clock timing, we updated the manuscript to report the number of floating point operations (FLOPs) per sequence. This measure is also independent of external factors such as hardware/software while being representative of the computational load of each model. Although we believe that wall-clock time is not very informative, we are willing to report it if the reviewer still thinks that it will improve the quality of the paper.\n\n\n\nQ: Cite paper by Bengio et al.\n\nA: The paper by Bengio et al. is cited for the ST estimator in the updated version of the manuscript.", "Q: Comparison with baselines randomly dropping inputs is missing.\n\nA: We have actually reported the random input dropping baseline for the MNIST task. 
In the revised version of the paper, we have added results of the random dropping baseline for the adding task and Charades. In all cases, the proposed model learns the best ways to skip states (instead of randomly) and demonstrates clear performance gains over the random dropping baseline. Note that here we assume that when random dropping is done, both the input and the state update operation are skipped. We do not consider the option where only the input is dropped and the state is still updated, since it does not achieve the objective of saving computation. \n\n\n\nQ: the three experiments included in the main paper seemed cherry-picked. \n\nA: We have included a large, diverse set of experiments in the Appendix, including signal frequency discrimination, sentiment analysis on IMDB movie reviews (text), and video action classification. Far from cherry-picking test results, our goal is to demonstrate the general applicability of the proposed model in various tasks involving data of different modalities (signal, text, and video). We will be happy to move any of the experiments from the Appendix to the main paper.\n", "We thank the reviewers for their valuable comments. We respond to the main concerns below and in individual replies to each reviewer. \n\nR2 and R3 \nQ: Task scale and model size too small. Should run experiments with a large number of hidden units.\n\nA:\n- SkipRNN has indeed a larger advantage over regular RNNs when the model is larger, since the computational savings of skipped states are larger. However, this does not necessarily require RNNs with thousands of hidden units, as the main bulk of computation may come from other associated components of the full architecture, e.g. the CNN encoder in video tasks. It’s important to note that when an RNN state update is skipped for a time step, all its preceding elements in the computation graph for that time step are also skipped. For example, in the case of the CNN encoder in video tasks, the computational cost of these elements is typically very significant. The updated version of the manuscript reports the actual # of FLOPs/sequence and shows how SkipRNN models can result in large savings in computation-intensive tasks such as video action localization. We estimated the # of floating point operations based on the actual operations involved in the input encoder and the RNN model. \n\n- Please also note that the size of the studied RNNs in our paper is the same as or even larger than those reported in related methods, e.g. [1, 2, 3]. The largest model in these works is composed of 2 LSTM layers with 256 units each, while we have reported results for 2 layers with 512 units each in the appendix.\nWe believe that the size of the considered tasks is also comparable to those in [1, 2, 3]. Despite some of them using larger datasets in terms of number of examples, their inputs have low dimensionality (e.g. 300-d pre-trained word embeddings) compared to the ones in some of our experiments (e.g. up to 4096-d for video tasks). \n\n\n\nUpdates to the paper:\n\n- Added FLOPs to the tables\n- Moved the description of the random skipping baseline to the beginning of the experiments section\n- Added skipping baselines for the adding task (plus discussion)\n- Added skipping baselines for Charades (plus discussion)\n- Evaluated models on Charades sampling 100 frames/video instead of 25, which should be more accurate for studying models performing different numbers of state updates.\n- Added SOTA results for recurrent models on MNIST\n- Added comparison to Sigurdsson et al. 
(CVPR 2017) for Charades\n- Cited prior work and SOTA for IMDB\n- Added SOTA results on UCF-101 (Carreira & Zisserman, 2017)\n\n\n\n\n[1] Neil et al., Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences, NIPS 2016\n[2] Yu et al., Learning to Skim Text, ACL 2017\n[3] Anonymous authors, Neural Speed Reading via Skim-RNN, ICLR 2018 submission\n" ]
[ 6, -1, 6, -1, 6, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HkwVAXyCW", "BywMmCPzf", "iclr_2018_HkwVAXyCW", "HJwcGAPMG", "iclr_2018_HkwVAXyCW", "HkmeI2vxM", "BkfgO-FgG", "SyUH1Qjez", "iclr_2018_HkwVAXyCW" ]
iclr_2018_rkPLzgZAZ
Modular Continual Learning in a Unified Visual Environment
A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency.
accepted-poster-papers
Important problem (modular continual RL) and novel contributions. The initial submission was judged to be a little dense and hard to read, but the authors have been responsive in addressing the reviews and updating the paper. I support accepting this paper.
test
[ "rJHuDgAlz", "HkxSRZcxM", "BJQgR1aef", "BJPN4R3WM", "H1qarChZG", "ByXcBCnbf", "BJVOBA3Wz", "Bkht4Anbf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors propose a kind of framework for learning to solve elemental tasks and then learning task switching in a multitask scenario. The individual tasks are inspired by a number of psychological tasks. Specifically, the authors use a pretrained convnet as a raw state-space encoding together with previous actions and learn through stochastic optimization to predict future rewards for different actions. These constitute encapsulated modules for individual tasks. The authors test a number of different ways to construct the state representations as inputs to these modules and report results from extensive simulations evaluating them. The policy is obtained through a heuristic, selecting actions with the highest reward prediction variance across multiple steps of lookahead. Finally, two variants of networks are presented and evaluated, which have the purpose of selecting the appropriate module when a signal is provided to the system that a new task is starting.\n\nI find it particularly difficult to evaluate this manuscript. The presented simulation results are based on the described system, which is very complex and contains several non-standard components and heuristic algorithms. \n\nIt would be good to motivate the action selection a bit further. E.g., the authors state that actions are sampled proportionally to the reward predictions and assume properties that are not necessarily intuitive, e.g. that rewards a few steps in the future can or should be equated with action values. It is also not clear under which conditions the proposed sampling of actions and the voting result in reward maximization. No statements are made on this other than that empirically this worked best. \n\nIs the integral in eq. 1 appropriate or should it be a finite sum?\n\nIt seems that the narrowing of the spatial distribution relative to an \\epsilon -greedy policy would highly depend on the actual reward landscape, no? Is the maximum variance as well suited for exploration as for exploitation and reward maximization?\n\nWhat I find a bit worrisome is the ease with which the manuscript switches from “inspiration” from psychology and neuroscience, to the plausibility of the proposed algorithms, to reinterpreting aimpoints as “salience” and feature extraction as “physical structure”. This necessarily introduces a number of \n\nOverall, I am not sure what I have learned from this paper. Is this about learning psychological tasks? New exploration policies? Arbitration in mixtures of experts? Or is the goal to engineer a network that can solve tasks that cannot be solved otherwise? I am a bit lost.\n\n\nMinor points:\n“can from”", "The paper comprises several ideas to study the continual learning problem. First, they show an ad-hoc designed environment, namely the Touchstream environment, in which both inputs and actions are represented in a huge space: as it happens with humans – for example when they are using a touch screen – the resolution of the input space, i.e. the images, is at least as big as the resolution of the action space, i.e. where you click on the screen. This environment introduces the interesting problem of a direct mapping between inputs and actions. Second, they introduce an algorithm to solve this mapping problem in the Touchstream space. Specifically, the ReMaP algorithm learns to solve typical neuroscience tasks by optimizing a computational module that facilitates the mapping in this space. 
The Early Bottleneck Multiplicative Symmetric (EMS) module extends the types of computation you might need to solve the tasks in the Touchstream space. Third, the authors introduce another module to learn how to switch from task to task in a dynamical way.\nThe main concern with this paper is about its length. While the conference does not provide any limits in terms of number of pages, the 13 pages for the main text plus another 8 for the supplementary material is probably too much. I am wondering if the paper just contains too much information for a single conference publication. \nAs a consequence, both the learning of the single tasks and the task switching experiments could have been addressed in further detail. In the case of single-task learning, the tasks are relatively simple, where k_beta = 1 (memory) and k_f = 2 (prediction) are sufficient to solve the tasks. It would have been interesting to evaluate the algorithm on more complex tasks, and to see how these parameters affect the learning. In the case of task switching, a more interesting question would be how the network learns a sequence of tasks (more than one switch). \nOverall, the work is interesting, well described and the results are consistent. \nSome questions:\n- the paper starts with clear inspiration from Neuroscience, but nothing has been said on the biological plausibility of the ReMaP, EMS and the recurrent neural voting;\n- animals usually learn in continuous time, thus such tasks usually involve a time component, while this work is designed in a time-step framework; could the authors comment on that? (starting from Doya 2000)\n- specifically, the MTS and the Localization tasks involve working memory, thus a comparison with other working-memory reinforcement learning methods would make more sense than different degrees of ablated modules. (for example Bakker 2002)\n\nMinor:\n- Fig 6 does not have letters\n- TouchStream is sometimes written as Touchstream\n\nRef.\nDoya, K. (2000). Reinforcement learning in continuous time and space. Neural Computation, 12(1), 219–245.\nBakker, B. (2002). Reinforcement Learning with Long Short-Term Memory. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14 (pp. 1475–1482). MIT Press.\n\n", "Reading this paper feels like reading at least two closely-related papers compressed into one, with overflow into the appendix (e.g. one about the EMS module, one about the recurrent voting, etc).\n\nThere were so many aspects/components that I am not entirely confident I fully understood how they all work together, and in fact I am pretty confident there was at least some part of this that I definitely did not understand. Reading it 5-20 more times would most likely help.\n\nFor example, consider the opening example of Section 3. In principle, this kind of example is great, and more of these would be very useful in this paper. This particular one raises a few questions:\n-Eq 5 makes it so that $(W \\Psi)$ and $(a_x)$ need to be positive or negative together. Why use ReLu's here at all? Why not just $sign( (W \\Psi) a_x) $? Multiplying them will do the same thing, and is much simpler. I am probably missing something here, would like to know what it is... (Or, if the point of the artificial complexity is to give an example of the 3 basic principles, then perhaps point this out, or point out why the simpler version I just suggested would not scale up, etc)\n-what exactly, in this example, does $\\Psi$ correspond to? 
In prev discussion, $\\Psi$ is always written with subscripts to denote state history (I believe), so this is an opportunity to explain what is different here. \n-Nitpick: why is a vector written as $W$? (or rather, what is the point of bold vs non-bold here?)\n-a non-bold version of $Psi$, a few lines below, seems to correspond to the 4096 features of VGG's FC6, so I am still not sure what the bold version represents\n\n-The defs/eqns at the beginning of section 3.1 (Sc, CReLu, etc) were slightly hard to follow and I wonder whether there were any typos, e.g. was CReS meant to refer directly to Sc, but used the notation ${ReLu}^2$ instead? \n\nEach of these on its own would be easier to overlook, but there is a compounding effect here for me, as a reader, such that by further on in the paper, I am rather confused.\n\nI also wonder whether any of the elements described, have more \"standard\" interpretations/notations. For example, my slight confusion propagated further: after above point, I then did not have a clear intuition about $l_i$ in the EMS module. I get that symmetry has been built in, e.g. by the definitions of CReS and CReLu, etc, but I still don't see how it all works together, e.g. are late bottleneck architectures *exactly* the same as MLPs, but where inputs have simply been symmetrized, squared, etc? Nor do I have intuition about multiplicative symmetric interactions between visual features and actions, although I do get the sense that if I were to spend several hours implementing/writing out toy examples, it would clarify it significantly (in fact, I wouldn't be too surprised if it turns out to be fairly straightforward, as in my above comment indicating a seeming equivalence to simply multiplying two terms and taking the resulting sign). If the paper didn't need to be quite as dense, then I would suggest providing more elucidation for the reader, either with intuitions or examples or clearer relationships to more familiar formulations.\n\nLater, I did find that some of the info I *needed* in order to understand the results (e.g. exactly what is meant by a \"symmetry ablation\", how was that implemented?) was in fact in the appendices (of which there are over 8 pages).\n\nI do wonder how sensitive the performance of the overall system is to some of the details, like, e.g. the low-temp Boltzmann sampling rather than identity function, as described at the end of S2.\n\nMy confidence in this review is somewhere between 2 and 3.\n\nThe problem is an interesting one, the overall approach makes sense, it is clear the authors have done a very substantial amount of work, and very diligently so (well-done!), some of the ideas are interesting and seem creative, but I am not sure I understand the glue of the details, and that might be very important here in order to assess it effectively.", "First and foremost, we wish to thank all of the reviewers for taking the time to read and comment on the paper. Across all three reviewers, it appears that the main concern for the submission is its clarity of explanation. Indeed, following submission we realized that there were several places in the paper that could benefit from rewording to improve readability and reader comprehension. As some of the reviewers noted, since the system contains many complex and interacting components, we needed to strike a balance between brevity and clarity. 
We have revised the manuscript in an effort to strike this balance better than in the original submission.\n\nMost of the reviewer comments focused on sections 2 and 3, and in these sections we've made a series of relatively minor changes to improve clarity at the sentence and wording level. We have attempted to go through the individual comments of the reviewers and use them to systematically improve the paper.\n\nWe also made more substantial changes in the exposition of the Neural Voting controller in section 4.1. While none of the reviewers commented on this section directly, we felt in retrospect that it was much less clear than we would have liked. We have improved the description of the mathematical formalism to use standard notation whenever possible. In addition, we added a diagram (Fig. 6) to illustrate how the voting controller works. \n\nFrom a substantive point of view, we made one small addition to the paper. Specifically, we added one control experiment for the switching section (Figure 7: Dynamic Neural Voting quickly corrects for “no-switch” switches). We added this control to ensure that the controller rejects a new module if it is unnecessary (i.e. the case where a 'new' task and a previously learned task are identical).\n\nIf there are any remaining comments, questions, or confusions that we haven't addressed either in our revised manuscript or the responses below, we would be happy to respond to those throughout the course of the rebuttal period.", "> The main concern with this paper is about its length. While the conference does not provide any limits in terms of number of pages, the 13 pages for the main text plus another 8 for the supplementary material is probably too much. I am wondering if the paper just contains too much information for a single conference publication. \n\nYes, we were originally thinking about breaking the paper into two -- modules and switching. However, the results and conclusions drawn from the EMS/module experiments are useful for understanding the switching section and its associated results. Additionally, due to the novelty of the setup (TouchStream and ReMaP), there would have been substantial overlap between submissions if we were to break it up, due to needing to explain each of these components to understand the system as a whole. Overall, these two points convinced us that it was important to create one combined paper with all components elaborated upon, albeit at the expense of having a longer submission.\n\n\n> As a consequence, both the learning of the single tasks and the task switching experiments could have been addressed in further detail. In the case of single-task learning, the tasks are relatively simple, where k_beta = 1 (memory) and k_f = 2 (prediction) are sufficient to solve the tasks. It would have been interesting to evaluate the algorithm on more complex tasks, and to see how these parameters affect the learning. In the case of task switching, a more interesting question would be how the network learns a sequence of tasks (more than one switch). \n\nAll of these are very good points -- as is shown in the paper, it is challenging to solve even these tasks and task switching scenarios. Going beyond the shown $k_b$/$k_f$ conditions to more complex temporally-extended tasks and single-task switching scenarios are certainly two of our next steps. 
We have tried to make it clear in the conclusion that we view these concerns as not completely solved in the present study.\n\n> the paper starts with clear inspiration from Neuroscience, but nothing has been said on the biological plausibility of the ReMaP, EMS and the recurrent neural voting;\n\nWe ourselves are not yet sure as to whether any of these are indeed biologically plausible mechanisms and thus we refrained from making any claims. It was our primary concern to first design an agent that responds in a behaviorally realistic way to the presented tasks, at least to some extent. Our agent is able to efficiently solve various challenging visual tasks using a touchscreen, and is able to quickly transfer knowledge between what could be two qualitatively distinct tasks. That being said, we are working with experimental collaborators in the neuroscience community to see whether we can indeed find evidence (or lack thereof) for any of these mechanisms or their neural correlates. \n\n> animals usually learn in continuous time, thus such tasks usually involve a time component, while this work is designed in a time-step framework; could the authors comment on that? (starting from Doya 2000)\n\nWe do not feel particularly strongly about the discrete-time framework. Our timesteps were meant to reflect the trial-by-trial structure of typical animal experiments at the moment of decision making. As can be seen in the results, even formulating this problem inside this domain happens to present significant modeling challenges. However, creating a model that functions in continuous time is a good future direction, as there are things such as attention and feedback that are not naturally phrased in the discrete setting.\n\n> specifically, the MTS and the Localization tasks involve working memory, thus a comparison with other working-memory reinforcement learning methods would make more sense than different degrees of ablated modules. (for example Bakker 2002)\n\nThe reviewer is correct in implying that the EMS module is not in itself a perfect solution to the problem of working memory (mainly due to fixed $k_b$). Indeed, one of our current focuses is to extend the EMS module motif with more sophisticated memory components and recurrent structures. \n", "> The defs/eqns at the beginning of section 3.1 (Sc, CReLu, etc) were slightly hard to follow and I wonder whether there were any typos, e.g. was CReS meant to refer directly to Sc, but used the notation ${ReLu}^2$ instead? \n\nWe assume the reviewer was referring to the squaring nonlinearity \"Sq\" in their comment when they said \"Sc\" (if not, please correct us). We do not believe that there are any typos in the nonlinearity definitions (although we realized that CReS was defined on the right side of the equation, which is inconsistent with the Sq & CReLu definitions). CReS as stated is the composition of Sq and CReLu, such that CReS(x) = Sq(CReLu(x)), which concatenates the square of both CReLu components to the CReLu function itself (a short code sketch of these operators follows this review thread). In the revised manuscript, we have made the CReS definition consistent with Sq/CReLu such that the operator appears on the left side of its definition.\n\n> Each of these on its own would be easier to overlook, but there is a compounding effect here for me, as a reader, such that further on in the paper, I am rather confused.\n\nWe sincerely apologize for any lack of clarity and confusion incurred by the reader. We have put substantial effort into the new revision to correct this. 
And please feel free to tell us about anything else that you find unclear so that we can improve it.\n\n> I also wonder whether any of the elements described have more \"standard\" interpretations/notations. For example, my slight confusion propagated further: after the above point, I then did not have a clear intuition about $l_i$ in the EMS module. I get that symmetry has been built in, e.g. by the definitions of CReS and CReLu, etc, but I still don't see how it all works together, e.g. are late bottleneck architectures *exactly* the same as MLPs, but where inputs have simply been symmetrized, squared, etc? Nor do I have intuition about multiplicative symmetric interactions between visual features and actions, although I do get the sense that if I were to spend several hours implementing/writing out toy examples, it would clarify it significantly (in fact, I wouldn't be too surprised if it turns out to be fairly straightforward, as in my above comment indicating a seeming equivalence to simply multiplying two terms and taking the resulting sign). If the paper didn't need to be quite as dense, then I would suggest providing more elucidation for the reader, either with intuitions or examples or clearer relationships to more familiar formulations.\n\nAs the reviewer stated, a point that we took to heart in the revision was making sure that even though the material needed to be presented concisely, it did not sacrifice clarity. This involved restating certain components to be more intuitive, e.g. how the intuition-building example relates to the EMS module (see end of section 3). To answer this question specifically: Yes, the Late-bottleneck architecture is the same as a standard MLP, where actions and visual features are directly concatenated as inputs to the first layer of the module (such that the inputs are $\\Psi \\oplus a$) without any additional preprocessing (see the last paragraph of section 3.1). Any multiplications and symmetries exist only as a result of the various hidden-layer nonlinearities used in the study. In the main text we show only a \"fully-ablated\" late bottleneck which uses only ReLu nonlinearities, but in the Supplement we also show the case for a late bottleneck that uses CReLu and hence preserves the symmetry. \n\nAlthough some of the tasks we present do indeed have analytical solutions of the form of eq. 5, the main point we wanted to make was that the three concepts that arise from this example can be generalized across tasks.\n\n> Later, I did find that some of the info I *needed* in order to understand the results (e.g. exactly what is meant by a \"symmetry ablation\", how was that implemented?) was in fact in the appendices (of which there are over 8 pages).\n\nYes, in the revision process we did catch the fact that it wasn't entirely clear what we meant by the symmetry ablation (replacing CReLu with ReLu, or CReS with just Sq(ReLu), for the main paper, or other standard nonlinearities in the Supplement). This has been clarified inside paragraph two of the \"Efficiency of the EMS Module\" subsection of 3.2.\n\n> I do wonder how sensitive the performance of the overall system is to some of the details, like, e.g. the low-temp Boltzmann sampling rather than identity function, as described at the end of S2.\n\nWe have run experiments that indicate that the final performance of any one module on any one task is only somewhat sensitive to changes in the ReMaP sampling policy. 
However, the rank order of results between modules does not change for any task investigated.", "> There were so many aspects/components that I am not entirely confident I fully understood how they all work together, and in fact I am pretty confident there was at least some part of this that I definitely did not understand. Reading it 5-20 more times would most likely help.\n\nWe greatly appreciate the effort that the reviewer took in reading the paper and providing feedback, though we recognize that it is solely our responsibility to present the information as clearly and intuitively as possible. In our revision, we have tried to increase readability and clarity such that the paper can be better understood from a single reading alone.\n\n> For example, consider the opening example of Section 3. In principle, this kind of example is great, and more of these would be very useful in this paper. This particular one raises a few questions:\n- Eq 5 makes it so that $(W \\Psi)$ and $(a_x)$ need to be positive or negative together. Why use ReLu's here at all? Why not just $sign((W \\Psi)a_x)$? Multiplying them will do the same thing, and is much simpler. I am probably missing something here, would like to know what it is... (Or, if the point of the artificial complexity is to give an example of the 3 basic principles, then perhaps point this out, or point out why the simpler version I just suggested would not scale up, etc)\n\nThe reviewer is correct in the assumption that the additional complexity is there to show that, starting from a rather specific example, there exist several general concepts which might prove useful in the TouchStream environment for many tasks. As the reviewer implies, it may be the case that there are many equivalent functional forms that can solve the binary SR task. However, by phrasing the solution in terms of standard neural network functions such as an affine transformation followed by a ReLu operation, we found a solution that suggested that a learnable and generic neural network architecture exists as well. This was an important step, because we sought a single module architecture that generalized across all tasks in the TouchStream -- including those without analytical solutions, such as MS-COCO. This particular solution happened to display three properties which we then generalized through the CReS nonlinearity and an early bottleneck in the EMS module. \n\nIn addition, we have replaced the sign function with the Heaviside function in eq. 5. This does not actually change the value of the equation at all, because the domain is already restricted to nonnegative values, but it makes the mathematical intent clearer. \n\n> what exactly, in this example, does $\\Psi$ correspond to? In prev discussion, $\\Psi$ is always written with subscripts to denote state history (I believe), so this is an opportunity to explain what is different here. \n\nIn this example, it was supposed to refer to the visual encoding at the current timestep. We apologize for any confusion, and have added a time subscript to denote this (see eq. 5).\n\n> Nitpick: why is a vector written as $W$? (or rather, what is the point of bold vs non-bold here?)\n\nGood question. W is intended to represent a generic weight matrix. However, for the purpose of this example, we wanted the visual bottleneck ($W$$\\Psi$) to result in a single value indicating which of the two classes is being observed. Technically this means that W is not precisely a vector but a 1 × |$\\Psi$| matrix. This has been corrected below eq. 5.\n\n> a non-bold version of $Psi$, a few lines below, seems to correspond to the 4096 features of VGG's FC6, so I am still not sure what the bold version represents\n\nThis indeed corresponded to the VGG FC6 feature vector at the current timestep. We have revised for consistency where it appeared in the bulleted \"3-general principles\" of section 3 and the EMS definition of 3.1.", "> It would be good to motivate the action selection a bit further. E.g., the authors state that actions are sampled proportionally to the reward predictions and assert properties that are not necessarily intuitive, e.g. that rewards a few steps in the future should be equated to action values. It is also not clear under which conditions the proposed sampling of actions and the voting results in reward maximization. No statements are made on this other than that empirically this worked best. \n\nThe motivation behind the action selection strategy was one of the points we wished to further clarify in the revision. We agree that we do not offer any theoretical guarantees on the conditions that lead to optimal solutions. Such a theory might be possible, and we believe that in principle it is highly correlated to the geometry of the reward maps. Moving forward, it is certainly on our to-do list to formally prove such a theory. As it stands, the motivation is mostly a heuristic based both on intuition and what works well in practice. However, in our revised submission, we have attempted to provide a clearer intuition as to why it seems to work well in the presented study (see the paragraph in section 2.1 beginning \"The sampling procedure...\"; a toy sketch of the selection heuristic also follows this review record). \n\n> It seems that the narrowing of the spatial distribution relative to an \\epsilon -greedy policy would highly depend on the actual reward landscape, no? Is the maximum variance as well suited for exploration as for exploitation and reward maximization?\n\nThis is an interesting question, and is related to the previous comment. Indeed, the reviewer is probably correct in the assumption that the utility of any action selection strategy is highly dependent upon the true reward function. \nOur claim is that realistic problems in the physical world usually have similar spatial reward structure to those presented here in that they are typically locally homogeneous. In such cases, an epsilon-greedy policy might focus on only a single point-estimate rather than discovering the entire local physical structure. The revision attempts to clarify how the Maximum-Variance policy works under such conditions, and why it is beneficial for balancing between exploration (choosing something with high uncertainty/variance to learn more about it) and exploitation (choosing something that has a large localized peak in an otherwise uniformly low reward map).\n\n> Is the integral in eq. 1 appropriate or should it be a finite sum?\n\nWe formalized the ReMaP section to reflect the general case of a continuous action and reward space, but the reviewer is correct that in any particular implementation the integral must indeed be approximated by a finite sum.\n\n> What I find a bit worrisome is the ease with which the manuscript switches between “inspiration” from psychology and neuroscience to plausibility of proposing algorithms to reinterpreting aimpoints as “salience” and feature extraction as “physical structure”. This necessarily introduces a number of \n\nIt looks like there is a missing sentence or two for this question. 
If the reviewer would like to clarify, we're happy to answer afterwards.\n\n> Overall, I am not sure what I have learned with this paper. Is this about learning psychological tasks? New exploration policies? Arbitration in mixtures of experts? Or is the goal to engineer a network that can solve tasks that cannot be solved otherwise? I am a bit lost.\n\nIt is in fact about all of these things: our goal was to build and describe new exploration policies and architectural structures (modules and controllers) so as to engineer an agent that can solve and switch between behaviorally interesting tasks that otherwise would be difficult to solve efficiently or indeed at all. This naturally involved several things which the reviewer mentioned in their synopsis of the work. First we needed to develop a framework (the TouchStream) to allow for a common phrasing of many interesting and behaviorally relevant visual tasks. Next we needed an algorithm (ReMaP) that could explore and solve a large action space such as the TouchStream. Using ReMaP, we still needed to find a neural readout motif (the EMS module) that could learn tasks as efficiently as possible in the TouchStream while remaining lightweight. Finally, we needed a method of reusing our knowledge of old tasks (the Voting Controller) for solving new tasks which might be quite qualitatively distinct. The end result is what we believe to be a coherent set of results, but we agree that it is a lot for one paper.\n\n" ]
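A note on the CReLu/CReS operators discussed in the thread above: below is a minimal NumPy sketch of the two nonlinearities as the authors describe them (CReLu concatenates ReLu(x) with ReLu(-x); CReS concatenates CReLu(x) with its elementwise square). This is only an illustration of the operators themselves, taking the authors' verbal description at face value; how they are wired into the EMS module is specified in the paper's Section 3.1, not here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def crelu(x):
    # Concatenated ReLu: keeps both signs of the pre-activation, so the
    # representation is symmetric under x -> -x up to a permutation.
    return np.concatenate([relu(x), relu(-x)], axis=-1)

def cres(x):
    # CReS as described in the authors' response: concatenate CReLu(x)
    # with the elementwise square of CReLu(x).
    c = crelu(x)
    return np.concatenate([c, c ** 2], axis=-1)

x = np.array([-1.5, 0.0, 2.0])
print(crelu(x))  # [0.  0.  2.  1.5 0.  0. ]
print(cres(x))   # crelu(x) followed by its elementwise squares
```

The squaring term is what supplies multiplicative interactions once an affine layer has mixed visual and action inputs, and the sketch makes the sign symmetry the reviewer asked about easy to verify numerically.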
[ 6, 8, 8, -1, -1, -1, -1, -1 ]
[ 2, 3, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkPLzgZAZ", "iclr_2018_rkPLzgZAZ", "iclr_2018_rkPLzgZAZ", "iclr_2018_rkPLzgZAZ", "HkxSRZcxM", "BJQgR1aef", "BJQgR1aef", "rJHuDgAlz" ]
iclr_2018_BydLzGb0Z
Twin Networks: Matching the Future for Sequence Generation
We propose a simple technique for encouraging generative RNNs to plan ahead. We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and plays no role during sampling or inference. We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states). We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task.
accepted-poster-papers
Simple idea (which is a positive) to regularize RNNs, broad applicability, well-written paper. Initially, there were concerns about comparisons, but the authors have provided additional experiments that have made the paper stronger.
train
[ "HyciX9dxM", "B1Fe0Zqxz", "Hy2zdEuNz", "HJzYCPDlf", "B1VAfclVG", "HyCbe867z", "r1SMZBT7z", "BJ2TeHpXz", "rJob2Ep7z", "B1_9I7DzG", "BJMXIjbGM", "rk6_yF2-f", "r11N5P2Zf" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "public", "author", "author", "author", "author", "author", "public", "author", "author" ]
[ "\n1) Summary\nThis paper proposes a recurrent neural network (RNN) training formulation for encouraging RNN the hidden representations to contain information useful for predicting future timesteps reliably. The authors propose to train a forward and backward RNN in parallel. The forward RNN predicts forward in time and the backward RNN predicts backwards in time. While the forward RNN is trained to predict the next timestep, its hidden representation is forced to be similar to the representation of the backward RNN in the same optimization step. In experiments, it is shown that the proposed method improves training speed in terms of number of training iterations, achieves 0.8 CIDEr points improvement over baselines using the proposed training, and also achieves improved performance for the task of speech recognition.\n\n\n2) Pros:\n+ Novel idea that makes sense for learning a more robust representation for predicting the future and prevent only local temporal correlations learned.\n+ Informative analysis for clearly identifying the strengths of the proposed method and where it is failing to perform as expected.\n+ Improved performance in speech recognition task.\n+ The idea is clearly explained and well motivated.\n\n\n3) Cons:\nImage captioning experiment:\nIn the experimental section, there is an image captioning result in which the proposed method is used on top of two baselines. This experiment shows improvement over such baselines, however, the performance is still worse compared against baselines such as Lu et al, 2017 and Yao et al, 2016. It would be optimal if the authors can use their training method on such baselines and show improved performance, or explain why this cannot be done.\n\n\nUnconditioned generation experiments:\nIn these experiments, sequential pixel-by-pixel MNIST generation is performed in which the proposed method did not help. Because of this, two conditioned set ups are performed: 1) 25% of pixels are given before generation, and 2) 75% of pixels are given before generation. The proposed method performs similar to the baseline in the 25% case, and better than the baseline in the 75% case. For completeness, and to come to a stronger conclusion on how much uncertainty really affects the proposed method, this experiment needs a case in which 50% of the pixels are given. Observing 25% of the pixels gives almost no information about the identity of the digit and it makes sense that it’s hard to encode the future, however, 50% of the pixels give a good idea of what the digit identity is. If the authors believe that the 50% case is not necessary, please feel free to explain why.\n\n\nAdditional comments:\nThe method is shown to converge faster compared to the baselines, however, it is possible that the baseline may finish training faster (the authors do acknowledge the additional computation needed in the backward RNN).\nIt would be informative for the research community to see the relationship of training time (how long it takes in hours) versus how fast it learns (iterations taken to learn).\n\nExperiments on RL planning tasks would be interesting to see (Maybe on a simple/predictable environment).\n\n\n4) Conclusion\nThe paper proposes a method for training RNN architectures to better model the future in its internal state supervised by another RNN modeling the future in reverse. Correctly modeling the future is very important for tasks that require making decisions of what to do in the future based on what we predict from the past. 
The proposed method presents a possible way of better modeling the future, however, some of the results do not clearly back up the claim yet. The given score will improve if the authors are able to address the stated issues.\n\n\nPOST REBUTTAL RESPONSE:\nThe authors have addressed the comments on the MNIST experiments and show better results, however, as far as I can see, they did not address my concern about the comparisons on the image captioning experiment. In the image captioning experiment the authors choose two networks (Show & Tell and Soft attention) that they improve using the proposed method and that end up performing similarly to the second best baseline (Yao et al. 2016) based on Table 3 and their response. I asked the authors to apply their method on the best performing baselines (i.e. Yao et al. 2016 or Liu et al. 2017) or explain why this cannot be done (maybe my request was not clearly stated). Applying the proposed method to the strong baselines would highlight the author's claims more strongly than just applying it to the average-performing chosen baselines. This request was not addressed and instead the authors just improved the average-performing baselines in Table 3 to meet the best baselines. Given that the authors were able to improve the results on sequential MNIST and improve the average baselines, my rating improves one point. However, I still have concerns about this method not being shown to improve the best methods presented in Table 3, which would give a more solid result. My rating changes to marginally above threshold for acceptance.", "** post-rebuttal revision **\n\nI thank the authors for running the baseline experiments, especially for running the TwinNet to learn an agreement between two RNNs going forward in time. This raises my confidence that what is reported is better than mere distillation of an ensemble of RNNs. I am raising the score.\n\n** original review **\n\n\nThe paper presents a way to regularize a sequence generator by making the hidden states also predict the hidden states of an RNN working backward.\n\nApplied to sequence-to-sequence networks, the approach requires training one encoder and two separate decoders that generate the target sequence in forward and reversed orders. A penalty term is added that forces an agreement between the hidden states of the two decoders. During model evaluation only the forward decoder is used, with the backward operating decoder discarded. The method can be interpreted to generalize other recurrent network regularizers, such as putting an L2 loss on the hidden states.\n\nExperiments indicate that the approach is most successful when the regularized RNNs are conditional generators, which emit sequences of low entropy, such as decoders of a seq2seq speech recognition network. Negative results were reported when the proposed regularization technique was applied to language models, whose output distribution has more entropy.\n\nThe proposed regularization is evaluated with positive results on a speech recognition task and on an image captioning task, and with negative results (no improvement, but also no deterioration) on language modeling and sequential MNIST digit generation tasks.\n\nI have one question about baselines: is the proposed approach better than training two forward generators and forcing an agreement between them (in the spirit of the concurrent ICLR submission https://openreview.net/forum?id=rkr1UDeC-)? \n\nAlso, would using the backward RNN, e.g. for rescoring, bring another advantage? 
In other words, what is (and is there) a gap between an ensemble of a forward and a backward RNN and the forward RNN only, but trained with the state-matching penalty?\n\nQuality:\nThe proposed approach is well motivated and the experiments show the limits of the technique's range of applicability.\n\nClarity:\nThe paper is clearly written.\n\nOriginality:\nThe presented idea seems novel.\n\nSignificance:\nThe method may prove to be useful to regularize recurrent networks, however I would like to see a comparison with ensemble methods. Also, as the authors note, the method seems to be limited to conditional sequence generators.\n\nPros and cons:\nPros: the method is simple to implement, the paper lists for what kind of datasets it can be used.\nCons: the method needs to be compared with typical ensembles of models going only forward in time, it may turn out that using the backward RNN is not necessary\n", "We’d like to thank you again for your review and feedback! We have updated our paper with your suggestions (including significantly improved Image Captioning results, comparisons to the Yao et al. 2016 model and training time). With more thorough experimentation, we have also shown that Twinnet works for unconditioned generation (as well as conditioned generation). \n\nWould you have any other questions regarding the rebuttal, especially with regard to the image captioning experiments and unconditioned generation?", "Twin Networks: Using the Future as a Regularizer\n\n** PAPER SUMMARY **\n\nThe authors propose to regularize RNNs for sequence prediction by forcing states of the main forward RNN to match the state of a secondary backward RNN. Both RNNs are trained jointly and only the forward model is used at test time. Experiments on conditional generation (speech recognition, image captioning), and unconditional generation (MNIST pixel RNN, language models) show the effectiveness of the regularizer.\n\n** REVIEW SUMMARY **\n\nThe paper reads well, has sufficient references. The idea is simple and well explained. Positive empirical results support the proposed regularizer.\n\n** DETAILED REVIEW **\n\nOverall, this is a good paper. I have a few suggestions along the text but nothing major.\n\nIn related work, I would cite co-training approaches. In effect, you have two views of a point in time, its past and its future, and you force these two views to agree, see (Blum and Mitchell, 1998) or Xu, Chang, Dacheng Tao, and Chao Xu. \"A survey on multi-view learning.\" arXiv preprint arXiv:1304.5634 (2013). I would also relate your work to distillation/model compression which tries to get one network to behave like another. On that point, is it important to train the forward and backward network jointly or could the backward network be pre-trained? \n\nIn section 2, it is not obvious to me that the regularizer (4) would not be ignored in the absence of regularization on the output matrix. I mean, the regularizer could push h^b to small norm, compensating with higher norm for the output word embeddings. Could you comment why this would not happen?\n\nIn Section 4.2, you need to refer to Table 2 in the text. You also need to define the evaluation metrics used. In this section, why are you not reporting the results from the original Show&Tell paper? How does your implementation compare to the original work?\n\nOn unconditional generation, your hypothesis on uncertainty is interesting and could be tested. You could inject uncertainty in the captioning task for instance, e.g. 
consider multiple versions of each word, e.g. dogA, dogB, dogC, which are alternately used instead of dog with predefined substitution rates. Would your regularizer still be helpful there? At which point would it break?", "Thank you for addressing our concerns and providing feedback.\n\nWe will check the implementation details, run the experiments again and update the values in the report.", "We thank the reviewer for the feedback and comments.\n\nQ: “3) Cons: Image captioning experiment:\nIn the experimental section, there is an image captioning result in which the proposed method is used on top of two baselines. ”\n\nA: We acknowledge this, and we now have significant improvements on our image captioning experiments, which we also summarized in the comments to all reviewers.\nWe ran SAT models with Resnet 152 features. SAT with Twinnet achieved performance similar to Yao et al. (2016). SAT with Twinnet vs Yao et al. (2016): B1: 73.8 (vs 73.0 Yao), B2: 56.9 (vs 56.5 Yao), B3: 42.0 (vs 42.9 Yao), B4: 30.6 (vs 32.5 Yao), Meteor: 25.2 (vs 25.1 Yao), Cider: 97.3 (vs 98.6 Yao)\nWe have significant improvements on ST and SAT with Resnet 101 features compared to the baseline.\n\nQ: “Observing 25% of the pixels gives almost no information about the identity of the digit and it makes sense that it’s hard to encode the future, however, 50% of the pixels give a good idea of what the digit identity is. If the authors believe that the 50% case is not necessary, please feel free to explain why.”\n\nA: We have run more thorough examinations, with larger regularization hyperparameter values, for the unconditioned generation. We now have consistent improvements on both conditioned and unconditioned generation tasks. Please see our comment to all reviewers.\n\nQ: “It would be informative for the research community to see the relationship of training time (how long it takes in hours) versus how fast it learns (iterations taken to learn).”\n\nA: We are currently running the forward and backward RNN consecutively, therefore training Twinnet takes around twice the amount of time as training the forward model alone. We measured the batch time: running the SAT baseline on Resnet 152 features takes 0.181s/minibatch, and SAT with TwinNet takes 0.378s/minibatch. Both experiments are run on TitanXP GPUs. We also measured convergence. For the ASR task, the convergence is the same in terms of number of epochs, as can be seen in the learning curve in the paper. For image captioning, some TwinNet models converge faster, while others have a convergence rate similar to the baseline.\n\nQ: “Experiments on RL planning tasks would be interesting to see (Maybe on a simple/predictable environment).”\n\nA: This is a very nice idea! We could see this being used for model-based RL (planning). However, we feel that this task deserves to be a separate paper on its own. It would require some amount of investigation to understand how forward and backward would interact in an RL setting. It would indeed be very interesting future work to see how Twinnet could be used in this setting.", "We thank the reviewer for the positive feedback and comments! \n\nQ: “On that point, is it important to train the forward and backward network jointly or could the backward network be pre-trained?”\n\nA: As the gradient of the regularization term is not backpropagated through the backward network, the backward model can indeed be pre-trained. 
\n\nQ: In section 2, it is not obvious to me that the regularizer (4) would not be ignored in the absence of regularization on the output matrix. I mean, the regularizer could push h^b to small norm, compensating with higher norm for the output word embeddings. Could you comment why this would not happen?\n\nA: The L2 cost to match the forward and backward states (4) is not backpropagated to the backward model, i.e. the hidden states of the backward h^b are not optimized with respect to the twin cost. Therefore, the backward hidden states may be pushed to a small norm only if it’s beneficial for the reverse language modeling objective. (A minimal sketch of this cost, with the stop-gradient, follows this review thread.)\n\nQ: In this section, why are you not reporting the results from the original Show&Tell paper? How does your implementation compare to the original work?\n\nA: The original ShowTell uses the Inception v3 network for the feature extraction. Therefore the performance is not comparable to our baseline trained with features extracted with ResNet. This result is added to the table now.\n\nQ: dogA, dogB, dogC, which are alternately used instead of dog with predefined substitution rates. Would your regularizer still be helpful there? At which point would it break?\nA: The experiment on multiple versions of each word is indeed very interesting, thanks for the suggestion. Our new results suggest that the method works also in the unconditioned case. Please refer to the comment to all reviewers.\n", "We thank the reviewer for the feedback and comments. We now have improved results across all tasks (image captioning, sequential MNIST and language modelling); we included those in a brief summary in our comment that was addressed to all reviewers. \n\nQ: “I have one question about baselines: is the proposed approach better than training two forward generators and forcing an agreement between them, …”\n\nA: We ran the following experiment to test this hypothesis: we trained 2 separate forward generators and forced an agreement between them. We tested this model on the WSJ dataset for speech recognition, and the model has a validation CER of 9.1% (vs 9.0% for Baseline) and 8.4% for TwinNet. We conclude that TwinNet works better compared to running 2 forward models and forcing an agreement between them.\n\nQ: “Also, would using the backward RNN, e.g. for rescoring, bring another advantage? ...”\n\nA: Using the backward RNN for rescoring is an interesting question to think about. However, we are not totally clear on what ensembles of forward models would mean in this case, and we have not received a clarification on this based on our earlier comments.\n\nWe took a best guess of what the reviewer meant in this case. We assume that the reviewer means that we run multiple forward generators and use the backward generator to rescore the results of the forward generators, hence picking the best of them. In general, this is an interesting idea, although it is different enough from our current idea that it would deserve credit to be a separate paper on its own.\n\nTwo things to note about the idea of running multiple forward generators and using the backward model to rescore are: \nInference is at least twice as expensive compared to our current setup, which does not use the backward model during test time.\n\nAs there are multiple choices for the rescoring function, it is not trivial to decide which rescoring function to use. 
In some cases, the rescoring function could also be non-differentiable (for example WER or CER for speech recognition, or Bleu score for image captioning or machine translation).\n\nQ: “Cons: the method needs to be compared with typical ensembles of models going only forward in time, it may turn out that using the backward RNN is not necessary”\n\nA: We have indeed run the experiment with multiple (2 in our case) forward models and forced an agreement between them, and we did not see improvements over the baseline. This supports our hypothesis that the backward RNN is indeed useful and necessary.", "More thorough experimentation allowed us to significantly improve Twinnet results for image captioning, sequential MNIST, and language modelling tasks. We summarize our new results here and update the paper.\n\n- Image Captioning: We have significantly improved results for ShowTell (ST) and ShowAttendTell (SAT) models with TwinNet (Table 2)\n 1. ResNet 101 features + ST + Twinnet \n Improvement by more than 3 Cider points\n B1: 71.8 (vs 69.4), B2: 54.5 (vs 51.6), B3: 39.4 (vs 36.9), B4: 28.0 (vs 26.3), Meteor: 24.0 (vs 23.4), Cider: 87.7 (vs 84.3)\n 2. ResNet 101 features + SAT + Twinnet\n Improvement by more than 5 Cider points\n B1: 72.8 (vs 71.0), B2: 55.7 (vs 53.0), B3: 41.0 (vs 39.0), B4: 29.7 (vs 28.1), Meteor: 25.2 (vs 25.0), Cider: 96.2 (vs 89.2)\n 3. ResNet 152 features + SAT + Twinnet \n Improvement by 0.7 Cider points\n B1: 73.8 (vs 73.2), B2: 56.9 (vs 56.3), B3: 42.0 (vs 41.4), B4: 30.6 (vs 30.1), Meteor: 25.2 (vs 25.3), Cider: 97.3 (vs 96.6)\n- Sequential MNIST (Table 3 (left))\n 1. LSTM with Dropout + Twinnet has NLL of 79.12 (vs 79.59 for Baseline)\n 2. LSTM + TwinNet has NLL of 79.35 (vs 79.87 for Baseline)\n- Wikitext 2 (Table 3 (right))\n - AWD + LSTM + TwinNet has valid perplexity: 68.0 (vs 68.7) and Test perplexity: 64.9 (vs 65.8)\n- PennTree Bank (Table 3 (right))\n - AWD + LSTM + TwinNet has valid perplexity: 61.0 (vs 61.2) and Test perplexity: 58.3 (vs 58.8)\n\nThe improvements in results for the sequential MNIST and WikiText-2 experiments show that TwinNet may also be effective for the case of unconditional generation. In MNIST, the use of a much larger regularization hyperparameter (1.5) was necessary. On PTB the improvements are minor; they are more consistent on WikiText-2, which was not included in our earlier experiments. These experiments suggest that our method is applicable to a wider set of tasks compared to the claim in the earlier version of our paper that TwinNet was better suited for conditional generation tasks.", "We thank you for your attempt at reproducing our paper; we appreciate the effort. In general, the reported results used non-stable branches of our code. The full (cleaned) code will be released before the final version of the paper. 
In the meantime, we have put some effort into making available code that reproduces the numbers we report for our model.\n1) The sequential MNIST baseline (85 nats) reported in the reproducibility report is significantly worse than our and \n previously published baselines (several papers report ~80 nats) \n - van den Oord et al., 2016 report 80.54 nats for the baseline\n - Lamb et al., 2016 report 79.52 nats for Professor Forcing\n\t\n\tOur new results on MNIST are:\n\t Baseline: 79.87\n\t Twin: 79.35\n\t Baseline + dropout: 79.59\n\t Twin + dropout: 79.12\n\tReproducible using the master branch:\n\t python train_seqmnist_twin.py --twin 0.0 --nlayers 3 --dropout 0.0 --seed 1234\n\t python train_seqmnist_twin.py --twin 1.5 --nlayers 3 --dropout 0.0 --seed 1234\n\t python train_seqmnist_twin.py --twin 0.0 --nlayers 3 --dropout 0.3 --seed 1234\n\t python train_seqmnist_twin.py --twin 1.5 --nlayers 3 --dropout 0.3 --seed 1234\n\n2) The baseline for captioning results with resnet101 features seems to be unreasonably high, please refer to the \n following papers with published baselines for the same setup. \n - Rennie et al., 2016\n - https://openreview.net/forum?id=SJyVzQ-C-\nOur code has not been officially released; we will clean it up and release it anonymously as soon as possible.\n", "\n\n** PAPER SUMMARY **\n\nThe paper describes an approach to facilitate modeling of long-term dependencies in RNNs by implicitly forcing the forward states to hold information about the longer-term future, i.e., effectively regularizing RNNs. This is done by using an additional backward RNN and jointly training both networks. However, only the forward network is used during testing. The results are provided on various tasks like speech recognition, image captioning, language modeling and pixel-by-pixel generation on MNIST.\n\n** TARGET QUESTIONS **\n\n- Verifying results for two tasks - image captioning on the MSCOCO dataset and pixel-by-pixel generation for MNIST.\n- Implementation of baseline models to verify the scores mentioned in the paper.\n- Quantitative estimate of the training time taken by both networks.\n- Comparison of the nature of convergence of both networks.\n\n** EXPERIMENTAL METHODOLOGY **\n\nFor implementation purposes, we use the code available on github:\n- Image captioning : https://github.com/nke001/neuraltalk2.pytorch\n- MNIST : https://github.com/nke001/Twinnet\n\n** RESULTS **\n\nAll the models are trained and tested on an NVIDIA GeForce GTX 1080 Ti.\n\nImage Captioning\nThe results are reported on Bleu-1 to Bleu-4, Meteor, ROUGE-L and CIDEr scores using the code available - https://github.com/tylin/coco-caption. 
Also, time per iteration (in seconds) for a batch size of 64 is given.\n\n- Show & Tell :\n Bleu-1 = 0.725, Bleu-2 = 0.555, Bleu-3 = 0.415, Bleu-4 = 0.310, Meteor = 0.249, ROUGE-L = 0.529, CIDEr = 0.953, time/iter = 0.081\n- Show & Tell Twin :\n Bleu-1 = 0.705, Bleu-2 = 0.531, Bleu-3 = 0.390, Bleu-4 = 0.287, Meteor = 0.239, ROUGE-L = 0.517, CIDEr = 0.884, time/iter = 0.261\n- Soft Attention :\n Bleu-1 = 0.730, Bleu-2 = 0.564, Bleu-3 = 0.423, Bleu-4 = 0.315, Meteor = 0.250, ROUGE-L = 0.533, CIDEr = 0.976, time/iter = 0.782\n- Soft Attention Twin :\n Bleu-1 = 0.681, Bleu-2 = 0.317, Bleu-3 = 0.113, Bleu-4 = 0.032, Meteor = 0.187, ROUGE-L = 0.458, CIDEr = 0.582, time/iter = 1.540 \n\nSequential MNIST\nThe results are reported in terms of negative log likelihood (NLL) and training time per iteration (in seconds) for both the baseline and twin networks.\n\n- Unconditional Generation :\n Baseline : NLL = 84.860, time/iter = 0.423\n Twin net : NLL = 85.520, time/iter = 0.862\n- Conditional Generation :\n Baseline : NLL = 83.710, time/iter = 0.423\n Twin net : NLL = 80.870, time/iter = 0.871 \n\n** ANALYSIS AND DISCUSSION **\n\nThe idea is intuitive and well motivated and the paper is explained in great detail. However, based on our results, we believe that there is a need for extensive experimentation and testing.\n\n- For image captioning, we observe baseline values around 2-3 % higher, whereas the twin net values show little change while using the same values of parameters as those mentioned in the paper. However, we believe that extensive hyperparameter search may lead to better results.\n\n- The observed values for the soft attention twin net are considerably lower than those reported in the paper. This indicates that the convergence is highly affected, probably due to increased parameters and non-convexity in the optimization surface. This is also evident in plotting train_loss vs iterations, where we observe increased fluctuations.\n\n- For unconditional generation of MNIST, we observe higher values of NLL, which is probably because the parameters are not the same as mentioned in the paper. However, the values for baseline and twin net are almost the same, in accordance with the paper.\n\n- We also report values for the conditional generation of MNIST. The improved value for the twin net reaffirms the notion that the proposed approach works better for the conditional case. Also, the values for the conditional case show improvement over the unconditioned case as expected.\n\nWe would also like to point out a few limitations:\n\n- A major downside of the approach is the cost in terms of resources. The twin model requires large memory and takes longer to train (~ 2-4 times) while providing little improvement over the baseline. \n\n- During evaluation we found that the attention twin model gives results like “a woman at table a with cake a”, where the objective seems to force the output to also read like a sentence backwards. This might be the reason for the low metric values observed in the soft attention twin net model.\n\n- The effect of twin net as a regularizer can be examined against other regularization strategies for comparison purposes. \n\n** CONCLUSION **\n\nThe paper presents a novel approach to regularize RNNs and gives results on different datasets indicating a wide range of application. However, based on our results, we believe that further experimentation and extensive hyperparameter search are needed. 
Overall, the paper is detailed, the method is simple to implement, and positive empirical results support the described approach.\n\nLink to full report : https://drive.google.com/file/d/1NjAtAHrrY8CdeoykCp2IwxxY8slJOGgH/view?usp=sharing", "We thank you for your review, feedback and suggestions. \n\nWe would like to clarify your question about ensembles. Unlike ensemble methods, our method does not average over multiple predictive models at evaluation time. The forward and backward networks are “coupled” during training, and the backward predictions are discarded during testing. Would you like to see a comparison between ensembles of baselines and ensembles of TwinNets?", "We thank all the reviewers for your feedback and suggestions; it was very helpful and allowed us to gain more insights. We plan to run the requested experiments and update the experimental results in the upcoming weeks." ]
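To make the mechanism debated in this thread concrete, here is a minimal PyTorch sketch of the twin cost: a forward and a backward RNN each model the sequence, and an L2 penalty ties each forward state to a cotemporal backward state through an affine map, with a stop-gradient on the backward states (the training detail the authors confirm above). The layer sizes, loss weighting, index-wise temporal alignment, and random data are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

T, B, V, H = 12, 4, 50, 64                # sequence length, batch, vocab, hidden size
emb = nn.Embedding(V, H)
fwd, bwd = nn.LSTM(H, H), nn.LSTM(H, H)
out_f, out_b = nn.Linear(H, V), nn.Linear(H, V)
g = nn.Linear(H, H)                       # affine map matching forward to backward states

x = torch.randint(V, (T, B))              # placeholder token sequence
e = emb(x)
hf, _ = fwd(e)                            # forward states h^f_1..h^f_T
hb_rev, _ = bwd(torch.flip(e, dims=[0]))  # backward RNN reads the sequence reversed
hb = torch.flip(hb_rev, dims=[0])         # re-align so hb[t] sits at position t

ce = nn.CrossEntropyLoss()
nll_f = ce(out_f(hf[:-1]).reshape(-1, V), x[1:].reshape(-1))   # predict next token
nll_b = ce(out_b(hb[1:]).reshape(-1, V), x[:-1].reshape(-1))   # predict previous token

# Twin cost: forward states must predict cotemporal backward states.
# detach() stops gradients from flowing into the backward network,
# matching the detail confirmed in the authors' response above.
twin = ((g(hf) - hb.detach()) ** 2).mean()

loss = nll_f + nll_b + 1.0 * twin         # 1.0 is a placeholder weighting
loss.backward()
```

At test time only `fwd` and `out_f` would be kept, which is why the penalty adds no inference cost.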
[ 6, 7, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BydLzGb0Z", "iclr_2018_BydLzGb0Z", "HyCbe867z", "iclr_2018_BydLzGb0Z", "B1_9I7DzG", "HyciX9dxM", "HJzYCPDlf", "B1Fe0Zqxz", "iclr_2018_BydLzGb0Z", "BJMXIjbGM", "iclr_2018_BydLzGb0Z", "B1Fe0Zqxz", "iclr_2018_BydLzGb0Z" ]
iclr_2018_S1J2ZyZ0Z
Interpretable Counting for Visual Question Answering
Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting.
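The abstract frames counting as a sequential decision process over detected objects. The toy sketch below illustrates that control flow only: score objects, make a discrete choice (an object or a stop action), update the remaining scores through pairwise interactions, and return the grounded count. The scoring and interaction functions are stand-ins for the paper's modules, and the greedy choice replaces the sampling that REINFORCE training would use, so treat this as an assumption-laden illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

N, D = 10, 32                        # number of detected objects, feature size
score = nn.Linear(D, 1)              # per-object selection logit (stand-in)
interact = nn.Bilinear(D, D, 1)      # pairwise interaction term (stand-in)

with torch.no_grad():
    feats = torch.randn(N, D)        # placeholder object features
    logits = score(feats).squeeze(-1)
    stop_logit = torch.zeros(1)      # fixed stop score for the toy example
    selected = []
    for _ in range(N):
        step = torch.cat([logits, stop_logit])
        choice = int(step.argmax())  # discrete choice: an object index, or stop
        if choice == N:              # index N is the stop action
            break
        selected.append(choice)
        logits[choice] = float('-inf')   # each object can be counted once
        # Update remaining scores with interactions to the chosen object,
        # e.g. suppressing overlapping duplicate detections.
        ref = feats[choice].expand(N, D)
        logits = logits + interact(feats, ref).squeeze(-1)

print(len(selected), selected)       # the count, grounded in the selected boxes
```

The count is the number of discrete selections, so each unit of the answer is grounded in a specific detected box — the interpretability property the abstract emphasizes.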
accepted-poster-papers
Important problem and all reviewers recommend acceptance. I agree.
train
[ "rkvdigi4f", "Byv9HGFEG", "SJdWxzoxz", "ryq-8Y_lG", "rJ1U7MKef", "B1bTymoGf", "S1iUyQjzM", "H1bYCGofM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I'm satisfied with the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of the paper.", "After reading the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of the paper because it presents a novel, interesting and interpretable method for counting.", "Summary:\nThe paper presents a novel method for answering “How many …?” questions in the VQA datasets. Unlike previously proposed approaches, the proposed method uses an iterative sequential decision process for counting the relevant entity. The proposed model makes discrete choices about what to count at each time step. Another qualitative difference compared to existing approaches is that the proposed method returns bounding boxes for the counted object. The training and evaluation of the proposed model and baselines is done on a subset of the existing VQA dataset that consists of “How many …?” questions. The experimental results show that the proposed model outperforms the baselines discussed in the paper.\n\nStrengths:\n1.\tThe idea of sequential counting is novel and interesting.\n2.\tThe analysis of model performance by grouping the questions as per frequency with which the counting object appeared in the training data is insightful. \n \nWeaknesses:\n1.\tThe proposed dataset consists of 17,714 QA pairs in the dev set, whereas only 5,000 QA pairs in the test set. Such a 3.5:1 split of dev and test seems unconventional. Also, the size of the test set seems pretty small given the diversity of the questions in the VQA dataset.\n2.\tThe paper lacks quantitative comparison with existing models for counting such as with Chattopadhyay et al. This would require the authors to report the accuracies of existing models by training and evaluating on the same subset as that used for the proposed model. Absence of such a comparison makes it difficult to judge how well the proposed model is performing compared to existing models.\n3.\tThe paper lacks analysis on how much of performance improvement is due to visual genome data augmentation and pre-training? When comparing with existing models (as suggested in above), this analysis should be done, so as to identify the improvements coming from the proposed model alone.\n4.\tThe paper does not report the variation in model performance when changing the weights of the various terms involved in the loss function (equations 15 and 16).\n5.\tRegarding Chattopadhyay et al. the paper says that “However, their analysis was limited to the specific subset of examples where their approach was applicable.” It would be good it authors could elaborate on this a bit more.\n6.\tThe relation prediction part of the vision module in the proposed model seems quite similar to the Relation Networks, but the paper does not mention Relation Networks. It would be good to cite the Relation Networks paper and state clearly if the motivation is drawn from Relation Networks.\n7.\tIt is not clear what are the 6 common relationships that are being considered in equation 1. Could authors please specify these?\n8.\tIn equation 1, if only 6 relationships are being considered, then why does f^R map to R^7 instead of R^6?\n9.\tIn equations 4 and 5, it is not clarified what each symbol represents, making it difficult to understand.\n10.\tWhat is R in equation 15? Is it reward?\n\nOverall:\nThe paper proposes a novel and interesting idea for solving counting questions in the Visual Question Answering tasks. 
However, the writing of the paper needs to be improved to make it easier to follow. Regarding the experimental set-up, the size of the test dataset seems too small. And lastly, the paper needs to add comparisons with existing models on the same datasets as used for the proposed model. So the paper does not seem ready for publication yet.", "\n------------------\nSummary:\n------------------\nThis work introduces a discrete and interpretable model for answering visually grounded counting questions. The proposed model executes a sequential decision process in which it 1) selects an image region to \"add to the count\" and then 2) updates the likelihood of selecting other regions based on their relationships (defined broadly) to the selected region. After substantial module pre-training, the model is trained end-to-end with the REINFORCE policy gradient method (with the recently proposed self-critical sequence training baseline). Compared to existing approaches for counting (or VQA in general), this approach not only produces lower error but also provides a more human-intuitive discrete, instance-pointing representation of counting. \n\n-----------------------\nPreliminary Evaluation:\n-----------------------\nThe paper presents an interesting approach that seems to outperform existing methods. More importantly in my view, the model treats counting as a discrete, human-intuitive process. The presentation and experiments are okay overall, but I have a few questions and requests below that I feel would strengthen the submission.\n\n------------------\nStrengths:\n------------------\n- I generally agree with the authors that approaching counting as a region-set selection problem provides an interpretable and human-intuitive methodology that seems more appropriate than attentional or monolithic approaches. \n\n- To the best of my knowledge, the writing does a good job of placing the work in the context of existing literature.\n\n- The dataset construction is given appropriate attention to restrict its instances to counting questions and will be made available to the public.\n\n- The model outperforms existing approaches given the same visual and linguistic inputs / encodings. While I find the improvements in RMSE a bit underwhelming, I'm still generally positive about the results given the improved accuracy and human-intuitiveness of the grounded outputs.\n\n- I appreciated the analysis of the effect of \"commonness\" and think it provides interesting insight into the generalization of the proposed model.\n\n- Qualitative examples are interesting.\n\n------------------\nWeaknesses:\n------------------\n- There is a lot going on in this paper as far as model construction and training procedures go. In its current state, many of the details are pushed to the supplement, such that the main paper would be insufficient for replication. The authors also do not promise a code release. \n\n- Maybe it is just my unfamiliarity with it, but the caption grounding auxiliary task feels insufficiently introduced in the main paper. I also find it a bit discouraging that the details of joint training are relegated to the supplementary material, especially given that UpDown does not use it! I would like to see an ablation of the proposed model without joint training.\n\n- Both the IRLC and SoftCount models are trained with objectives that are aware of the ordinal nature of the output space (such that predicting 2 when the answer is 20 is worse than predicting 19). 
Unfortunately, the UpDown model is trained with cross-entropy and lacks access to this notion. I believe that this difference results in the large gap in RMSE between IRLC/SoftCount and UpDown. Ideally, an updated version of UpDown trained under an order-aware loss would be presented during the rebuttal period. Barring that due to time constraints, I would otherwise like to see some analysis exploring this difference, maybe checking to see if UpDown is putting mass in smooth blobs around the predicted answer (though there may be better ways to see if UpDown has captured similar notions of output order as the other models).\n\n- I would like to see a couple of simple baselines evaluated on HowMany-QA. Specifically, I think the paper would be stronger if the results were put in context with a question-only model and a model which just outputs the mean training count. Inter-human agreement would also be interesting to discuss (especially for high counts).\n\n- The IRLC model has a significantly larger-capacity (4x) scoring function than the baseline methods. If this is restricted, do we see significant changes to the results?\n\n- This is a relatively mild complaint. This model is more human-intuitive than existing approaches, but when it does make an error by selecting incorrect objects or terminating early, it is no more transparent about the cause of these errors than any other approach. As such, claims about interpretability should be made cautiously. \n\n------------------\nCuriosities:\n------------------\n- In my experience, Visual Genome annotations are often noisy, with many different labels being applied to the same object in different images. For per-image counts, I don't imagine this will be too troubling, but I was curious if you ran into any challenges.\n\n- It looks like both IRLC and UpDown consistently either get the correct count (for small counts) or underestimate. This is not the Gaussian sort of regression error that we might expect from a counting problem. \n\n- Could you speak to the sensitivity of the proposed model with respect to different loss weightings? I saw the values used in Section B of the supplement and they seem somewhat specific. \n\n------------------\nMinor errors:\n------------------\n[5.1 end of paragraph 2] 'that accuracy and RSME and not' -> 'that accuracy and RSME are not'\n[Fig 9 caption] 'The initial scores are lack' -> 'The initial scores lack'", "This paper proposed a new approach for counting in VQA, called Interpretable Counting in Visual Question Answering. The authors create a new dataset (HowMany-QA) by processing the VQA 2.0 and Visual Genome datasets. In the paper, the authors use an object detection framework (R-FCN) to extract bounding-box information as well as visual features, and propose three different strategies for counting: 1: SoftCount; 2: UpDown; 3: IRLC. The authors show results on the HowMany-QA dataset for the proposed methods, and the proposed IRLC method achieves the best performance among all the baselines. \n\n[Strengths]\n\nThis paper first introduced a cleaned visual counting dataset by processing the existing VQA 2.0 and Visual Genome datasets, which partially filters out non-counting questions. The proposed split is a good testbed for counting in VQA. \n\nThe authors proposed 3 different methods for counting, all of which use object-detection features trained on the Visual Genome dataset. 
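For context on the loss discussion running through these reviews: SoftCount regresses a soft count with a Huber loss, while UpDown classifies over count labels with cross-entropy. A minimal PyTorch-style sketch of that difference follows; the function names are hypothetical and the paper's exact formulations may differ.

```python
import torch
import torch.nn.functional as F

def soft_count_loss(box_scores, true_count):
    # Order-aware (SoftCount-style, assumed form): the predicted count is
    # the sum of per-box sigmoid scores, so the Huber penalty grows with
    # the numeric distance between the prediction and the answer.
    pred_count = torch.sigmoid(box_scores).sum()
    return F.smooth_l1_loss(pred_count, true_count)  # Huber loss

def cross_entropy_count_loss(count_logits, true_count):
    # Order-agnostic (UpDown-style, assumed form): counts are treated as
    # unordered classes, so predicting 2 when the answer is 20 costs the
    # same as predicting 19 -- exactly the issue raised above.
    return F.cross_entropy(count_logits.unsqueeze(0), true_count.view(1).long())
```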
The object detector is trained with multiple objectives, including object detection, relation detection, attribute classification and caption grounding, to produce rich object representations. The authors first proposed 2 baselines: SoftCount, which uses a Huber loss, and UpDown, which uses a cross-entropy loss. They further proposed an interpretable RL counter, which enumerates the objects through a sequential decision process. The proposed IRLC is more intuitive and outperforms the previous VQA method (UpDown) on both accuracy and RMSE. \n\n[Weaknesses]\n\nThis paper proposed an interesting and intuitive counting model for VQA. However, several weaknesses remain:\n\n1: The object detector is pre-trained with multiple objectives. However, there is no ablation study to show the differences. Since the model only uses the object and relationship features as input, the authors could show results on counting with different object detectors. For example, an object detector trained using object + relation vs. object + relation + attribute vs. object + relation + attribute + caption. \n\n2: Figure 9 shows an impressive result of the proposed method. Given the detection result, there are a lot of repetitive candidate detection bounding boxes. Without any strong supervision, IRLC could select the correct bounding boxes associated with the different objects. This is interesting; however, the authors didn't show any quantitative results on it. One experiment that could verify the performance of IRLC is to compute the IoU against the ground-truth COCO bounding-box annotations on a small validation set. The validation set could be obtained by comparing the number of bounding boxes and the VQA answer with respect to similar COCO categories and VQA entities. \n\n3: The proposed IRLC does not significantly outperform the baseline method (SoftCount) with respect to RMSE (0.1). However, it would be interesting to see how the counting performance can change the result of object detection, as in Sec. 5.3 of Chattopadhyay's CVPR 2017 paper, on the same subset as in point 2. \n\n[Summary]\n\nThis paper proposed an interesting and interpretable model for counting in VQA. It formulates counting as a sequential decision process that enumerates the subset of target objects. The authors introduce several new techniques in the IRLC counter. However, there is a lack of ablation studies on the proposed model. Taking all this into account, I suggest accepting this paper if the authors can provide more ablation studies on the proposed methods. \n", "We would like to thank the reviewer for their thoughtful and constructive feedback. Before addressing the reviewer’s concerns, we should mention that since the original submission, we have observed superior performance (for all the models we consider) when using the visual features released with the original UpDown paper (those used by Anderson et al.). We believe this choice simplifies our work by focusing our contribution on the counting module and, by using publicly available visual features, facilitates future comparison.\n\nWe have revised our paper to provide more detail about the influence of joint training and have incorporated more detail into the main text. We also now compare each model with and without joint training and make it the default for each model to include joint training (since, with the new visual features, each model benefits from the extra supervision). 
These changes appear in Model section 4.2, Table 2 of the Results section, and Appendix Section B.1 in the revised submission.\n\nUnfortunately, we did not have sufficient time to update the training procedure of the UpDown model. Instead, we follow the reviewer’s suggestion and include an analysis of the smoothness of the model’s predictions. These changes appear in Appendix Section C of the revised submission.\n\nWe have included the two simple baselines recommended by the reviewer. These changes appear in Results section 5.1 and Table 2 in the revised submission.\n\nAddressing the reviewer’s curiosities…\nThe only issue that noisy annotations in Visual Genome might cause would be with pre-training the vision module. Since we have opted to use publicly available pre-trained features, we ultimately don’t deal with that noise ourselves. However, the new features were pre-trained with Visual Genome and produce better results than our original attempt. We could speculate that our original features were worse because we did not account for the noisy labels, but that’s hard to say for certain.\nWe have revised the paper to include the results of a small sweep over the loss weightings (Appendix Section C in the revised submission).\n\nAnderson et al. Bottom-Up and Top-Down Attention for Image Captioning and VQA. In CVPR, 2017. \n", "We would like to thank the reviewer for their thoughtful and constructive feedback. Before addressing the reviewer’s concerns, we should mention that since the original submission, we have observed superior performance (for all the models we consider) when using the visual features released with the original UpDown paper (those used by Anderson et al.). We believe this choice simplifies our work by focusing our contribution on the counting module and, by using publicly available visual features, facilitates future comparison.\n\nOne consequence of this decision is that we forgo the option to do ablation studies on the vision module, as suggested by the reviewer in comment 1. We very much agree that it will be interesting to see how these details of pre-training improve the model’s ability to map questions onto object representations. Therefore, in our revisions, we include a new ablation experiment on jointly training counting and caption grounding (changes appear in Table 2 of the Results section in the revised submission).\n\nIn comment 2, the reviewer mentions the possibility of comparing the objects counted during question answering to the ground-truth objects from the COCO labels. We thank the reviewer for this insightful suggestion and have included a new analysis to perform this comparison (changes appear in Results section 5.2 and Appendix Section D in the revised submission). The results demonstrate that, despite the similarity between these two models with respect to RMSE, the objects counted by IRLC are more relevant to the question than the objects counted by SoftCount.\n\nThe reviewer has also pointed out the possibility that our model could improve object detection. Because our model counts from questions, we believe our approach might be best suited to improve generalization of object detection to more diverse classes -- for example, to detect objects based not only on their class but also on attributes and/or context in the scene. Time constraints prevent us from exploring this notion in the revised submission, but it is our goal to do so before a final submission.\n\nAnderson et al. 
Bottom-Up and Top-Down Attention for Image Captioning and VQA. In CVPR, 2017. \n\nChattopadhyay et al. Counting Everyday Objects in Everyday Scenes. In CVPR, 2017. \n", "We would like to thank the reviewer for their thoughtful and constructive feedback. Before addressing the reviewer’s concerns, we should mention that since the original submission, we have observed superior performance (for all the models we consider) when using the visual features released with the original UpDown paper (those used by Anderson et al.). We believe this choice simplifies our work by focusing our contribution on the counting module and, by using publicly available visual features, facilitates future comparison.\n\n\nRegarding comment 1: To examine the robustness of the test metrics, we re-computed the accuracy for the development and test splits after diverting 6500 randomly chosen QA pairs from dev to test (giving the adjusted dev/test splits 11k QA pairs each). We did this for the 8 IRLC models from the hyperparameter sweep whose penalty weights surrounded the optimum. On the original dev/test splits, those models have average accuracies of 56.21 & 57.06. On the adjusted split, the average accuracies are 56.18 & 56.64. This analysis suggests that the smaller test size does introduce some noise into the accuracy measurement, but the effect of that noise is small compared to the scale of the performance differences between SoftCount, UpDown and IRLC. \n\nRegarding comments 2, 3 and 5: The reviewer points out that we did not sufficiently place our work in the context of Chattopadhyay et al. We agree and have attempted to correct that mistake in our revised submission (changes appear in the Related Works and Models [page 4] sections).\nTo further clarify our reasoning here, there are three main reasons we do not compare to their work. \n1) Our work and theirs both examine counting, but we use counting as a lens for exploring interpretability in visual question answering. This led to considerably different architectures in the two works.\n2) Our work examines generalization for unseen or few-shot classes. Since the question is a sentence, not a single word, there are a number of cases where the effective class is a combination of noun and adjective or position (e.g., black birds, people sitting on the bench). Chattopadhyay et al. only handle counting a fixed set of object classes and lack the flexibility required for question answering.\n3) The reviewer has suggested that we train and evaluate a model based on their proposals (i.e., “seq-sub”); however, to do so would make it very difficult to control for the quality/structure of visual features and the mechanism of visual-linguistic fusion. Additionally, the seq-sub architecture is not amenable to question answering or to supervision from VQA data alone.\nAll in all, we believe that the issues described above make quantitatively comparing our work to that of Chattopadhyay et al. overly complicated, and we hope the reviewer will agree with our assessment.\n\nAs part of incorporating the new visual features, we have revised the model section describing the vision module. This revision has resulted in the removal of the confusing text that the reviewer mentioned in comments 6-8.\n\nFollowing this change, we cannot readily assess how details of pre-training affect ultimate performance, as recommended in comment 3. 
However, we have included an experiment to demonstrate the effect of data augmentation with Visual Genome (Appendix Section C in revised submission). We observe that removing the Visual Genome data reduces accuracy by 2.7% on average and increases RMSE by 0.12 on average, and that IRLC is most robust to the decrease in training data.\n\nIn addition, we have incorporated the experiment suggested in comment 4 (Appendix Section C in revised submission). The results demonstrate the range of weight settings in which the penalties improve performance.\n\nWe have also clarified the model descriptions referred to in comments 9 and 10.\n\nAnderson et al. Bottom-Up and Top-Down Attention for Image Captioning and VQA. In CVPR, 2017. \n\nChattopadhyay et al. Counting Everyday Objects in Everyday Scenes. In CVPR, 2017. \n" ]
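To complement the IRLC discussion in the thread above, here is a rough sketch of a sequential counting loop in the same spirit. It is a hypothetical simplification, not the authors' code: at training time the selections would be sampled and trained with REINFORCE, as the reviews note, whereas this greedy loop only illustrates the inference-time decision process.

```python
import torch

def sequential_count(logits, interactions, max_steps=15):
    # logits: (N,) per-box selection scores for the question at hand.
    # interactions: (N, N) learned terms describing how selecting box i
    # should change the scores of the remaining boxes (e.g., suppressing
    # duplicate detections of the same object).
    counted = []
    s = logits.clone()
    for _ in range(max_steps):
        best = int(torch.argmax(s))
        if s[best] < 0:                  # stop once no box beats the
            break                        # termination threshold
        counted.append(best)
        s = s + interactions[best]       # update the other boxes' scores
        s[best] = float("-inf")          # never count the same box twice
    return len(counted), counted         # count plus the grounded boxes
```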
[ -1, -1, 6, 7, 7, -1, -1, -1 ]
[ -1, -1, 3, 4, 4, -1, -1, -1 ]
[ "S1iUyQjzM", "H1bYCGofM", "iclr_2018_S1J2ZyZ0Z", "iclr_2018_S1J2ZyZ0Z", "iclr_2018_S1J2ZyZ0Z", "ryq-8Y_lG", "rJ1U7MKef", "SJdWxzoxz" ]
iclr_2018_H1UOm4gA-
Interactive Grounded Language Acquisition and Generalization in a 2D World
We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering. It learns simultaneously the visual representations of the world, the language, and the action control. By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences. The new words are transferred from the answers of language prediction. Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences. In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix.
accepted-poster-papers
This manuscript was reviewed by 3 expert reviewers and their evaluation is generally positive. The authors have responded to the questions asked and the reviewers are satisfied with the responses. Although the 2D environments are underwhelming (compared to 3D environments such as SUNCG, Doom, Thor, etc), one thing that distinguishes this paper from other concurrent submissions on the similar topics is the demonstration that "words learned only from a VQA-style supervision condition can be successfully interpreted in an instruction-following setting."
train
[ "B1JBH18gf", "B1KF2Z5xf", "HJLqaM7bz", "r1-gIAMNM", "SJGGiTcQM", "HJi7u9TWf", "Skq3gyaWf", "SJnXZJpWz", "HkQl1ypZf", "ryVnJ16Wz", "rkCvlKvkf", "r1a6A4PJz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "This paper introduces a new task that combines elements of instruction following\nand visual question answering: agents must accomplish particular tasks in an\ninteractive environment while providing one-word answers to questions about\nfeatures of the environment. To solve this task, the paper also presents a new\nmodel architecture that effectively computes a low-rank attention over both\npositions and feature indices in the input image. It uses this attention as a\ncommon bottleneck for downstream predictors that select actions and answers to\nquestions. The paper's main claim is that this model architecture enables strong\ngeneralization: it allows the model to succeed at the instruction following task\neven when given words it has only seen in QA contexts, and vice-versa.\nExperiments show that on the navigation task, the proposed approach outperforms\na variety of baselines under both a normal data condition and one requiring\nstrong generalization.\n\nOn the whole, I think this paper does paper does a good job of motivating the\nproposed modeling decisions. The approach is likely to be useful for other\nresearchers working on related problems. I have a few questions about the\nevaluation, but most of my comments are about presentation.\n\nEVALUATION\n\nIs it really the case that no results are presented for the QA task, or am I\nmisreading one of the charts here? Given that this paper spends a lot of time\nmotivating the QA task as part of the training scenario, I was surprised not to\nsee it evaluated. \n\nAdditionally, when I first read the paper I thought that the ZS1 experiments\nfeatured no QA training at all. However, your response to one of the sibling\ncomments suggests that it's still a \"mixed\" training setting where the sampled\nQA and NAV instances happen to cover the full space. This should be made more\nclear in the paper. It would be nice to know (1) how the various models perform\nat QA in both ZS1 and ZS2 settings, and (2) what the actual performance is NAV\nalone (even if the results are terrible).\n\nMODEL PRESENTATION\n\nI found section 2 difficult to read: in particular, the overloading of \\Phi\nwith different subscripts for different output types, the general fact that\ne.g. x and \\Phi_x are used interchangeably, and the large number of different\nvariables. My best suggestions are to drop the \\Phis altogether and consider\nusing text subscripts rather than coming up with a new name for every variable,\nbut there are probably other things that will also help.\n\nOTHER NOTES\n\n- This paper needs serious proofreading---just in the first few pages the errors\n I noticed were \"in 2D environment\" (in the title!), \"such capability\", \"this\n characteristics\", \"such language generalization problem\", \"the agent need to\",\n \"some early pioneering system\", \"commands is\". I gave up on keeping track at\n this point but there are many more.\n\n- \\phi in Fig 2 should be explained by the caption.\n\n- Here's another good paper to cite for the end of 2.2.1:\n https://arxiv.org/pdf/1707.00683.pdf.\n\n- The mechanism in 2.2.4 feels a little like\n http://aclweb.org/anthology/D17-1015\n\n- I don't think the content on pages 12, 13, and 14 adds much to the\n paper---consider moving these to an appendix.", "[Overview]\nIn this paper, the authors proposed a unified model for combining vision, language, and action. It is aimed at controlling an agent in a virtual environment to move to a specified location in a 2D map, and answer user's questions as well. 
To address this problem, the authors proposed an explicit grounding approach to connect the words in a sentence and spatial regions in the images. In this way, the model can exploit the outputs of the concept detection module to perform the actions and question answering jointly. In the experiments, the authors compared with several previous attention methods to show the effectiveness of the proposed concept detection module and demonstrated its superiority in several configurations, including in-domain and out-of-domain cases.\n\n[Strengths]\n\n1. I think this paper proposes interesting tasks combining vision, language, and action. As we know, in a realistic environment, all three components are necessary to complete complex tasks that require interaction with the physical environment. The authors should release the dataset to promote research in this area.\n\n2. The authors proposed a simple method to ground the language on visual input. Specifically, the authors ground each word in a sentence to all locations of the visual map, and then perform a simple concept detection upon it. The model then uses this intermediate representation to guide the navigation of the agent in the 2D map and visual question answering as well.\n\n3. From the experiments, it is shown that the proposed model outperforms several baseline methods in both normal tasks and out-of-domain ones. According to the visualizations, the interpreter can generate meaningful attention maps given a textual query.\n\n[Weakness]\n\n1. The definition of explicit grounding is a bit misleading. Though the grounding or attention is performed for each word at each location of the visual map, it is still a kind of soft attention, except that it is performed for each word in a sentence. As far as I know, this has been done in several previous works, such as: (a). Hierarchical question-image co-attention for visual question answering (https://scholar.google.com/scholar?oi=bibs&cluster=15146345852176060026&btnI=1&hl=en). Lu et al. NIPS 2016. (b). Graph-Structured Representations for Visual Question Answering. Teney et al. arXiv 2016. More recently, we have seen more explicit ways of visual grounding, such as: (c). Bottom-up and top-down attention for image captioning and VQA (https://arxiv.org/abs/1707.07998). Anderson et al. arXiv 2017.\n\n2. Since the model is aimed at grounding the language on the vision based on interactions, it is worth showing how well the final model can ground the text words to each of the visual objects. Say, show the affinity matrix between the words and the objects to indicate the correlations.\n\n[Summary]\n\nI think this is a good paper which integrates vision, language, and actions in a virtual environment. I foresee more and more work being devoted to this area, considering its close connection to our daily life. To address this problem, the authors proposed a simple model to ground words on visual signals, which proves to outperform previous methods, such as CA, SAN, etc. According to the visualization, the model can attend to the right region of the image to finish navigation and QA tasks. As I said, the authors should rephrase the definition of explicit grounding, to make it clearly distinguished from the previous work I listed above. 
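To make the per-word grounding described in this review concrete, here is a rough sketch. It is illustrative only; in particular, composing the sentence-level map as a product of the per-word maps is an assumption, and the paper's actual composition operator may differ.

```python
import torch

def ground_sentence(word_embs, visual_map):
    # word_embs: (T, C) embeddings of the T words in the sentence.
    # visual_map: (C, H, W) convolutional features of the 2D world.
    C, H, W = visual_map.shape
    flat = visual_map.reshape(C, H * W)
    # Each word attends over every spatial location of the visual map.
    per_word = torch.softmax(word_embs @ flat, dim=-1)         # (T, H*W)
    # Compose a sentence-level attention from the per-word maps.
    sentence = per_word.prod(dim=0)
    sentence = sentence / sentence.sum()
    return sentence.reshape(H, W), per_word.reshape(-1, H, W)
```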
Also, the authors should definitely show the grounding attention results of words and visual signals jointly, i.e., show them together in one figure instead of separately in Figure 9 and Figure 10.\n", "The paper introduces XWORLD, a 2D virtual environment with which an agent can constantly interact via navigation commands and question answering tasks. Agents working in this setting therefore learn the language of the \"teacher\" and efficiently ground words to their respective concepts in the environment. The work also proposes a neat model motivated by the environment and outperforms various baselines. \n\nFurther, the paper evaluates the language acquisition aspect via two zero-shot learning tasks: ZS1) a setting consisting of previously seen concepts in unseen configurations, and ZS2) a setting containing new words that did not appear in the training phase. \n\nThe robustness to navigation commands in Section 4.5 is very forced and incorrect -- randomly inserting unseen words at crucial points might lead to commands with totally different meanings from the original, right? As the paper says, a difference of one word can lead to completely different goals, and so the noise-robustness experiments seem to test for the biases learned by the agent in some sense (which is not desirable). Is there any justification for why this method of injecting noise was chosen? Is it possible to use hard negatives as noisy / trick commands and evaluate against them for robustness? \n\nOverall, I think the paper proposes an interesting environment and task that is of interest to the community in general. The models and their evaluation are relevant, and the intuitions can be made use of for evaluating other similar tasks (in 3D, say). ", "Thank you for the reply. I have looked at the revised draft and stick to my current assessment. ", "For AC, reviewers, and others: to check the revisions, please compare the original version (modified: 27 Oct 2017, 13:14) and the latest version (modified: 02 Jan 2018, 10:46). Most changes were made according to the reviewers' comments. Some minor changes were made to improve the presentation. ", "Thanks for the extra experiments! And you're right that I misunderstood what was held out in ZS1. This revision looks good and my score is the same. ", "Thanks for your comments! They really help a lot.\n\nFirst, thanks for suggesting adding the results for QA. Originally we intended to use QA as an auxiliary task to help train NAV. We didn't think of adding results for it (although we indeed had some records showing how well different methods perform in QA during training). In the revised paper, we have included the QA classification accuracies in the normal, ZS1 and ZS2 settings (Figure 6 c, Figure 7 c and f). We believe that this addition actually demonstrates the generalization ability of our model even better (not only in NAV but also in QA). Because we now also evaluate QA in the test, we modified all the related paragraphs across the paper to emphasize this addition.\n\nWe believe that the original text already clarifies (section 4.4, when defining ZS1) that ZS1 is about excluding word pairs from both NAV commands and QA questions, not about training NAV alone. Note that training both NAV and QA together does not necessarily imply that the sampled NAV and QA instances cover the full space. For ZS1, a subspace of sentences (containing certain word pairs) is not covered. For ZS2, a different subspace of sentences (containing certain new words) is not covered. 
In other words, our zero-shot setting is not achieved by turning off either NAV or QA, but by excluding certain sentence patterns from the training (for both NAV and QA).\n\nAs requested, we also added the performance of training NAV alone without QA in the normal language setting (Figure 6). This ablation is called NAVA in the revised experiment section. An analysis of this ablation was also added (section 4.3).\n\nThanks for suggesting citing [de Vries et al 2017] and [Kitaev and Klein 2017]. We find that they are indeed closely related to our work. We have cited and discussed them at the end of section 2.2.1 (-> 2.2.2) and section 2.2.4 (-> 2.2.5), respectively.\n\nWe have simplified the notation in section 2 to keep the presentation concise, as suggested. We moved the content of pages 12, 13, and 14 to Appendix A. We went through a careful round of proofreading of the revised paper. While we are still trying to get others into the proofreading process, we have uploaded the second version of the paper to facilitate possible discussions on OpenReview.\n", "Hi, we have updated our paper and added the details about our RL approach in Appendix E. Also, as R2 requested, we added the experiment results for training NAV alone. Please take a look at the revision if interested. Thank you!", "Thanks for your comments.\n\nThe robustness experiment is aimed at testing the agent in a\nscenario out of our control, such as executing navigation commands\nelicited by humans after the training is done. In such a case, we simply\nassume that the evaluator does not have any knowledge of the training\nprocess. A natural-language sentence elicited by the human evaluator\nmight convey a meaning that is similar or identical to a sentence generated\nby our grammar; however, it might not be that well-formed (e.g.,\ncontaining extra irrelevant words). One simple way of simulating this\nscenario (incompletely) is to insert noisy word embeddings into the original sentence.\n\nThis preliminary experiment serves to provide some numbers to give the\nreaders a rough idea of how well the agent will perform in an\nuncontrollable setting. However, because of its minor significance and\na possible misunderstanding, we have removed this section (4.5) from\nthe original paper.\n", "Thanks for your comments!\n\nWe agree with the reviewer that our original definition of explicit\ngrounding had some ambiguity. Thus we added several paragraphs to\nelaborate on this. The original section 2.2.1 then became so long that we divided it into two (2.2.1 and 2.2.2). Specifically, we rephrased section 2.2.1 (-> 2.2.2) by giving a detailed definition of what it means for a framework to have an explicit grounding strategy. We also discussed the similarities and differences of our grounding with the related work pointed out by the reviewer at the end of section 2.2.1 (-> 2.2.2).\n\nIn summary, our explicit grounding requires two extra properties on top\nof the soft attention mechanism:\n\n1) the grounding (image attention) of a sentence is computed based on\nthe grounding results of the individual words in that sentence (i.e., compositionality);\n\n2) in the framework, there are no other types of language-vision\nfusion besides the kind of grounding in 1).\n\nOne benefit of such an explicit grounding is that Eq 2 achieves a\nlanguage \"bottleneck\" for downstream predictors (as Reviewer 2 pointed\nout in the comment). This bottleneck is used for both NAV and QA. 
It\nimplies an \"independence of path\" property because, given the image, all\nthat matters for the NAV and QA tasks is the attention $x$ (Eq 2). It\nguarantees, via the model architecture, that the agent will behave\nin exactly the same way on the same image, even given different\nsentences, as long as their $x$ are the same. Also, because $x$ is\nexplicit, the roles played by the individual words of $s$ in\ngenerating $x$ are interpretable. This is in contrast to Eq 1, where\nthe roles of individual words are unclear. The interpretability\nprovides a possibility of establishing a link between language\ngrounding and prediction. We argue that these are crucial reasons that\naccount for our strong generalization in both ZS1 and ZS2 settings.\n\nWe have modified the original Figure 10 so that the image attention is\nvisualized jointly with the word attention. More examples are shown\nnow after the modification. Because of limited space, we moved this\npart to Appendix A and divided it into six figures (Figure 10 - Figure 15).\n", "1. The formula of the module A was not contained in the paper,\nprimarily due to page limits. It is just a feedforward\nsub-network that approximates the value function and generates the action\ndistribution, given the representation q (Fig. 2). However, as you\nhave asked, we now think it might be a good idea to add it back to the\npaper.\n\n2. Continuing on your first question, our RL method simply\ncombines AC and Experience Replay (ER). ER is mainly used to\nstabilize AC while remaining sample efficient. This might result in\nsome conflict between off-policy and on-policy, since the experiences\nsampled from the replay buffer were not generated by the current\npolicy. However, we find that this works well in practice (perhaps\nbecause of the small replay buffer). Similar work was also proposed\nrecently:\n\n\tSample Efficient Actor-Critic With Experience Replay, Wang et al, ICLR 2017.\n\nwhich is more sophisticated compared to ours.\n\nMore specifically, for every minibatch sampled from the replay buffer,\nwe have the following gradient (a minimal code sketch of this update appears at the end of this discussion):\n\n-\\sum_{k=0}^{K}\\big(\\nabla_{\\theta}\\log\\pi_{\\theta}(a_k|x_k)+\\nabla_{\\theta}v_{\\theta}(x_k)\\big)\\big(r+\\gamma v_{\\theta'}(x_k')-v_{\\theta}(x_k)\\big)\n\nwhere k is the sample index in the batch, (x_k, a_k, r, x_k') is the\nsampled transition, v is the value function to learn, \\theta denotes the\ncurrent parameters, \\theta' denotes the target parameters that have an update\ndelay as in ER, and \\gamma is the discount factor. This gradient\nmaximizes the expected reward while minimizing the TD error.\n\n3. We did try training only NAV without QA. However, it only worked to\nsome extent for small-size maps like 3x3 or 5x5, with a much smaller\nnumber of object classes. It was difficult to converge on the current\nsetting of 7x7 maps with 119 object classes. Thus QA is important for\nthe learning. It is not uncommon to see auxiliary tasks used for\nbetter convergence in RL problems; for example, language prediction\nand some other cost functions were used in parallel with RL:\n\n\tGrounded language learning in a simulated 3d world, Hermann et al,\n\tarxiv 1706.06551, 2017\n\nNote that QA only helps the understanding of questions. The NAV commands\nand action control still need to be learned from RL. Questions and\ncommands are disjoint sets of sentences. The only common part is some\nlocal word or phrase patterns. 
More importantly, QA offers the\nopportunity to assess the transfer ability of the model across\ntasks, denoted as ZS2 in the paper.\n\nWe believe that such a task of jointly learning language and vision is\nchallenging. Even for children, it is very likely that they learn from\na mixture of signals from the environment instead of from a single task.\n", "Nice work!\n\nI have a few questions about the technical details of your reinforcement learning experiments:\n\n- I did not find the formula for the module A of the model which is mentioned in Equation (1). Is it contained in the paper?\n- According to Section 3 you are using an actor-critic method. Meanwhile, your Appendix D says that you are using \"Experience Replay\". Can you please provide more details on what specific RL approach was used?\n- According to Appendix D you train the NAV and VQA pathways in parallel. Did you try training NAV only? Does your RL approach to grounding work in the absence of the additional signal from VQA?" ]
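A minimal sketch of the actor-critic-with-experience-replay update written out in the response above, assuming `policy(x)` returns a torch.distributions object; all names are hypothetical, and, following the stated gradient, the TD error is treated as a constant.

```python
import torch

def ac_er_update(policy, value, target_value, optimizer, batch, gamma=0.99):
    # batch: transitions (x, a, r, x_next) sampled from a small replay
    # buffer; target_value is the delayed critic copy v_{theta'}.
    x, a, r, x_next = batch
    with torch.no_grad():
        td_target = r + gamma * target_value(x_next).squeeze(-1)
    v = value(x).squeeze(-1)
    td_error = (td_target - v).detach()   # r + gamma*v_{theta'}(x') - v_theta(x)
    log_pi = policy(x).log_prob(a)        # log pi_theta(a_k | x_k)
    # One shared weighting maximizes expected reward (actor term) and
    # reduces the TD error (critic term), matching the written gradient.
    loss = -((log_pi + v) * td_error).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```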
[ 7, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1UOm4gA-", "iclr_2018_H1UOm4gA-", "iclr_2018_H1UOm4gA-", "HkQl1ypZf", "iclr_2018_H1UOm4gA-", "Skq3gyaWf", "B1JBH18gf", "r1a6A4PJz", "HJLqaM7bz", "B1KF2Z5xf", "r1a6A4PJz", "iclr_2018_H1UOm4gA-" ]
iclr_2018_Sy0GnUxCb
Emergent Complexity via Multi-Agent Competition
Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself. We also point out that such environments come with a natural curriculum, because for any skill level, an environment full of agents of this level will have the right level of difficulty. This work introduces several competitive multi-agent environments where agents compete in a 3D world with simulated physics. The trained agents learn a wide variety of complex and interesting skills, even though the environments themselves are relatively simple. The skills include behaviors such as running, blocking, ducking, tackling, fooling opponents, kicking, and defending using both arms and legs. A highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX
accepted-poster-papers
This paper received divergent reviews (7, 3, 9). The main contributions of the paper -- that multi-agent competition serves as a natural curriculum, opponent sampling strategies, and the characterization of emergent complex strategies -- are certainly of broad interest (although the first is essentially the same observation as AlphaZero, the different environment makes this of broader interest). In the discussion between R2 and the authors, I am sympathetic to (a subset of) both viewpoints. To be fair to the authors, discovery (in this case, characterization of emergent behavior) can often be difficult to quantify. R2's initial review was unnecessarily harsh and combative. The points presented by R2 as evidence of poor evaluation have clear answers from the authors. It would have been better to provide suggestions for what the authors could try, rather than raise philosophical objections that the authors cannot experimentally rebut. On the other hand, I am disappointed that the authors were given a reasonable, specific, quantifiable request by R2 -- "By the end of Section 5.2, you allude to transfer learning phenomena. It would be nice to study these transfer effects in your results with a quantitative methodology.” -- and they chose to respond with informal and qualitative assessments. Even if the results are visually obvious, why not provide a quantitative evaluation when it is specifically asked for? Overall, we recommend this paper for acceptance, and ask the authors to incorporate feedback from R2.
train
[ "BJjEeunNf", "SkFemC-lz", "By9EwRPxG", "SyCKd4clM", "rJI9veRQz", "B1XRb_6mG", "r1gtW_p7G", "S1VyeOTQG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "We respond to the three points about the paper raised by the reviewer:\n\n1) There are two questions here, one about games being zero sum and another about the plots in Figure 3 being symmetric about 50%. For the first question the answer is games are not zero sum and a draw results in a “negative” reward for both agents not 0 (considering it as a loss for both) and these rewards are very clearly defined in section 3 where the environments are introduced. Now, for the question about plots being symmetric, let's consider a simple example to understand why plot need not be symmetric. Say we are taking averages over 10 games, in the first set agent 1 wins 5 and 5 are draws (considered as losses for both), in the second set agent 1 wins 6 and and rest are draws. Then a curve representing average win-rates of agents over these sets of games will have two points for agent 1 at 50% and 60%, thus giving an increasing curve for agent 1 whereas for agent 2 the curve will be 0. It is easily seen in this synthetic example that the curve for two agents is not symmetric.\n\n2) The legend for the plots clarifies this, where the curves are labelled as kicker:no-curriculum, keeper: curriculum and kicker: curriculum, keeper:no-curriculum respectively for the two plots. Since only one of the reviewer seemed to have a problem understanding this plot we didn't add any additional clarification. We will add a line clarifying this further.\n\n3) Note that we have not highlighted this as a contribution of the paper anywhere in the abstract or introduction. This is also in section 5.2 which we have explained earlier is the qualitative evaluation part of the paper. The evaluation of transfer success (falling or not) is performed by computer code that does returns objective result of the episode. We do have quantitative evaluation for experiment reviewer asks, but do not report the numbers because the difference between using our approach and not is very stark. If other reviewers agree, we are happy to include these numbers in the final manuscript.\n", "In this paper, the authors produced quite cool videos showing the acquisition of highly complex skills, and they are happy about it. If you read the conclusion, this is the only message they put forward, and to me this is not a scientific message.\n\nA more classical summary is that the authors use PPO, a state-of-the-art deep RL method, in a context where two agents are trained to perform competitive games against each other. They reuse a very recent \"dense reward\" technique to bootstrap the agent skills, and then anneal it to zero so that the competitive rewards obtained from defeating the opponent takes the lead. They study the effect of this annealing process (considered as a curriculum) and of various strategies for sampling the opponents. 
The main outcome is the acquisition of a large variety of useful skills, just observed from videos of the competitions.\n\nThe main issue with this paper is the lack of scientific analysis of the results, together with many local issues in the presentation of these results.\nBelow, I talk directly to the authors.\n\n---------------------------------\n\nThe related work subsection is just a list of works; it should explain how the proposed work positions itself with respect to these works.\n\n\nIn Section 5.2, you are just describing \"cool\" behaviors observed from your videos.\nScience is about producing quantitative results, analyzing them and discussing them.\nI would be glad to read more science about these cool behaviors. Can you define a repertoire of such behaviors?\nDetermine how often they are discovered? Study how they are represented in the networks?\nAnything beyond \"look, that's great!\" would make the paper better...\n\nBy the end of Section 5.2, you allude to transfer learning phenomena.\nIt would be nice to study these transfer effects in your results with a quantitative methodology.\n\nSection 5.3 is more scientific, but it has serious issues.\n\nIn all subfigures in Figure 3, the performance of opponents should be symmetric around 50%. This is not the case for subfigures (b) and (c-1). Why?\nDo they correspond to a non-zero-sum game? The x-label is \"version\". Don't you mean \"number of epochs\", or something like this? Why do the last 2 images\nshare the same caption?\n\nI had a hard time understanding the message from Table 1. It really needs a line before the last row and a more explicative caption.\n\nStill in 5.3, \"These results echo\"...: can you characterize this echo? What is the relationship to this other work?\n\nAgain, \"These results shed further light\": further with respect to what? Can you be more explicit about what we learn?\n\nAlso, I find that annealing a kind of reward with respect to another is a weak form of curriculum learning. This should be further discussed.\n\nIn Section 5.4, the idea of using many opponents from many stages of learning is not new.\nIf I'm correct, the same was done in evolutionary methods to escape the \"arms race\" dead-end in prey-predator races quite a while ago (see e.g. \"Coevolving predator and prey robots: Do “arms races” arise in artificial evolution?\" Nolfi and Floreano, 1998)\n\nSection 5.5.1 would deserve a more quantitative presentation of the effect of randomization.\nActually, in Fig5: the axes are not labelled. I don't believe it shows a win-rate. So probably the caption (or the image) is wrong.\n\nIn Section 5.5.2, you \"suspect this is because...\".\nThe role of a scientific paper is to clearly establish results and explanations from solid quantitative analysis. \n\n-------------------------------------------\nMore local comments:\n\nAbstract:\n\n\"Normally, the complexity of the trained agent is closely related to the complexity of the environment.\" Here you could cite Herbert Simon (1962).\n\n\"In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself.\"\nWell, for an agent, the other agent(s) are part of its environment, aren't they? So I don't like this perspective that the environment itself is \"simple\".\n\nIntro:\n\n\"RL is exciting because good RL exists.\" I don't believe this is a strong argument. 
There are many good things that exist which are not exciting.\n\n\"In general, training an agent to perform a highly complex task requires a highly complex environment, and these can be difficult to create.\" Well, the standard perspective is the other way round: in general, you face a complex problem, then you need to design a complex agent to solve it, and this is difficult. \n\n\"This happens because no matter how weak or strong an agent is, an environment populated with other agents of comparable strength provides the right challenge to the agent, facilitating maximally rapid learning and avoiding getting stuck.\" This is not always true. The literature is full of examples where two-player competitions end up with oscillations between two solutions rather than ever-increasing skill performance. See the prey-predator literature pointed to above.\n\n\"in the domain of continuous control, where balance, dexterity, and manipulation are the key skills.\" In robotics, dexterity and manipulation usually refer to using the robot's hand(s), a capability which is not shown here.\n\nIn preliminaries, notation: what you describe corresponds to the framework of Dec-POMDPs; you should position yourself with respect to this framework (see e.g. Memory-Bounded Dynamic Programming for DEC-POMDPs. S Seuken, S Zilberstein)\n\nIn the PPO description: Let l_t(\\theta) ... denote the likelihood ratio: of what?\n\np5:\nwould train on the dense reward for about 10-15% of the training epochs. So how much is \\alpha_t? How did you tune it? Was it hard?\n\np6:\n\nyou give to the agent the mass: does the mass change over time???\n\nIn observations: Are both agents given different observations? Could you specify which is given what?\n\nIn Algorithm parameters: why do you have to anneal longer for kick-and-defend? What is the underlying phenomenon?\n\nIn Section 5, the text mentions Fig5 before Fig4.\n\n-------------------------------------------------\nTypos:\n\np4:\nresearch(Andrychowicz => missing space\nstraight forward => straightforward\n\np5:\nagent like humanoid(s)\nfrom exi(s)ting work\n\np6:\neq. 1 => Eq. (1) (you should use \\eqref{})\nIn section 4.1 => In Section 4.1 (same p7 for Section 4.2)\n\n\"One question that arises is the extent to which the outcome of learning is affected by this exploration reward and to explore the benefit of this exploration reward. As already argued, we found the exploration reward to be crucial for learning as otherwise the agents are unable to explore the sparse competition reward.\" => One question that arises is the extent to which the outcome of learning is affected by this exploration reward and to explore its benefit. As already argued, we found it to be crucial for learning as otherwise the agents are unable to explore the sparse competition reward.\n\np8:\nin a local minima => minimum\n\np9:\nin references, you have Jakob Foerster and Jakob N Foerster => try to be more consistent.\n\np10, In Laetitia Matignon et al. ... markov => Markov\n\np11, I would rename C_{alive} as C_{standing}", "Understanding how and why complex motion skills emerge is a complex and interesting problem.\nThe method and results of this paper demonstrate some good progress on this problem, and focus on\nthe key point that competition introduces a natural learning curriculum.\nMulti-agent competitive learning has seen some previous work in settings involving physics-based skills\nor actual robots. 
However, the results in this paper are compelling in taking this another good step forward.\nOverall the paper is clearly written and I believe that it will have impact.\n\nlist of pros & cons\n+ informative and unique experiments that demonstrate emergent complexity coming from the natural curriculum\n provided by competitive play, for physics-based settings\n+ likely to be of broad interest\n- likely large compute resources needed to replicate or build on the results\n- paper is not anonymous to this reviewer, given the advance publicity for this work when it was released\n==> overall this paper will have impact and advances the state of the art, particularly w.r.t. curricula\n In many ways, it is what one might expect. But executing on the idea is very much non-trivial.\n\nother comments\n\nCan you comment on the difficulty of designing the \"games\" themselves?\nIt is often difficult to decide a priori when a game is balanced; game designers of any kind\nspend significant time on this. Perhaps it is easier for some of the types of games investigated in\nthis paper, but if you did have any issues with games becoming unbalanced, that would be worth commenting on.\nGame design is also the next level of learning in many ways. :-)\n\nThe opponent sampling strategy is one of the key results of the paper.\nIt could be brought to the fore earlier, i.e., in the abstract.\n\nHow much do the exploration rewards matter?\nIf two classes of agents are bootstrapped with different flavours of exploration rewards, how much would it matter?\n\nIt would be generally interesting to describe when during learning the various \"strategies\" emerged,\nand in what order.\n\nAdding sensory delays might enable richer decoy strategies.\n\nThe paper could comment on the further additional complexity that might result from situations\nthat allow for collaboration as well as competition. 
(ok, I now see that this is mentioned in the conclusions)\n\nThe RoboCup tournaments for robot soccer (real and simulated) have for a long time provided\na path to growing skills and complexity, although under different constraints, and perhaps less interesting\nin terms of one-on-one movement skills.\n\nSection 2, \"Notation\"\nwhy are the actions described as being discrete here, when the paper uses continuous actions?\nAlso, \"$\\pi_{\\theta}$ can be Gaussian\": better to say that it *is* Gaussian in this paper.\n\n\"lead to different algorithm*s*\"\n\nAre there torque limits, and if so, what are they?\n\nsec 4: \"We do multiple rollouts for each agent *pair*\" (?)\n\n\"Such rewards have been previously researched for simple tasks like walking forward and standing up\"\nGiven the rather low visual quality and overly-powerful humanoids of many of the published \"solutions\", \nperhaps \"simple\" is the wrong qualifier.\n\nFigure 2: curve legend?\n\n\"exiting work\" (sic)\n\n4.2 Opponent sampling:\n\"simultaneously training\" should add \"in opponent pairs\" (?)\n\n5.1 \"We use both MLP and LSTM\"\nshould be \"We compare MLP and LSTM ...\" (?)\n\nFor \"kick and defend\" and \"you shall not pass\", are there separate attack and defend policies?\nIt seems that these are unique in that the goals are not symmetric, whereas for the other tasks they are.\nWould be worthwhile to comment on this aspect.\n\nepisodic length T, eqn (1)\nIt's not clear at this point in the paper if T is constant or not.\n\nObservations: \"we also give the centre-of-mass based inertia *tensor*\" (?)\n\n\"distance from the edge of the ring\"\nHow is this defined?\n\n\"none of the agents observe the complete global state\"\nDoes this really make much of a difference? Most of the state seems visible.\n\n\"But these movement strategies\" -> \"These movement strategies ...\"\n\nsec 5.4 suggest to use $\\mathrm{Uniform}(...)$\n\n\"looses by default\" (sic)\n", "This paper demonstrates that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself, and that such environments come with a natural curriculum, by introducing several multi-agent tasks with competing goals in a 3D world with simulated physics. It utilizes a decentralized training approach and uses a distributed implementation of PPO for very large-scale multi-agent training. This paper addresses the challenges in applying distributed PPO to train multiple competitive agents, including the problem of exploration with sparse rewards, by using full roll-outs and a dense exploration reward which is gradually annealed to zero in favor of the sparse competition reward. It makes training more stable by selecting random old parameters for the opponent. \n \nAlthough the technical contributions seem to be not quite significant, this paper is well written and introduces a few new domains which are useful for studying problems in multi-agent reinforcement learning. The paper also makes clear its connections to, and distinctions from, much existing work. 
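A minimal sketch of the annealed reward mixture this review describes; the linear schedule and the names are assumptions, and the paper may anneal differently.

```python
def shaped_reward(dense_r, compete_r, step, anneal_steps):
    # Blend the dense exploration reward out linearly over the first
    # anneal_steps updates (roughly 10-15% of training, per the
    # discussion elsewhere in this thread), leaving only the sparse
    # competition reward.
    alpha = max(0.0, 1.0 - step / anneal_steps)
    return alpha * dense_r + compete_r
```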
\n\nMinor issues:\n\nE[Loss] in table 1 is undefined.\n\nIn the notation section, the observation model is missing, and the policy is restricted to be reactive.\n \nUniform (v, \\delta v) -> Uniform (\\delta v, v)\n", "There is a strong disagreement among reviewers (top paper according to one, clear rejection according to me), so I believe some discussion is necessary.\n\nI completely agree that the videos are awesome, but in itself a nice video does not make a strong scientific paper. Engineering (and most of the time a good deal of science) is about producing new interesting phenomena, but science is also about rigorously analyzing these results (rigorously generally means in a quantitative way) and extracting scientific messages from these analyses.\n\nQuoting this excellent blog post \nhttp://www.inference.vc/my-thoughts-on-alchemy/\n\"It's Okay to use non-rigorous methods, but... it's not okay to use non-rigorous evaluation\".\n\nLet me just take 3 examples to show you that this paper lacks rigor everywhere (the first two are admittedly minor points, but they feed my general feeling about this paper. Most other points are present in my review below, and the authors did not make the effort to address all of them):\n\nTaken from their reply to my review:\n\n1) \"“In all subfigures in Figure 3, the performance of opponents should be symmetric around 50%. This is not the case for subfigures (b) and (c-1). Why? Do they correspond to a non-zero-sum game?”\nReply:\nThe games are not zero-sum; there is some chance of a draw, as described in Section 3.\"\n\nWell, in case of a draw, both players should get 0, so this cannot be a correct explanation of why the game is not zero-sum. If I cannot trust the answer, can I trust the results themselves?\n\n2) 4. \"“Why do the last 2 images share the same caption?”\nReply:\nBecause the kick-and-defend game is asymmetric, there are two plots -- one where the keeper is trained with the curriculum and another where the kicker is trained with the curriculum. \"\n\nFine. I checked in the new version. The only sentence about Fig3 is \"We plot the average win-rates over 800 games at various intervals during training in Fig. 3, for the sumo and kick-and-defend environments.\" Not a word about the above fact. The reader has to guess that. By the way, why don't we get any result for the \"run to goal\" case? What are the \"various intervals\"? Etc.\n\n3) This one is much more serious, about transfer learning. Any rigorous transfer learning paper will precisely define a source domain and a target domain and will measure (quantitatively) the performance in the target domain with and without training first in the source domain. Here we just get \"The agent trained in a non-competitive setting falls over immediately (with episodes lasting less than 10 steps on average), whereas the competitive agent is able to withstand strong forces for hundreds of steps. This difference (in terms of length of episode until the agent falls over) can be quantified easily over many random episodes; we only included the qualitative results as the difference is huge and easily visible in the videos.\"\nOK, this is awesome. But to me, the evaluation methodology cannot just be \"visual\". This would be perfectly okay for the \"Scientific American\" magazine, but this is not OK for a strong scientific conference. 
The evaluation itself has to be performed by the computer, because this makes it necessary to rigorously define all the conditions of this evaluation (define fall and stand, define the initial posture, define the number of steps you evaluate, define the wind variations, define everything). The computer will also be able to perform a statistical analysis, which is completely lacking here: did this phenomenon appear every time? If not, what are the chances? For instance, given the lack of quantitative analysis, one cannot determine whether a future method will perform better or worse. \n\nTo me, this paper is representative of what Ali Rahimi's talk was about (the talk was given after I wrote my review). So sorry guys, but though I like the videos and I must admit that my review is a little too harsh because the paper irritated me, I'm still considering that this paper should be rejected.\n\n\n\n\n", "We thank the reviewer for carefully reading the paper and their positive feedback. We have taken care of the appropriate minor changes in the updated draft. ", "We thank the reviewer for carefully reading the paper and their encouraging feedback. We have taken care of the appropriate minor changes in the updated draft. We answer specific questions below.\n\n1. \"Can you comment on the difficulty of designing the \"games\" themselves?\"\nReply:\nWe picked a few simple competitive games for these results. It is important to appropriately set termination conditions for the games. We did observe some difficulty with games becoming unbalanced; for example, in kick-and-defend it was important to use a longer horizon for annealing the exploration reward, as the defender takes a longer time to learn, and adding termination penalties for moving beyond the goal area, as well as additional rewards for defending and not falling over on termination, led to more balanced win-rates. In future work we will explore more complex game designs where the agents can make use of environmental resources to compete or have to overcome environmental obstacles in addition to competing. \n\n2. \"How much do the exploration rewards matter?\"\nReply:\nWe analyzed the extreme cases where agents have or do not have an exploration reward, as well as the case when the exploration reward is never annealed. The summary is to use an exploration reward but anneal it. We also experimented with additional reward terms for interaction with the opponent in the Sumo environment initially, but didn’t observe any significant benefits and chose the simplest form of exploration rewards. \n\n3. \"Are there torque limits, and if so, what are they?\"\nReply:\nWe used the default limits in the gym environments for the ant and humanoid bodies, which are bounded to [-0.4, 0.4].\n\n4. \"For \"kick and defend\" and \"you shall not pass\", are there separate attack and defend policies? It seems that these are unique in that the goals are not symmetric, whereas for the other tasks they are.\"\nReply:\nYes, that is correct; we have noted this in section 5.1.\n\n5. \"distance from the edge of the ring. How is this defined?\"\nReply:\nIt is the radial distance of the agent from the edge. So if R is the ring radius and r is the agent's distance from the center, then we give (R-r) as input.\n", "Thank you for taking the time to review our work. We have taken care of the typos and appropriate minor changes in the updated draft. We answer specific questions below.\n\n1. “In Section 5.2, you are just describing \"cool\" behaviors observed from your videos ..... 
Study how they are represented in the networks? Anything beyond \"look, that's great!\" would make the paper better…”\nReply:\nThere are four main contributions put forth in this paper: (1) identifying that competitive multi-agent interaction can serve as a proxy for the complexity of the environment and allows for a natural curriculum for training RL agents; (2) developing 4 new simulated environments which can serve as a test-bed for future research on training competitive agents (Section 3); (3) developing opponent-sampling strategies and a simple (yet effective) strategy for dealing with sparse rewards (Section 4) -- both of which lead to effective training in the competitive environments (as evaluated through quantitative ablation studies in sections 5.3 and 5.4); and (4) demonstrating compelling results on the four competitive 3D environments with two different agent morphologies -- showing remarkable dexterity in the agents without explicit supervision.\nWe believe qualitative evaluation through observing agents’ behavior is an important part of evaluating the success of the proposed contributions, and in section 5.2 we qualitatively analyze many random episodes. Section 4 discusses the parallel implementation of PPO (a recent policy optimization algorithm, see section 2), opponent sampling, and the exploration curriculum, which are crucial technical ideas making the results possible. Sections 5.3, 5.4, and 5.5 contain rigorous quantitative analysis of the main ideas which make these results possible.\nThe results might be what one may expect, but executing on the idea is very much non-trivial -- as also noted by other reviewers. This paper is a demonstration of the potential of competitive multi-agent training, and we agree there is a lot of potential for more future work in this area. \n\n2. “By the end of Section 5.2, you allude to transfer learning phenomena. It would be nice to study these transfer effects in your results with a quantitative methodology.”\nReply:\nWe studied a particular case of transfer in the sumo environment, where an agent trained in a competitive setting demonstrated robust standing behavior in a single-agent setting without any modifications or fine-tuning of the policies. The agent trained in a non-competitive setting falls over immediately (with episodes lasting less than 10 steps on average), whereas the competitive agent is able to withstand strong forces for hundreds of steps. This difference (in terms of the length of the episode till the agent falls over) can be quantified easily over many random episodes; we only included the qualitative results as the difference is huge and easily visible in the videos.\n\n3. “In all subfigures in Figure 3, the performance of opponents should be symmetric around 50%. This is not the case for subfigures (b) and (c-1). Why? Do they correspond to non-zero sum game?”\nReply:\nThe games are not zero-sum; there is some chance of a draw, as described in Section 3.\n\n4. “Why do the last 2 images share the same caption?”\nReply:\nBecause the kick-and-defend game is asymmetric, there are two plots -- one where the keeper is trained with the curriculum and another where the kicker is trained with the curriculum. \n\n5. “I had a hard time understanding the message from Table 1. It really needs a line before the last row and a more explicative caption.”\nReply:\nAdded. It is also described in detail in Section 5.4.\n\n6. “Also, I find that annealing a kind of reward with respect to another is a weak form of curriculum learning. 
This should be further discussed.”\nReply:\nThis is discussed in section 4.1.\n\n7. “Actually, in Fig5: the axes are not labelled. I don't believe it shows a win-rate. So probably the caption (or the image) is wrong.”\nReply:\nThe y-label is the fractional win-rate (not a percentage); we have clarified this.\n\n8. “would train on the dense reward for about 10-15% of the training epochs. So how much is \\alpha_t? How did you tune it? Was it hard?”\nReply:\nNote that \\alpha_t is an annealing factor, so its value starts from 1 and is annealed to 0 over some number of epochs. 10-15% of the training epochs is the typical window for annealing \\alpha_t, and the exact values are given in the experiments in section 5.1. There is no direct tuning of \\alpha_t required; instead, the horizon for annealing was tuned over {250, 500, 750, 1000} epochs and the value giving the highest win-rate was selected. More quantitative analysis of annealing is in section 5.3.\n" ]
[ -1, 3, 9, 7, -1, -1, -1, -1 ]
[ -1, 3, 5, 4, -1, -1, -1, -1 ]
[ "rJI9veRQz", "iclr_2018_Sy0GnUxCb", "iclr_2018_Sy0GnUxCb", "iclr_2018_Sy0GnUxCb", "iclr_2018_Sy0GnUxCb", "SyCKd4clM", "By9EwRPxG", "SkFemC-lz" ]
iclr_2018_B1mvVm-C-
Universal Agent for Disentangling Environments and Tasks
Recent state-of-the-art reinforcement learning algorithms are trained under the goal of excelling in one specific task. Hence, both environment and task specific knowledge are entangled into one framework. However, there are often scenarios where the environment (e.g. the physical world) is fixed while only the target task changes. Hence, borrowing the idea from hierarchical reinforcement learning, we propose a framework that disentangles task and environment specific knowledge by separating them into two units. The environment-specific unit handles how to move from one state to the target state; and the task-specific unit plans for the next target state given a specific task. The extensive results in simulators indicate that our method can efficiently separate and learn two independent units, and also adapt to a new task more efficiently than the state-of-the-art methods.
accepted-poster-papers
All reviewers recommend accepting this paper, and this AC agrees.
train
[ "rkHr_WFlz", "rkx8qW9ez", "H1wc2j2lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose to decompose reinforcement learning into a PATH function that can learn how to solve reusable sub-goals an agent might have in a specific environment and a GOAL function that chooses subgoals in order to solve a specific task in the environment using path segments. So I guess it can be thought of as a kind of hierarchical RL.\nThe exposition of the model architecture could use some additional detail to clarify some steps and possibly fix some minor errors (see below). I would prefer less material but better explained. I had to read a lot of sections more than once and use details across sections to fill in gaps. The paper could be more focused around a single scientific question: does the PATH function as formulated help?\n\nThe authors do provide a novel formulation and demonstrate the gains on a variety of concrete problems taken form the literature. I also like that they try to design experiments to understand the role of specific parts of the proposed architecture.\n\nThe graphs are WAY TOO SMALL to read. Figure #s are missing off several figures.\n\n\nMODEL & ARCHITECTURE\n\nThe PATH function given a current state s and a goal state s', returns a distribution over the best first action to take to get to the goal P(A). ( If the goal state s’ was just the next state, then this would just be a dynamics model and this would be model-based learning? So I assume there are multiple steps between s and s’?). \n\nAt the beginning of section 2.1, I think the authors suggest the PATH function could be pre-trained independently by sampling a random state in the state space to be the initial state and a second random state to be the goal state and then using an RL algorithm to find a path. \n\nPresumably, once one had found a path ( (s, a0), (s1, a1), (s2, a2), …, (sn-1,an-1), s’ ) one could then train the PATH policy on the triple (s, s’, a0) ? This seems like a pretty intense process: solving some representative subset of all possible RL problems for a particular environment … Maybe one choses s and s’ so they are not too far away from each other (the experimental section later confirms this distance is >= 7. Maybe bring this detail forward)?\n\nThe expression Trans’( (s,s’), a) = (Trans(s,a), s’) was confusing. I think the idea here is that the expression \nTrans’( (s,s’) , a ) represents the n-step transition function and ‘a' represents the first action?\n\nThe second step is to train the goal function for a specific task. So I gather our policy takes the form of a composed function and the chain rule gives close to their expression in 2.2\n\n PATH( s, Tau( s, th^g ), a ; th^p )\n\n d / { d th^g } PATH( s, Tau( s, th^g ), a ; th^p ) \n\n = {d / d {s’ } PATH } ( s, Tau( s, th^g ), a ) d / {d th^g} Tau( s, th^g) \n\nWhat is confusing is that they define\n\n A( s, a, th^p, th^g, th^v ) = sum_i gamma^i r_{t+1} + gamma^k V( s_{t+k} ; th^v ) - V( s_t ; th^v )\n\nThe left side contains th^p and th^g, but the right side does not. Should these parameters be take out of the n-step advantage function A?\n\nThe second alternative for training the goal function tau seems confusing. I get that tau is going to be constrained by whatever representation PATH function was trained on and that this representation might affect the overall performance - performance. I didn’t get the contrast with method one. How do we treat the output of Tau as an action? Are you thinking of the gradient coming back through PATH as a reward signal? 
More detail here would be helpful.\n\n\nEXPERIMENTS:\n\nLavaworld: the authors show that pretraining the PATH function on longer 7-11 step policies leads to better performance\nwhen given a specific Lava world problem to solve. So the PATH function helps and longer paths are better. This seems reasonable. What is the upper bound on the size of PATH lengths you can train? \n\nReachability: the authors show that different ways of abstracting the state s into a vector encoding affect the performance of the system. From a scientific point of view, this seems orthogonal to the point of the paper, though it is relevant if you were trying to build a system. \n\nTaxi: the authors train the PATH function on reachability and then show that it works for TAXI. This isn’t too surprising. Both picking up the passenger (reachability) and dropping them off somewhere are essentially the same task: moving to a point. It is interesting that the Task function is able to encode the higher level structure of the TAXI problem’s two phases.\n\nAnother task you could try is to learn to perform the same task in two different environments. Perhaps the TAXI problem, but where you have two different taxis that require different actions in order to execute the same path in state space. This would require a phi(s) function that is trained in a way that doesn’t depend on the action a.\n\nATARI 2600 games: I am not sure what state restoration is. Is this where you artificially return an agent to a state that would normally be hard to reach? The authors show that UA results in gains on several of the games.\n\nThe authors also demonstrate that multiple agents with different policies can be used to collect training examples for the PATH function that improve its utility over training examples collected by a single agent policy.\n\nRELATED WORK:\n\nGood contrast to hierarchical learning: we don’t have switching regimes here between high-level options.\n\nI don’t understand why the authors say the PATH function can be viewed as an inverse? Oh - now I get it.\nBecause it takes an extended n-step transition and generates an action. \n\n\n\n\n\n", "In this paper a modular architecture is proposed with the aim of separating environment-specific (dynamics) knowledge and task-specific knowledge into different modules. Several complex but discrete control tasks, with relatively small action spaces, are cast as continuous control problems, and the task-specific module is trained to produce non-linear representations of goals in the domain of transformed high-dimensional inputs.\n\nPros\n- “Monolithic” policy representations can make it difficult to reuse or jointly represent policies for related tasks in the same environment; a modular architecture is hence desirable.\n- An extensive study of methods for dimensionality reduction is performed for a task with sparse rewards.\n- Despite all the suggestions and questions below, the method is clearly on par with standard A3C across a wide range of tasks, which makes it an attractive architecture to explore further.\n\nCons\n- In general, learning a Path function could very well turn out to be no simpler than learning a good policy for the task at hand. I have 2 main concerns:\nThe data required for learning a good Path function may need to include states similar to those visited by some optimal policy. 
However, there is no such guarantee for random walks; indeed, for most Atari games which have several levels, random policies don’t reach beyond the first level, so I don’t see how a Path function would be informative beyond the ‘portions’ of the state space which were visited by the policies used to collect data.\nHence, several policies which are better than random are likely to be required for sampling this data, in general. In my mind this creates a chicken-and-egg issue: how to get the data, to learn the right Path function which does not make it impossible to still reach optimal performance on the task at hand? How can we ensure that some optimal policy can still be represented using appropriate Goal function outputs? I don’t see this as a given in the current formulation.\n- Although the method is atypical compared to standard HRL approaches, the same pitfalls may apply, especially that of ‘option collapse’: given a fixed Path function, the Goal function need only figure out which goal state outputs almost always lead to the same output action in the original action space, irrespective of the current state input phi(s), and hence bypass the Path function altogether; then, the role of phi(s) could be taken by tau(s), and we would end up with the original RL problem but in an arguably noisier (and continuous) action space. I recommend comparing the Jacobian w.r.t. the phi(s) and tau(s) inputs to the Path function using saliency maps [1, 2]; alternatively, evaluating final policies with out-of-date input states s to phi, and the correct tau(s) inputs to the Path function, should degrade performance severely if it is playing the assumed role. The same goes for using a running average of phi(s) and the correct tau(s) in final policies.\n- The ability to use state restoration for Path function learning actually introduces a strong extra assumption compared to standard A3C, which does not technically require it. For cheap emulators and fully deterministic games (Atari) this assumption holds, but in general restoring expensive, stochastic environments to some state is hard (e.g. robot arms playing ping-pong, ball at given x, y, z above the table, with given velocity vector).\n- If reported results are single runs, please replace with averages over several runs, e.g. a few random seeds. Given the variance in deep RL training curves, it is hard to make definitive claims from single runs. If curves are already averages over several experiment repeats, some form of error bars or variance plot would also be informative.\n- How much data was actually used to learn the Path function in each case? If the amount is significant compared to task-specific training, then UA/A3C-L curves should start later than standard A3C curves, by that amount of data.\n\n\nReferences\n[1] Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.\n[2] Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., and de Freitas, N. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.\n", "Thank you for the submission. It was an interesting read. Here are a few comments: \n\nI think when talking about modelling the dynamics of the world, it is natural to discuss world models and model-based RL, which also try to explicitly take advantage of the separation between the dynamics of the world and the reward scheme. Granted, most world models also try to predict the reward. 
I’m not sure there is something specific I’m proposing here; I do understand the value of the formulation given in the work, I just find it strange that model-based RL is not mentioned at all in the paper. \n\nReading the paper, I think it should be much clearer how the embedding is computed for Atari, and how this choice was made. Going through the paper I’m not sure I know how this latent space is constructed. This, however, should be quite important. The goal function tries to predict states in this latent space. So the simpler the structure of this latent space, the easier it should be to train a goal function, and hence quickly adapt to the current reward scheme. \n\nIn complex environments learning the PATH network is far from easy. I.e., random walks will not expose the model to most states of the environment (and dynamics). Curiosity-driven RL can be quite inefficient at exploring the space. If the focus is transfer, one could argue that another way of training the PATH net could be by jointly training the PATH net and goal net, with the intent of then transferring to another reward scheme.\n\nA3C is known to have quite high variance. I think there are a lot of little details that don’t seem that explicit to me. How many seeds are run for each curve (are the results an average over multiple seeds)? What hyper-parameters are used? What is the variance between the seeds? I feel that while the proposed solution is very intuitive, and probably works as described, the paper does not do a great job of properly comparing with baselines and making sure the results are solid. In particular, looking at Riverraid-new, is the advantage you have there significant? How does the agent do on the original task? \n\nThe plots could also use a bit of help. Lines should be thicker. Even when zooming, distinguishing between colors is not easy. Because there are more than two lines in some plots, it can also hurt people that can’t distinguish colors easily. \n\n" ]
[ 6, 7, 6 ]
[ 3, 4, 3 ]
[ "iclr_2018_B1mvVm-C-", "iclr_2018_B1mvVm-C-", "iclr_2018_B1mvVm-C-" ]
iclr_2018_SJa9iHgAZ
Residual Connections Encourage Iterative Inference
Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of loss during the feedforward phase. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features. Finally, we observe that sharing residual layers naively leads to representation explosion and hurts generalization performance, and show that simple existing strategies can help alleviate this problem.
accepted-poster-papers
The paper presents an interesting view of ResNets and the findings should be of broad interest. R1 did not update their score/review, but I am satisfied with the author response, and recommend this paper for acceptance.
train
[ "H1EPgaweG", "HkeOU0qgf", "HJyUi3sez", "SkXbC02XM", "BJyaPFbff", "SyGjDt-fM", "HkFKPt-Gf", "HkzBwFWzM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper investigates residual networks (ResNets) in an empirical way. The authors argue that shallow layers are responsible for learning important feature representations, while deeper layers focus on refining the features. They validate this point by performing a series of lesion study on ResNet.\n\nOverall, the experiments and discussions in the first part of Section 4.2 and 4.3 appears to be interesting, while other observations are not quite surprising. I have two questions:\n1)\tWhat is the different between the layer-dropping experiment in sec 4.2 and that in [Veit, et al, Residual networks are exponential ensembles of relatively shallow networks] ? What is the main point here? \n2)\tI don't quite understand the first paragraph of sec 4.5. Could you elaborate more on this?\n", "The author unveils some properties of the resnets, for example, the cosine loss and l2 ratio of the layers. \nI think the author should place more focus to study \"real\" iterative inference with shared parameters rather than analyzing original resnets.\n\nIn resnet without sharing parameters, it is quite ambiguous to say whether it is doing representation learning or iterative refinement.\n\n1. The cosine loss is not meaningful in the sense that the classification layer is trained on the output of the last residual block and fixed. Moving the classification layer to early layers will definitely result in accuracy loss. Even in non-residual network, we can always say that the vector h_{i+1} - h_i is refining h_i towards the negative gradient direction. The motivation of iterative inference would be to generate a feature that is easier to classify rather than to match the current fixed classifier. Thus the final classification layer should be retrained for every addition or removal of residual blocks.\n\n2. The l2 ratio. The l2 ratio is small for higher residual layers, I'm not sure how much this phenomenon can prove that resnet is actually doing iterative inference.\n\n3. In section 4.4 it is shown that unrolling the layers can improve the performance of the network. However, the same can be achieved by adding more unshared layers. I think the study should focus more on whether shared or unshared is better.\n\n4. Section 4.5 is a bit weak in experiments, my conclusion is that currently it is still limited by batch normalization and optimization, the evidence is still not strong enough to show that iterative inference is advantageous / disadvantageous.\n\nThe the above said, I think the more important thing is how we can benefit from iterative inference interpretation, which is relatively weak in this paper.", "\nThis paper shows that residual networks can be viewed as doing a sort of iterative inference, where each layer is trained to use its “nonlinear part” to push its values in the negative direction of the loss gradient. The authors demonstrate this using a Taylor expansion of a standard residual block first, then follow up with several experiments that corroborate this interpretation of iterative inference. Overall the strength of this paper is that the main insight is quite interesting — though many people have informally thought of residual networks as having this interpretation — this paper is the first one to my knowledge to explain the intuition in a more precise way. \n\nSome weaknesses of the paper on the other hand — some of the parts of the paper (e.g. on weight sharing) are only somewhat related to the main topic of the paper. 
In fact, the authors moved the connection to SGD to the appendix, which I thought would be *more* related. Additionally, parts of the paper are not as clearly written as they could be and lack rigor. This includes the mathematical derivation of the main insight — some of the steps should be spelled out more explicitly. The explanation that follows is also handwavey despite claims of being formal. \n\nSome other lower level thoughts:\n* Regarding weight sharing for residual layers, I don’t understand why we can draw the conclusion that the initial gradient explosion is responsible for the lower generalization capability of the model with shared weights. Are there other papers in the literature that have shown this connection?\n* The name “cosine loss” suggests that this function is actually being minimized by a training procedure, but it is just a value that is being plotted… perhaps just call it the cosine?\n* I recommend that the authors also check out Figurnov et al CVPR 2017 (\"Spatially Adaptive Computation Time for Residual Networks\"), which proposes an “adaptive” version of ResNet based on the intuition of adaptive inference.\n* The plots in the later parts of the paper are quite small and hard to read. They are also spaced together too tightly (horizontally), making it difficult to immediately see what each plot is supposed to represent via the y-axis label.\n* Finally, the citations need to be fixed (use \\citep{} instead of \\cite{})\n\n", "To address the reviewers' comments we introduced the following changes to the submission:\n* We clarified the derivation of the main result, as suggested by Reviewer 1\n* Clarified 4.3 and 4.5 to address the reviewers' various comments, including a clearer description of overfitting when sharing Resnet blocks, a rewording of the first paragraph of 4.5, a clarified definition of borderline examples, and a mention of additional results in the Appendix\n* Clarified 4.4, underlining that the goal was to show a Resnet can generalize to more steps (on train), and mentioned additional results in the Appendix\n* Clarified the conclusions\n* To address Reviewer 2's comment on iterative inference in a shared Resnet, we added two sections in the Appendix reporting metrics (cosine loss, accuracy, l2 ratio) on a shared Resnet, and on the Resnet unrolled to more steps.\n* Fixed some typos, and added some minor clarifications of other experiments\n", "We thank the reviewer for their remarks and positive assessment. \n\nIn their first point, the reviewer asks what the difference is between our Section 4.2 and Veit et al. We cite Veit et al. and extend their observations. Our novel observation is that blocks in a residual network have different functions, and only a subset of blocks focuses on iterative inference. More specifically, some blocks have a large l2 ratio (the ratio of output to input norms for a given block) and cannot be dropped without a drastic effect on performance. This allows us to specify concretely in what sense a residual network performs iterative inference. We made edits in the text to clarify this.\n\nIn their second point, the reviewer requests clarification on the first paragraph of 4.5. The first paragraph of 4.5 reads: “Given the iterative inference view, we now study the effects of sharing residual blocks. 
Contrary to (Liao & Poggio, 2016) we observe that naively sharing the higher (iterative refinement) residual blocks of a Resnets in general leads to overfitting (especially for deeper Resnets).”.First, we say that our results suggest that residual network perform iterative inference, and that top blocks are performing similar function (feature refinement), there it is plausible that top blocks in residual network should be shareable. However, during this investigation, we report a surprising observation that has not been made before (Liao & Poggio tested relatively small ResNets) that when we share layers of residual network, it leads to drastic overfitting. In Fig.8 we compare Resnet-110-shared and Resnet-32, where Resnet-110-shared has same number of parameters as Resnet-32. We observe strong overfitting (train accuracy remains the same, while validation accuracy is much lower for Resnet110). We made edits in text to clarify this first paragraph.\n", "We thank reviewer for his positive assessment and useful feedbacks.\n\nIn his first point reviewer remarks that sharing residual network is less interesting than studying SGD connection. We agree with reviewer 1 and will move back the connection to SGD in the main text in camera ready version. We do believe that studying weight sharing in residual network is important as well, because it implements ‘true’ iterative inference (i.e. where same function is applied). \n\nNext, reviewer suggests we should improve writing of some parts of paper including mathematical derivation. We address this remark in revision of the paper, by making the derivation more explicit (step from (3) to (4)).\n\nIn the next point reviewer asks if interpretation in 4.5 that gradient explosion leads to overfitting is justified. We would like to clarify we observe both underfitting (worse training accuracy of the shared model compared to the unshared model) and overfitting (given similar training accuracy, the Resnet with shared parameter has significantly lower validation performances). We also observe that the activations explode during the forward propagation of the Resnet with shared parameters due to the the repeated application of the same layer. By better controlling the activation norm increase using unshared batch normalization, we show that we can reduce the performance gap between the Resnet with shared parameters and the Resnet with unshared parameters, for both training accuracy and validation accuracy. We have updated the text in section 4.5 to clarify this point.\n\nWe thank for reference. We introduced suggested edits in revision. We agree that “cosine loss” name can be misleading. For camera ready we will change it.\n", "We thank reviewer 2 for his thoughtful review. To address the comments we added new plots to paper (2 sections in Appendix with references in main text), and revised text to clarify mentioned points.\n\nWe would like to open by clarifying our definition of iterative inference. Iterative inference (as specified in “Formalizing iterative inference section”) is defined in our paper as descending the loss surface of the network that is specified by current set of parameters, rather than finding generally better features. Our contribution is showing that residual network maximizes the alignment of block output with steepest direction in the aforementioned loss surface, specified by the network current parameters. The usage of fixed classifier is therefore justifiedby our Taylor expansion. 
We made these points clearer in the revised version, and also added a more detailed math derivation.\n\nThe central objection of the reviewer is our focus on iterative inference without shared weights. Our objective is to study a form of iterative inference (as specified above) implemented by regular residual networks. However, we devoted a large section of the paper to shared residual networks. We fully agree there is room for doubt about whether some of the results transfer to a true iterative (shared) residual network, and we thank the reviewer for this remark. To address this, we plot the cosine loss, l2 ratio, and intermediate accuracy on shared residual networks, on which we observe very similar trends to the unshared residual network. We posted them anonymously here: https://ibb.co/hd1F9m (red is close to output, dotted is on validation set), and also included them in the revision in the appendix.\n\nNow, we respond in detail to each point in turn.\n\nIn their first point, the reviewer raises an objection to using a fixed classifier. We would like to stress that our definition of iterative inference is descending down the network loss surface, here defined by a fixed classifier. We made this point clearer in the revision. We also include a plot of all the metrics for “true” iterative inference in a shared residual network, https://ibb.co/hd1F9m (red is close to output, dotted is on validation set), also added to the Appendix.\n\nThe next objection is that, trivially, a non-residual network could be seen as revising h_{t+1} towards good accuracy, and therefore the cosine loss is not a meaningful metric. We are in agreement that in a non-residual network we do expect an increase in accuracy. However, we do not expect from a non-residual network: \n(i) the layer output to be aligned with the gradient of the loss with respect to the hidden state, performing small iterative refinements (the layer output should always increase accuracy, as stated by the reviewer, but this is very different from small iterative steps that are aligned with dL/dh);\n(ii) a layer to generalize when applied for more steps than it was trained on (Sec 4.4), on both the train and test distributions;\n(iii) the final layers to focus only on borderline examples (as specified in the text, examples that are close to being either correctly classified or misclassified).\nThese are non-trivial things we report. To further support generalization to more steps, we revised the text to highlight the maintained negative cosine loss and the reduction of the loss on the training set. We plot the evolution of the cosine loss for unrolled steps in the appendix: https://ibb.co/f64YC6. To further support the claim about iterative refinement, we plot the l2 ratio for the experiments in 4.3 and 4.4 (also included in https://ibb.co/f64YC6). ", "In their second point, the reviewer asks why a small l2 ratio in higher layers can be seen as proof of iterative inference. A small l2 ratio suggests that the 1st-order Taylor expansion is accurate. A decreasing l2 ratio from lower to higher layers further supports iterative inference, because a model descending down the loss surface should converge towards the end. We made edits in the revision to clarify these points.\n\nIn their third point, the reviewer mentions that it is not clear from 4.4 whether adding shared or unshared layers is better. Section 4.5 is devoted to this question and concludes that shared layers do not lead to the same gains as unshared ones. 
In particular, we observe that naively sharing parameters of the top residual blocks leads both to underfitting (worse training accuracy than its unshared counterpart) and overfitting (given similar training accuracy, the model with shared parameters has significantly lower validation performance).\nThis issue appears to be related to the activation explosion we observe in the shared Resnet model. We believe that developing a better strategy to control the activation norm increase through the layers could help address this issue.\n\nIn their fourth point, the reviewer remarks that the results of 4.5 are limited by optimization and the use of batch normalization. We agree in this sense: the optimization of a shared residual network seems intrinsically difficult. Given our results from the previous sections, we believe that bridging these difficulties in training will result in a strong shared residual network.\n\nOverall, our novelty is in showing one specific, formal way in which residual networks perform iterative inference, and then exploring it in both the unshared and shared cases (both by training a shared network, as well as by unrolling an unshared network to more steps than it was trained on).\n" ]
[ 6, 5, 7, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJa9iHgAZ", "iclr_2018_SJa9iHgAZ", "iclr_2018_SJa9iHgAZ", "iclr_2018_SJa9iHgAZ", "H1EPgaweG", "HJyUi3sez", "HkeOU0qgf", "HkeOU0qgf" ]
iclr_2018_Hk6WhagRW
Emergent Communication through Negotiation
Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protocols - one grounded in the semantics of the game, and one which is a priori ungrounded. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded, cheap talk channel to do the same. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.
accepted-poster-papers
All reviewers agree the paper proposes an interesting setup and the main finding that "prosocial agents are able to learn to ground symbols using RL, but self-interested agents are not" progresses work in this area. R3 asked a number of detail-oriented questions and while they did not update their review based on the author response, I am satisfied by the answers.
train
[ "B1nNZKU4M", "SJGBLcYxG", "Bk7S9ZclM", "BkoPxj3xz", "ByxfIR9Xf", "HyQaw09QG", "B1RQDA9Qz", "BkGlPCqQz", "ryQO17PyM", "HJ9FF6EAZ", "r1Nwr02RW", "HydzSWqCW", "SyodmvDC-", "SkM4N2SCW", "SkW8mhN0-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public" ]
[ "Thank you for responding to the mentioned concerns and addressing those in your latest revision. The topic is interesting and deserves visibility.", "The authors describe a variant of the negotiation game in which agents of different type, selfish or prosocial, and with different preferences. The central feature is the consideration of a secondary communication (linguistic) channel for the purpose of cheap talk, i.e. talk whose semantics are not laid out a priori. \n\nThe essential findings include that prosociality is a prerequisite for effective communication (i.e. formation of meaningful communication on the linguistic channel), and furthermore, that the secondary channel helps improve the negotiation outcomes.\n\nThe paper is well-structured and incrementally introduces the added features and includes staged evaluations for the individual additions, starting with the differentiation of agent characteristics, explored with combination of linguistic and proposal channel. Finally, agent societies are represented by injecting individuals' ID into the input representation.\n\nThe positive:\n- The authors attack the challenging task of given agents a means to develop communication patterns without apriori knowledge.\n- The paper presents the problem in a well-structured manner and sufficient clarity to retrace the essential contribution (minor points for improvement).\n- The quality of the text is very high and error-free.\n- The background and results are well-contextualised with relevant related work. \n\nThe problematic:\n- By the very nature of the employed learning mechanisms, the provided solution provides little insight into what the emerging communication is really about. In my view, the lack of interpretable semantics hardly warrants a reference to 'cheap talk'. As such the expectations set by the well-developed introduction and background sections are moderated over the course of the paper.\n- The goal of providing agents with richer communicative ability without providing prior grounding is challenging, since agents need to learn about communication partners at runtime. But it appears as of the main contribution of the paper can be reduced to the decomposition of the learnable feature space into two communication channels. The implicit relationship of linguistic channel on proposal channel input based on the time information (Page 4, top) provides agents with extended inputs, thus enabling a more nuanced learning based on the relationship of proposal and linguistic channel. As such the well-defined semantics of the proposal channel effectively act as the grounding for the linguistic channel. This, then, could have been equally achieved by providing agents with a richer input structure mediated by a single channel. From this perspective, the solution offers limited surprises. The improvement of accuracy in the context of agent societies based on provided ID follows the same pattern of extending the input features.\n- One of the motivating factors of using cheap talk is the exploitation of lying on the part of the agents. However, apart from this initial statement, this feature is not explicitly picked up. In combination with the previous point, the necessity/value of the additional communication channel is unclear.\n\nConcrete suggestions for improvement:\n\n- Providing exemplified communication traces would help the reader appreciate the complexity of the problem addressed by the paper.\n- Figure 3 is really hard to read/interpret. 
The same applies to Figure 4 (although less critical in this case).\n- Input parameters could have been made explicit in order to facilitate a more comprehensive understanding of technicalities (e.g. in an appendix).\n- Emergent communication is effectively unidirectional, with one agent as listener. Have you observed other outcomes in your evaluation?\n\nIn summary, the paper presents an interesting approach to combine unsupervised learning with multiple communication channels to improve learning of preferences in a well-established negotiation game. The problem is addressed systematically and well-presented, but can leave the reader with the impression that the secondary channel, apart from decomposing the model, does not provide conceptual benefit over introducing a richer feature space that can be exploited by the learning mechanisms. Combined with the lack of specific cheap talk features, the use of actual cheap talk is rather abstract. Those aspects warrant justification.", "This paper explores how agents can learn to communicate to solve a negotiation task. They explore several settings: grounded vs. ungrounded communication, and self-interested vs. prosocial agents. The main findings are that prosocial agents are able to learn to ground symbols using RL, but self-interested agents are not. The work is interesting and clearly described, and I think this is an interesting setting for studying emergent communication.\n\nMy only major comment is that I’m a bit skeptical about the claim that “self-interested agents cannot ground cheap talk to exchange meaningful information”. Given that the agents’ rewards would be improved if they were able to make agreements, and humans can use ‘cheap talk’ to negotiate, surely the inability to do so here shows a failure of the learning algorithm (rather than a general property of self-interested agents)?\n\nI am also concerned about the dangers posed by robots inventing their own language, perhaps the authors should shut this down :-)\n", "The experimental setup is clear, although the length of the utterances and the number of symbols in them is not explicitly stated in the text (only in the diagrams).\n\nExperiment 1 confirms that agents who seek only to maximise their own rewards fail to coordinate over a non-binding communication channel. The exposition of the experiments, however, is unclear.\nIn Fig 1, it is not clear what Agent 1 and Agent 2 are. Do they correspond to arbitrary labels or the turns that the agent takes in the game?\nWhy is Agent 1 the one who triumphs in the no-communication channel game? Is there any advantage to going first generally? Where are the tests of robustness on the curves demonstrated in Figure 2a?\nHas Figure 2b been cherry-picked? This should be demonstrated over many different negotiations with error bars.\nIn the discussion of the agents being unable to ground cheap talk, the symbolic nature of the linguistic channel clouds the fact that it is not the symbolic, ungrounded aspect but the non-binding nature of communication on this channel that matters. 
This would be demonstrated more clearly and parsimoniously by using a non-binding version of the proposal channel and saving the linguistic discussion for later.\n\nExperiment 2 shows that by making the agents prosocial, they are able to learn to communicate on the linguistic channel to achieve pretty much optimal rewards, a very nice result.\nThe agents are not able to reach the same levels of cooperation on the proposal channel, in fact performing worse than the no-communication baseline. Protocols could be designed that would allow the agents to communicate their utilities over this channel (within 4 turns), so the fact that they don't suggests it is the learning procedure that is not able to find this optimum. Presenting this as a result about the superiority of communication over the linguistic channel is not well supported.\nWhy do they do worse with random termination than with 10 turns in the proposal channel? 4 proposals should contain enough information to determine the utilities.\nWhy are the 10 turn games even included in this table? It seems that this was dismissed in the environment setup section, due to the first mover advantage.\nWhy do no-communication baselines change so much between random termination and 10 turns in the prosocial case?\nWhy do self-interested agents for 10 turns on the linguistic channel terminate early?\nTable 1 might be better represented using the median and quartiles, since the data is skewed.\n\nAnalysis of the communication, i.e. what is actually sent, is interesting, and the division into speaker and listener suggests that this is a simple protocol that is easy for agents to learn.\n\nExperiment 3 aims to determine whether an agent is able to negotiate against a community of other agents with mixed levels of prosociality. It is shown that if the fixed agent is able to identify who they are playing against, they can do better than not knowing, in the case where the fixed agent is self-interested.\nThe PCA plot of agent ID embeddings is good.\nBoth Figure 4 and Table 3 use Agent 1 and Agent 2 rather than Agent A and Agent B and it is not clear whether this is a mistake or whether Agent 1 is different from Agent A.\nThe no-communication baseline is referred to in the text but the results are not shown in the table.\nThere are no estimates of the uncertainty of the results in Table 3; how robust are these results to different initial conditions?\nThis section seems like a bit of an add-on to address criticisms that might arise about the initial experiment being only two agents.\n\nOverall, the paper has some nice results and interesting ideas but could do with some tightening up of the results to make it really good.\n", "Thank you to all reviewers for their thoughtful feedback. We particularly thank the reviewers for agreeing with us that the negotiation task is an interesting task for studying emergent communication. We have uploaded a revised version of the manuscript to address the concerns raised by the reviewers. The most significant changes are:\n - We have added interquartile ranges to Table 1, and split out the 10 turn results to the appendix. 
We also carried out additional experiments with 20 different initial seeds for the agent weights and added this to Table 1.\n - We have added uncertainty estimates to Table 3.\n - We added uncertainty estimates to Figures 2a and 2b.\n - We added transcripts of negotiations for various setups to the main body of the paper, as the new Tables 2 and 3.\n", "We would like to emphasize that the linguistic/utterance channel and the proposal channel are completely separate, and there is no a priori link between the messages in the proposal channel and the messages in the linguistic channel. When the agents are forced to use the linguistic channel exclusively, they must learn from scratch how to use the linguistic channel to communicate effectively to solve the negotiation task, and thus the proposal channel is never communicated to the opponent. In short, LINGUISTIC refers to negotiating using ONLY the linguistic channel, PROPOSAL using only the proposal channel, and BOTH using the combination.\n\n>By the very nature of the employed learning mechanisms, the provided solution provides little insight into what the emerging communication is really about. \n\nIt is difficult to quantify precisely what communication is about, especially in our bottom-up approach starting from arbitrary symbols. Despite this, in our post-processing analyses of the communication, we show: (i) agents partition themselves into speaker and listener, (ii) elements of natural language are found in the protocols that emerged, and (iii) the content of the messages indicates that the agents are encoding their utilities in the language channel. \n\n>In my view, the lack of interpretable semantics hardly warrants a reference to 'cheap talk'. As such the expectations set by the well-developed introduction and background sections are moderated over the course of the paper.\n\nWe did not intend to suggest that the lack of interpretable semantics warrants a reference to cheap talk. We refer to cheap talk due to the fact that the exchanges in the linguistic channel have no effect on the resulting payoff, which follows directly from the definition.\nThe lack of interpretable semantics is orthogonal to any references to cheap talk: it simply motivates the research question of whether communication can emerge among learning agents. We will clarify this.\n\n>Providing exemplified communication traces would help the reader appreciate the complexity of the problem addressed by the paper.\n\nIn our most recent revision, we have added an appendix showing what sample games with each of the communication channels open look like.\n\n>Figure 3 is really hard to read/interpret. The same applies to Figure 4 (although less critical in this case).\n\nWe have made the figures larger, and added more explanation in the text.\n\n>Input parameters could have been made explicit in order to facilitate a more comprehensive understanding of technicalities (e.g. in an appendix).\n\nWe have added an appendix showing the values of the hyperparameters we used. We would also like to thank the public comments that acted as additional motivation to help reproducibility.\n\n>Emergent communication is effectively unidirectional, with one agent as listener. Have you observed other outcomes in your evaluation?\n\nIn our experiments, we consistently see the agents separating into speaker-listener roles, as mentioned in the paper. 
\n", ">My only major comment is that I’m a bit skeptical about the claim that “self-interested agents cannot ground cheap talk to exchange meaningful information”. Given that the agents’ rewards would be improved if they were able to make agreements, and humans can use ‘cheap talk’ to negotiate, surely the inability to do so here shows a failure of the learning algorithm (rather than a general property of self-interested agents)?\n\nWe agree; we believe this is the the main reason that the bottom-up approach is particularly challenging. Humans (and to a lesser extent, the demonstration data in top-down approaches) benefit by having a priori semantics on the symbols in the linguistic channel. We used the term ‘self-interested agents’ mainly to separate them from the prosocial ones, but we do indeed mean in the context of the standard RL learning algorithms used in the paper, not more generally to mean ‘any possible self-interested agent’. We will clarify this.\nIn future work, we will explore more sophisticated RL learning techniques that allow self-interested to negotiate using a ‘cheap-talk’ channel (which due to its unbinding and unverifiable nature poses a challenge for the current RL algorithms).\n", ">In Fig 1, it is not clear what Agent 1 and Agent 2 are. Do they correspond to arbitrary labels or the turns that the agent takes in the game?\n\nWe have clarified that Agent 1 is the agent who consistently goes first in negotiation.\n\n>Why is Agent 1 the one who triumphs in the no-communication channel game? Is there any advantage to going first generally?\n\nActually, Agent 2 typically triumphs in the no-communication setup. We do not believe there is any significant advantages to either going first or second, as demonstrated by the fact that self-interested agents seem to be able to negotiate fairly in this environment.\n\n>Where are the tests of robustness on the curves demonstrated in Figure 2a?\n\nWe have re-run the experiments with 20 different random seeds, and have updated Figure 2a to show their averaged results. The uncertainty estimates are unfortunately only visible in the linguistic communication setup, as the other communication protocols seem to give rise to very stable training.\n\n>Has figure 2b been cherry picked? This should be demonstrated over many different negotiations with error bars.\n\nFigure 2b shows the average reward per turn over 1000 different negotiations. We present results from training 20 seeds (1280 test negotiations per seed). We have added interquartile ranges at every timestep, and clarified that they are averaged results in the text.\n\n>Protocols could be designed that would allow the agents to communicate their utilities over this channel (within 4 turns), so the fact they don't suggests it is the learning procedure that is not able to find this optimum.\n\nWe agree. However, discovering the optimal protocol through self-play RL is a significantly harder problem than designing one with knowledge of the optimal structure. For example, the pre-grounded nature of the proposal channel, combined with its lower information bandwidth, means that random exploration is less likely to find the optimal communication protocol.\n\n>Why are the 10 turn games even included in this table? It seems that this was dismissed in the environment setup section, due to the first mover advantage.\n\nWe include the 10 turn results so that we can include the key self-interested and prosocial results in the same table. 
We also wanted to demonstrate how strong the first mover advantage was. However, we have moved these results to the appendix in the new draft.\nWe have also changed both tables to report the mean and interquartile range, as suggested.\n\n>Why do self-interested agents for 10 turns on the linguistic channel terminate early?\nWe believe that when enough information is exchanged, the agents make effective proposals and thus do not need to negotiate further.\n\n>Both Figure 4 and Table 3 use Agent 1 and Agent 2 rather than Agent A and Agent B and it is not clear whether this is a mistake or whether Agent 1 is different from Agent A.\n\nWe have corrected this.\n\n>The no-communication baseline is referred to in the text but the results are not shown in the table.\n\nWe have added the no-communication baseline figure to the text.\n\n>There are no estimates of the uncertainty of the results in Table 3; how robust are these results to different initial conditions?\n\nThese results are averaged across 10 batches of 128 games each. We have clarified this, and added standard deviations to the table.", "Hi Hugh\n>- hold out set of 5 batches of 128 test games:\n> - presumably this is disjoint from the training set?\n> - what is then the algorithm for generating the training set, to ensure disjointness?\n\nWe generated the test games by fixing the seed of the RNG, generating 5 batches from our game environment generator, and then discarding any batches of games after this which overlapped with any of the test games.\n\n>- given that training seems to plateau for extended periods of time, eg see https://github.com/ASAPPinc/emergent_comms_negotiation/blob/master/images/20171104_192343gpu2_proposal_social_comms.png?raw=true up to first 100 thousand episodes, how did you decide when training was finished?\n\nWe basically just extended training until we saw no further improvements.\n\n>- you report the standard deviation across samples in a single batch of 128 games. To what extent did the fraction of reward vary across entire training runs?\n\nOur results were stable across training runs - we did extensive prototyping in a development environment and swept across random seeds, and the results were identical. We then moved on to the environment presented in the paper, and the results carried over. \n\n>Thoughts? What things should I consider checking in order to improve on this?\n\nThe hyperparameter we found with the biggest impact on the final outcome was the strength of the entropy regularization, as this controls the tradeoff between exploration and exploitation: too low and our agents didn't explore enough to find the optimal negotiation policy, too high and our agents just didn't learn anything. We found a useful debugging technique to be to monitor the proportion of actions taken equal to the argmax action, to ensure that the agents are still exploring during periods when the reward has plateaued. We also change the test time policy from the training time policy - during test time, we just deterministically take the action with the highest probability.\n\n>- how were the entropy regularization weight hyperparameters determined?\n> - using gridsearch?\n> - what was the search space of the grid search?\n> - was grid search run for both prosocial and not prosocial? 
or just for one particular set of settings?\n\nWe found the optimal hyperparameter values with grid search, using values in {1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2} for the termination and proposal policies, and {1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3} for the utterance policy. We ran this for all agent reward schemes, and the same hyperparameter values worked regardless of the agent reward scheme.\n\n> I wonder whether it might be useful to have similar curves for the prosocial agents, so we can easily see how the behavior of prosocial and not prosocial agents compares?\n\nWe have the training graphs for prosocial agents, but we did not include them in this revision due to space constraints. They mainly show the joint reward going fairly smoothly upwards as training goes on, and then plateauing at the end. We can add them to the next revision if necessary.\n\n> - disparity between number of symbols in utterance vocab (10), and the possible utilities (11)? (in an earlier reply you say there are 11 utterance tokens in fact?)\n\nThe item utilities range between 0 and 5 inclusive, not 0-10. The symbols the agents can communicate with range between 0-10. We also ran an experiment on how the bandwidth of the utterance channel affected prosocial agent negotiation success, and essentially it seems that as long as there is plenty of spare capacity in the utterance channel then the agents can learn to negotiate fine.\n\nHope this helps\nThe authors
However, in my own training run, the reward plateaus at ~0.81, after ~50,000 episodes of 128 games, and stays at ~0.81 for the remaining ~800,000 episodes, https://github.com/ASAPPinc/emergent_comms_negotiation/raw/master/images/20171104_144936_proposal_social_nocomms_b.png?raw=true Thoughts? What things should I consider checking in order to improve on this?\n\nSome other questions, not strictly necessary for reproducibility:\n- how were the entropy regularization weight hyperparameters determined?\n - using gridsearch?\n - what was the search space of the grid search?\n - was grid search run for both prosocial and not prosocial? or just for one particular set of settings?\n- It looks to me like the main thesis of the paper is the behavior of selfish vs pro-social agents, is this an approximately fair impression? For example, the paper states 'self-interested agents cannot ground cheap talk. When using the linguistic channel, agents do not negotiate optimally. Instead, the agents randomly alternate taking all the items, which is borne out by the oscillations in Figure 2a.' I wonder whether it might be useful to have similar curves for the prosocial agents, so we can easily see how the behavior of prosocial and not prosocial agents compares?\n- why have length 6 for the comms channel? Presumably it mostly just needs to be able to represent the hidden utility, which is sufficient for the other agent to have complete world state information? (Edit: oh, I guess the additional 3 length is necessary in the 'no proposal' case, so that the agent can communicate its proposal?)\n- disparity between number of symbols in utterance vocab (10), and the possible utilities (11)? (in an earlier reply you say there are 11 utterance tokens in fact?)", "Just for info, replication so far: https://github.com/ASAPPinc/emergent_comms_negotiation/blob/master/ecn.py\n\nExperiment log: https://github.com/ASAPPinc/emergent_comms_negotiation/blob/master/explog.md\n\n(stuck in a local minimum for now. digging a bit...)", "Opinion: might be worth including the reference for Entropy Regularization? I think it is 'Asynchronous methods for deep reinforcement learning', perhaps? (this itself cites Williams and Peng, but I think the Mnih et al reference is likely more useable?).", "Thanks! :)", "Nice paper :) So I'm trying to reproduce it :)\n\nSome very detailed questions, for trying to reproduce it:\n- in figure 2, what does 'x 10' mean? does it mean eg '10000' means '100000' steps?\n - opinion: maybe putting eg 'x 100,000' or 'thousands', could be clearer, easier to read, remove some zeros from the graph?\n - is a 'step' an episode (each episode comprising multiple turns/timesteps?), or is a step a timestep?\n- training curves for prosocial agents? how long does it take to learn to use the communications channel?\n- you refer to the 'hidden state of the lstm'. Does this include the cell? just the hidden state? If the latter, does this mean that the cell is initialized to zero at the start of each generated utterance? If the former, how are you handling the associated doubling of embedding size?\n- you say the vocabulary has 10 tokens. Does this include a termination token? or there are 11 possible tokens, 10 of which are part of the vocab, and one of which is termination?\n - similarly, does the utterance length 6 include the termination token, or is the length 6 + termination token?\n" ]
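To make the entropy-regularized objective and the exploration diagnostic discussed in the replies above concrete, here is a minimal PyTorch sketch. It is an illustration only, not the authors' implementation: the function names are our own, and the default entropy weight is simply one of the grid-searched values quoted in the reply.

```python
import torch
import torch.nn.functional as F

def reinforce_loss(logits, actions, returns, entropy_weight=5e-3):
    """REINFORCE loss with an entropy bonus; the bonus weight is the
    hyperparameter the authors grid-searched (e.g. 5e-3 for the
    termination and proposal policies)."""
    log_probs = F.log_softmax(logits, dim=-1)          # (batch, num_actions)
    probs = log_probs.exp()
    # Log-probability of each sampled action (actions is a LongTensor of indices).
    picked = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    entropy = -(probs * log_probs).sum(dim=-1)         # high entropy = more exploration
    return -(picked * returns).mean() - entropy_weight * entropy.mean()

def argmax_fraction(logits, actions):
    """The debugging signal described above: fraction of sampled actions equal
    to the argmax action; values near 1.0 mean exploration has stopped."""
    return (actions == logits.argmax(dim=-1)).float().mean().item()
```

At test time, per the reply above, the stochastic policy would simply be replaced by `logits.argmax(dim=-1)`.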
[ -1, 6, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HyQaw09QG", "iclr_2018_Hk6WhagRW", "iclr_2018_Hk6WhagRW", "iclr_2018_Hk6WhagRW", "iclr_2018_Hk6WhagRW", "SJGBLcYxG", "Bk7S9ZclM", "BkoPxj3xz", "r1Nwr02RW", "SkW8mhN0-", "iclr_2018_Hk6WhagRW", "HJ9FF6EAZ", "SkM4N2SCW", "HJ9FF6EAZ", "iclr_2018_Hk6WhagRW" ]
iclr_2018_SygwwGbRW
Semi-parametric topological memory for navigation
We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three.
accepted-poster-papers
Important problem (navigation in unseen 3D environments, Doom in this case), interesting hybrid approach (mixing neural networks and path-planning). Initially, there were concerns about evaluation (proper baselines, ambiguous environments, etc). The authors have responded with updated experiments that are convincing to the reviewers. R1 did not participate in the discussion and their review has been ignored. I am supportive of this paper.
train
[ "BJIvZ_H4M", "BkOtNnKxM", "S1H89etgf", "Hy3ONt2eM", "S1qTBd6XM", "ByrCE_aXf", "HJsFdVbQG", "SJMgFkxQG", "rJPdc9JmM", "rkVudelff", "rkU3lxgff", "ByFzpZxzf", "SJzHaWlfG", "ryI0-glfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Taking into account the revision, this is an interesting idea whose limitations have been properly investigated.", "*** Revision: based on the author's work, we have switched the score to accept (7) ***\n\nClever ideas but not end-to-end navigation.\n\nThis paper presents a hybrid architecture that mixes parametric (neural) and non-parametric (Dijkstra's path planning on a graph of image embeddings) elements and applies it to navigation in unseen 3D environments (Doom). The path planning in unseen environments is done in the following way: first a human operator traverses the entire environment by controlling the agent and collecting a long episode of 10k frames that are put into a chain graph. Then loop closures are automatically detected using image similarity in feature space, using a localization feed-forward ResNet (trained using a DrLIM-like triplet loss on time-similar images), resulting in a topological graph where edges correspond to similar viewpoints or similar time points. For a given target position image and agent start position image, a nearest neighbor search-powered Dijkstra path planning is done on the graph to create a list of waypoint images. The pairs of (current image, next waypoint) images are then fed to a feed-forward locomotion (policy) network, trained in a supervised manner.\n\nThe paper does not discuss at all the problems arising when the images are ambiguous: since the localisation network is feed-forward, surely there must be images that are ambiguously mapped to different graph areas and are closing loops erroneously? The problem is mitigated by the fact that a human operator controls the agent, making sure that the agent's viewpoint is clear, but the method will probably fail if the agent is learning to explore the maze autonomously, bumping into walls and facing walls. The screenshots on Figure 3 suggest that the walls have a large variety of textures and decorations, making each viewpoint potentially unique, unlike the environments in (Mirowski et al, 2017), (Jaderberg et al, 2017) and (Mnih et al, 2016).\n\nMost importantly, the navigation is not based on RL at all, and ignores the problem of exploration altogether. A human operator labels 10k frames by playing the game and controlling the agent, to show it how the maze looks like and what are the paths to be taken. As a consequence, comparison to end-to-end RL navigation methods is unclear, and should be stressed upon in the manuscript. This is NOT a proper navigation agent.\n\nAdditional baselines should be evaluated: 1) a fully Dijkstra-based baseline where the direction of motion of the agent along the edge is retrieved and used to guide the agent (i.e., the policy becomes a lookup table on image pairs) and 2) the same but the localization network replaced by image similarities in pixel space or some image descriptor space (e.g., SURF, ORB, etc…). It seems to me that those baselines would be very strong.\n\nAnother baseline is missing: (Oh et al, 2016, “Control of Memory, Active Perception, and Action in Minecraft”).\n\nThe paper is not without merit: the idea of storing experiences in a graph and in using landmark similarity rather than metric embeddings is interesting. 
Unfortunately, that episodic memory is not learned (e.g., Neural Turing Machines or Memory Networks).\n\nIn summary, just like the early paper released in 2010 about Kinect-based RGBD SLAM: lots of excitement but potential disappointment when the method is applied on an actual mobile robot, navigating in normal environments with visual ambiguity and white walls. The paper should ultimately be accepted to this conference to provide a baseline for the community (once the claims are revised), but I stress that the claims of learning to navigate in unseen environments are unsubstantiated, as the method is neither end-to-end learned (as it relies on human input and heuristic path planning) nor capable of exploring unseen environments with visual ambiguity.", "The paper introduces a graph based memory for navigation agents. The memory graph is constructed using nearest neighbor heuristics based on temporal adjacency and visual similarity. The agent uses Dijkstra's algorithm to plan a path through the graph in order to solve the navigation task. \n\nThere are several major problems with this paper. My overall impression is that the proposed agent is a nearly hard-coded solution (which I think might be the correct approach to such problems), but a poorly implemented one. Specific points: 1-There are only 5 test mazes, and the proposed agent doesn't even solve all of them. 2-The way in which the maze is traversed in the exploration phase determines the accuracy of the graph that is constructed (i.e. traversing each location exactly once using a space-filling curve). 3-Of the two heuristics used in Equation 1 how many edges are actually constructed using the visual similarity heuristic? 4-How does the visual similarity heuristic handle visually similar map elements that correspond to distinct locations? 5- The success criteria of solving a maze is arbitrarily defined -- why exactly 2.4 min? ", "A method for learning to visually navigate using a topological map is presented. The method combines an algorithmic approach to generate topological graphs, indexed by observations, with Dijkstra's algorithm to determine a global path from which a waypoint observation is selected. The waypoint along with the current observation is fed to a learned local planner that transitions the agent to the target waypoint. A learned observation similarity mapping localizes the agent and indexes targets in the graph. (A schematic code sketch of this graph-and-Dijkstra pipeline is included after this review block.)\n\nThe novelty of the approach in context of prior visual navigation and landmark-based robotics research is the hybrid algorithmic and learned approach that builds a graph purely from observations without ego motion or direct state estimation. Most of the individual components have previously appeared in the literature including graph search (also roadmap methods), learned similarity metrics, and learned observation-based local planners. However, the combination of these approaches, along with some of the presented nuanced enhancements such as the connectivity shortcuts (a simple form of loop closure), is a compelling contribution to visual topological navigation. The results, while not thorough, do support the ability of the method to effectively plan in novel and unseen environments.\n\nThe approach has potential limitations including the linear growth in the size of the memory, and it is unclear how the method handles degenerate observations (e.g., similar looking hallways on opposite sides of the environment). 
The authors should consider including examples or analysis illustrating the method’s performance in such scenarios.\n\nThe impact of the proposed approach would be better supported if compared against stronger baselines including recent research that addresses issues with learning long sequences in RNNs. Furthermore, additional experiments over a greater number and more diverse set of environments, along with additional results showing the performance of the method based on varying environment parameters including number of exploration steps and heuristic values, would help the reader understand the sensitivity and stability of the method.\n\nThe work in “Control of Memory, Active Perception, and Action in Minecraft” by Oh et al. has a learned memory that is used to recall previous observations for planning. This method’s memory architecture can be considered to be nonparametric, and, while different from the proposed method, this similarity merits additional discussion and consideration for empirical comparison. \n\nSome of the details for the baseline methods are unclear. The reviewer assumes that when the authors state they use the same architecture as Mnih et al. and Mirowski et al. that this also includes all hyperparameters including the size of each layer and training method. However, Mirowski et al. use RGB while the stated baseline is grayscale.\n\nThe reviewer wonders whether the baselines may be disadvantaged compared to the proposed method. The input for the baselines is restricted to an 84x84 input image in addition to being grayscale vs the proposed method's 160x120 RGB image. It appears the proposed method is endowed with a much greater capacity with a ResNet-18 in the retrieval network compared to the visual feature layers (two layers of CNNs) of the baseline networks. Finally, the proposed method is provided with demonstrations (the human exploration) that effectively explore the environment. The authors should consider bootstrapping the baseline methods with similar experience. 
We now describe these experiments in more detail.\n\nIn the first series of experiments, we re-created in Vizdoom the environments from DMLab used by Mirowski et al. with exactly the same layout and similar sparsity of landmarks. We additionally used 2 mazes from our own set with DMLab-like texture distribution, bringing the number of mazes to a total of 4. The walkthroughs for those environments are shown in the following videos: https://youtu.be/MS8ReBruns4 , https://youtu.be/Jlhkc-9hZdo , https://youtu.be/Ko5-SB6CpyQ , https://youtu.be/Rlgl01S7PQQ . Those environments are highly ambiguous: walls are mostly covered with a single texture, with occasional distinct textures.\nResults: we report success rate @5000 steps, higher is better. In the mazes DMLab small (Test-1 in our notation), DMLab large (Val-3 in our notation), Test-5, Test-4 the method achieves the following results (respectively):\n89%, 44%, 68%, 48%.\nThis can be compared to the results of our method in the corresponding visually rich environments, as reported in the paper:\n100%, 97%, 76%, 89%.\nThe results of best RL-based baselines are (respectively):\n53%, 21%, 32%, 25%.\nClearly, in ambiguous environments the performance of the proposed method drops, but the method does not break down completely and still outperforms the baselines by a large margin. Videos of navigating agent are available at https://youtu.be/NpfAF6LILXc , https://youtu.be/cZ5RMGs4LE8 , https://youtu.be/8ehxeeHiK-E . We point out that it is incorrect to assume that ambiguous environments are more realistic than visually rich environments from our previous experiments: both types of environments exist in real world. Yet, we admit that these visually ambiguous environments remain challenging for our method, and in future work we plan to further improve the model so as to better deal with these.\n\nIn the second experiment, we aimed to verify that our method still works even when given inefficient non-human exploration. For that purpose, we used traversals from the RL-based baseline with LSTM memory (see the paper). Of course, this only works for simple environments. Even for these, most traversals don't go through all the goals. To avoid this problem, we sampled the traversals until they covered all the goals. It took roughly 20 attempts for the mazes we tried. Here we report the results on 2 mazes: Test-1, Test-4. Those were chosen as the simplest so that the exploration heuristic works. The results are:\n63%, 52%\nFor human traversal before:\n100%, 89%\nThe best baselines were:\n53%, 25%\nAs can be seen, our method does not break here either, still beats the baselines, although by a smaller margin. Obviously, the agent's tracks get longer as the inefficient exploration wastes time instead of moving between different locations quickly (for example, see exploration tracks here: https://youtu.be/NHjqQS-d_ko , https://youtu.be/cEaaPVUu17I ).\n\nWe hope these results persuade the reviewers that the proposed method is generally applicable and promising. We will continue working on the remaining experiments proposed by the reviewers.\n", "Thanks for your response! For us to address the reviewer’s concerns properly, it would be helpful if the reviewer could clarify a bit.\n\nWe are still unsure: what exactly is the unfairness of using human demonstration the reviewer is mentioning? At training time none of the methods has access to human demonstrations. 
At test time all methods do: for the baselines, we feed the exploration sequence before starting the navigation trial, and the LSTM can potentially remember the maze layout. What exactly is unfair about this setup? Which experiment would get rid of this unfairness? \n\nNote that reliable autonomous exploration of a previously unseen environment is, to our knowledge, an unsolved problem. A random agent, an untrained RL agent, or even an RL agent trained in other environments, would not suffice, since they will typically fail to explore the whole maze. Our contribution is in memory, not exploration. We compare different types of memory by providing them with the same (human) walkthrough exploration sequence. We believe that proper exploration requires memory, and now SPTM can be used to start attacking this problem.\n\nWe are working on experiments in DMLab-like environments.", "Following the rebuttal, we are waiting on additional experiments with DMLab as well as non-human exploration of the maze (while the sequence of observations from human demonstration is not explicitly \"labelled\" information, it is nevertheless privileged information that an untrained RL agent or a random agent does not possess). I apologize if I have missed these additional results in the revision from 26 December.", "We have submitted a revision of the paper. Key changes:\n- Added experiments in 5 more environments. There are now 3 validation and 7 test mazes. The proposed agent outperforms the baselines by a large margin in all of them.\n- Added an improved localization technique with temporal smoothing, partially addressing the perceptual aliasing problem.\n- Added an analysis of hyperparameter importance. The method is generally robust to hyperparameter values.\n- Improved the evaluation (16 trials per maze -> 96 trials per maze).\n- Demonstrated that the SPTM agent performs well when the memory is temporally downsampled by a factor up to 4.\n", "> My overall impression is that the proposed agent is a nearly hard-coded solution (which I think might be the correct approach to such problems), \n\nThe method uses more hand design than most end-to-end RL methods, but also includes crucial learned components. We believe that exploring the tradeoff between design and learning is important for solving complex problems such as mapping and navigation.\n \n> but a poorly implemented one. \n\nIt would be helpful if the reviewer could be more specific here.\n\n> There are only 5 test mazes, \n\nThis number of environments is similar to previous works on navigation, such as (Mirowski et al. 2017). Nevertheless, we will evaluate the methods in more environments.\n\n> the proposed agent doesn't even solve all of them. \n\nTypically a method does not have to “solve” all test cases it is applied to in order to be useful, but rather has to outperform relevant baselines. This is what we demonstrate. We are personally skeptical when a method works perfectly on all examples shown in a paper, as this typically indicates that either the problem was easy (not the case in our setting) or the examples are cherry-picked (likewise).\n\n> The way in which the maze is traversed in the exploration phase determines the accuracy of the graph that is constructed (i.e. traversing each location exactly once using a space-filling curve).\n\nIndeed the properties of the traversal affect the algorithm performance. 
This is, to our knowledge, the case for all mapping methods. We are not using space-filling curves, but rather simple human traversals.\n\n> Of the two heuristics used in Equation 1 how many edges are actually constructed using the visual similarity heuristic?\n\nApproximately 4000 shortcuts were made based on visual similarity. After introducing these shortcut connections, the average length of the shortest path to the goal, computed over all nodes in the graph, drops from 2500 to 772 steps. This indicates that the created shortcuts contribute significantly to the success of navigation. We will evaluate the method without these shortcuts and include the results in the paper.\n\n> How does the visual similarity heuristic handle visually similar map elements that correspond to distinct locations?\n\nThis issue, known as perceptual aliasing, is a fundamental problem for all mapping and navigation algorithms. The basic version of our approach, presented in the paper, is indeed unable to disambiguate two identical observations. We implemented a simple modification of the approach: when localizing itself in the graph, the agent first searches in a neighborhood of the previous location, and only resorts to localizing itself in the whole graph if no confident match is found in this first step. This introduces temporal smoothness to localization of the agent, partially addresses the perceptual aliasing problem, and improves the performance of the method, especially in complex mazes. We will add a comparison of the naive and the advanced versions of the method to the paper. More principled treatment of perceptual aliasing is an important direction for future work.\n\n > The success criteria of solving a maze is arbitrarily defined -- why exactly 2.4 min? \n\nThere has to be some upper limit on the duration of a navigation trial, and we chose 5000 simulation steps in this work. Plots in Figure 5 show the success rate as a function of episode duration for durations less than this maximum threshold.\n", "We thank the reviewers for their work and their valuable comments. We are happy that the reviewers find the idea of topological memory exciting and compelling (AR2, AR3), note that this might be the right approach to the problem (AR1), and argue that the paper should be ultimately accepted for publication after revising the text (AR2, AR3). \n\nWe respond in detail to each of the reviewers individually in comments to their reviews. We will work on performing the proposed experiments and updating the paper accordingly.\n\nWe realized that the paper may not give a good impression of the appearance of the mazes and the difficulty of the task. This link https://youtu.be/4QxO8mdOf3M shows the human walkthrough sequence for the Test-Difficult maze. Note that the textures are diverse, but repetitive. Other mazes have similar texture distribution.\n", "We thank the reviewer for the thoughtful review and useful comments. We will update the paper accordingly.\n\n> feed-forward locomotion (policy) network, trained in a supervised manner.\n\nWe would like to highlight that the locomotion policy is trained in self-supervised fashion, without any human labels or demonstration.\n\n> the problems arising when the images are ambiguous\n\nThis issue, known as perceptual aliasing, is a fundamental problem for all mapping and navigation algorithms. The basic version of our approach, presented in the paper, is indeed unable to disambiguate two identical observations. 
We implemented a simple modification of the approach: when localizing itself in the graph, the agent first searches in a neighborhood of the previous location, and only resorts to localizing itself in the whole graph if no confident match is found in this first step. This introduces temporal smoothness to localization of the agent, partially addresses the perceptual aliasing problem, and improves the performance of the method, especially in complex mazes. (A minimal code sketch of this smoothed localization is given after this comment.) We will add a comparison of the naive and the advanced versions of the method to the paper. More principled treatment of perceptual aliasing is an important direction for future work.\n\n> The problem is mitigated by the fact that a human operator controls the agent, making sure that the agent's viewpoint is clear, but the method will probably fail if the agent is learning to explore the maze autonomously, bumping into walls and facing walls\n\nIt is true that the method can be sensitive to the exploration sequence, and there is room for improvement. However, we would like to point out that 1) in this paper we focus on the memory architecture, 2) any mapping method would be sensitive to the properties of the walkthrough sequence, 3) availability of a short human exploration sequence is a reasonable assumption in many practical settings, for instance for a household robot, and 4) the baselines get access to exactly the same walkthrough sequence.\n\nTo analyze the robustness of the method to the properties of the walkthrough sequence, we will perform experiments with non-human exploration and include the results in the paper.\n\n> The screenshots on Figure 3 suggest that the walls have a large variety of textures and decorations, making each viewpoint potentially unique, unlike the environments in (Mirowski et al, 2017), (Jaderberg et al, 2017) and (Mnih et al, 2016).\n\nIt is true that our environments are different from the aforementioned DM Lab ones in terms of the texture distribution. This video https://youtu.be/4QxO8mdOf3M shows a walkthrough sequence of the Test-Difficult maze (other mazes are similar in texture distribution). Indeed the textures are diverse, but they are also repetitive, so the same seemingly discriminative texture appears in multiple locations in the maze. Moreover, the floor and the ceiling textures are uniform. This makes localization challenging. In the DM Lab environments, on the other hand, the floor textures vary, and walls are populated with unique “markers”. \n\nThe textures we used were procedurally and randomly placed on the walls. We agree that additional evaluation on DM Lab environments or qualitatively similar ones in ViZDoom would be interesting and will work on adding these results to the paper.\n\n\n
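As a concrete illustration of the smoothed localization described in the comment above, here is a minimal sketch of our own, not the authors' code: the neighborhood radius, the confidence threshold, and the dot-product similarity are illustrative assumptions, and the memory graph is assumed to be a NetworkX graph over node indices.

```python
import numpy as np
import networkx as nx

def localize(obs_embedding, node_embeddings, graph, prev_node,
             radius=5, conf_threshold=0.9):
    """Temporally smoothed self-localization: search near the previous
    location first; fall back to a global search over the whole graph
    only if no confident match is found there."""
    scores = node_embeddings @ obs_embedding  # cosine similarity if embeddings are unit-norm
    # Nodes within `radius` hops of the previous location.
    local = nx.single_source_shortest_path_length(graph, prev_node, cutoff=radius)
    best_local = max(local, key=lambda i: scores[i])
    if scores[best_local] >= conf_threshold:
        return best_local                      # temporally consistent match
    return int(np.argmax(scores))              # global fallback
```

The bias towards the previous location is what suppresses spurious jumps between identical-looking places, which is exactly the perceptual-aliasing failure mode discussed above.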
Interestingly, a concurrent submission to ICLR proposes to use a method very similar to our locomotion network for visual imitation learning: https://openreview.net/forum?id=BkisuzWRW .\n\n> A human operator labels 10k frames by playing the game and controlling the agent, to show it how the maze looks like and what are the paths to be taken. As a consequence, comparison to end-to-end RL navigation methods is unclear, and should be stressed upon in the manuscript. \n\nThere seems to be some misunderstanding here. The operator does not “label” the images: the actions taken by the human are not available to the agent, only the observed image sequence. This image sequence is provided both to our agent and the baseline agents at test time. At training time, none of the agents have access to human demonstrations. Therefore, the comparison is fair. \n\n> a fully Dijkstra-based baseline where the direction of motion of the agent along the edge is retrieved and used to guide the agent\n\nIf we understand the suggestion correctly, it requires recording the expert’s actions during the walkthrough and then repeating them at test time. As noted above, in our setup the actions from the walkthrough sequence *are not available* to the agents. Therefore this baseline has access to more information than the methods evaluated in the paper. Nevertheless, we will work on evaluating this baseline and including it in the paper. \n\n> the localization network replaced by image similarities in pixel space or some image descriptor space (e.g., SURF, ORB, etc…). \n\nThanks for this suggestion! These are indeed very meaningful baselines. We will evaluate and add the results to the paper.\n\n> Oh et al, 2016, “Control of Memory, Active Perception, and Action in Minecraft”).\n\nThanks for pointing this out, the work of Oh et al. is definitely very relevant. We will add a discussion to the paper. However, note that experiments by Oh et al. are performed in gridworld-like environments, in contrast with large mazes with continuous state space in our work. We expect that scaling the method of Oh et al. to these environments will be challenging. We will compare to the work of Oh et al. and will include the results in the paper.\n\n> Unfortunately, that episodic memory is not learned (e.g., Neural Turing Machines or Memory Networks).\n\nIndeed the memory is not learned end-to-end via RL, but both the embedding and the locomotion network are learned in self-supervised fashion. We strongly believe that both approaches have value, since RL alone provides only a relatively weak training signal. Combining the two is an exciting avenue for future work.\n\n> potential disappointment when the method is applied on an actual mobile robot, navigating in normal environments with visual ambiguity and white walls. \n\nWe agree that deployment on a real robot would be challenging, which almost certainly would also be the case for any other navigation algorithm developed in a simulated environment. 
However, we are happy to see that the independent concurrent approach mentioned earlier (https://openreview.net/forum?id=BkisuzWRW), which is similar to our locomotion network, can be deployed on a real robot.\n\n> but I stress that the claims of learning to navigate in unseen environments are unsubstantiated, as the method is neither end-to-end learned (as it relies on human input and heuristic path planning) nor capable of exploring unseen environments with visual ambiguity.\n\nWe never aimed to state that the proposed method learns navigation and exploration end-to-end. In fact, the availability of a walkthrough video, as well as our focus on the memory module, are mentioned both in the abstract and in the introduction. We will carefully review the paper to make sure no inappropriate claims are made. If there are any specific parts the reviewer would like us to revise, it would be very helpful to point those out.", "We thank the reviewer for the thoughtful review and the useful comments. We will update the paper accordingly.\n\n> the proposed method is provided with demonstrations (the human exploration) that effectively explore the environment. The authors should consider bootstrapping the baseline methods with similar experience.\n\nWe believe there might be a misunderstanding of our experimental setup here. When testing the baselines, we feed them the walkthrough sequence (no expert actions, only the observations!) before each navigation trial. The LSTM could remember the environment layout based on this sequence. Both the proposed method and the baselines have access to the walkthrough sequence at test time, and none of the methods have access to them at training time. Therefore, the comparison is fair.\n\n> the linear growth in the size of the memory\n\nThis is indeed an undesirable property, and future work will have to address this. As a first step, we will include in the paper experiments with memory sub-sampled by a constant factor. \n\n> handling degenerate observations (e.g., similar looking hallways on opposite sides of the environment)\n\nThis issue, known as perceptual aliasing, is a fundamental problem for all mapping and navigation algorithms. The basic version of our approach, presented in the paper, is indeed unable to disambiguate two identical observations. We implemented a simple modification of the approach: when localizing itself in the graph, the agent first searches in a neighborhood of the previous location, and only resorts to localizing itself in the whole graph if no confident match is found in this first step. This introduces temporal smoothness to localization of the agent, partially addresses the perceptual aliasing problem, and improves the performance of the method, especially in complex mazes. We will add a comparison of the naive and the advanced versions of the method to the paper. More principled treatment of perceptual aliasing is an important direction for future work.\n\n> Discuss and evaluate “Control of Memory, Active Perception, and Action in Minecraft” by Oh et al. \n\nThanks for pointing this out, the work of Oh et al. is definitely very relevant. We will add a discussion to the paper. However, note that experiments by Oh et al. are performed in gridworld-like environments, in contrast with large mazes with continuous state space in our work. We expect that scaling the method of Oh et al. to these environments will be challenging. We will compare to the work of Oh et al. 
and will include the results in the paper.\n\n> compare against stronger baselines including recent research that addresses issues with learning long sequences in RNNs\n\nWe would gladly compare to more elaborate memory architectures, if the reviewer could point to existing implementations that use these memory architectures for navigation in environments with continuous state spaces. Implementing and tuning such a navigation system from scratch would be a complex undertaking of its own.\n\n> Furthermore, additional experiments over a greater number and more diverse set of environments\n\nWe will evaluate the methods in additional environments and will include the results in the paper.\n\n> results showing the performance of the method based on varying environment parameters and heuristic values\n\nWe agree that such an analysis would be very useful. We will run corresponding experiments and include the results in the paper.\n\n> Hyperparameters of baselines\n\nFor the baseline evaluation, we kept the hyperparameters as similar as possible to Mirowski et al.\n\n> Grayscale for the baselines, RGB for the proposed method\n\nIndeed, we fed grayscale images to the baselines instead of RGB. According to our previous experience, RL navigation methods typically do not benefit from RGB images. Still, we agree that for fair evaluation RGB images should be used, and we will re-evaluate the baselines with RGB images.\n\n> 84x84 input image and small network for baselines, 160x120 image and ResNet for the proposed method\n\nThe proposed method does make use of a higher resolution image and a larger network. However, we are not aware of any existing works demonstrating training of ResNet-size networks with large input images from scratch using RL. Moreover, such experiments could get extremely slow. 84x84 images and small networks are the absolute standard in the deep RL literature, and therefore we assumed it is fair to use these standard settings for the baselines. We will add baseline experiments with 160x120 input images.\n" ]
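To make the graph-and-Dijkstra pipeline described in the reviews above concrete (topological chain from the walkthrough, visual-similarity shortcuts, shortest-path waypoint selection), here is a schematic sketch. It is our illustration, not the authors' implementation: the similarity threshold, the minimum temporal gap, and the quadratic similarity scan are assumptions chosen for clarity.

```python
import numpy as np
import networkx as nx

def build_topological_graph(embeddings, sim_threshold=0.95, min_gap=10):
    """Chain edges from temporal adjacency plus shortcut edges between
    visually similar observations (the 'loop closure' shortcuts
    discussed in the replies above)."""
    n = len(embeddings)
    g = nx.Graph()
    g.add_edges_from((t, t + 1) for t in range(n - 1))   # temporal chain
    sims = embeddings @ embeddings.T                     # pairwise similarities
    for i in range(n):
        for j in range(i + min_gap, n):                  # ignore near-in-time pairs
            if sims[i, j] >= sim_threshold:
                g.add_edge(i, j)                         # visual shortcut
    return g

def next_waypoint(g, current_node, goal_node, lookahead=5):
    """Plan a shortest path on the graph; an intermediate node serves as the
    waypoint observation handed to the locomotion policy."""
    path = nx.shortest_path(g, current_node, goal_node)  # Dijkstra on unit-weight edges
    return path[min(lookahead, len(path) - 1)]
```

The quadratic similarity scan is written for readability; for walkthroughs on the order of the 10k frames mentioned above it would be batched or approximated in practice.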
[ -1, 7, 3, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "S1qTBd6XM", "iclr_2018_SygwwGbRW", "iclr_2018_SygwwGbRW", "iclr_2018_SygwwGbRW", "SJMgFkxQG", "iclr_2018_SygwwGbRW", "SJMgFkxQG", "ByFzpZxzf", "iclr_2018_SygwwGbRW", "S1H89etgf", "iclr_2018_SygwwGbRW", "BkOtNnKxM", "BkOtNnKxM", "Hy3ONt2eM" ]
iclr_2018_B12Js_yRb
Learning to Count Objects in Natural Images for Visual Question Answering
Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component gives a substantial improvement in counting over a strong baseline by 6.6%.
accepted-poster-papers
Initially this paper received mixed reviews. After reading the author response, R1 and and R3 recommend acceptance. R2, who recommended rejecting the paper, did not participate in discussions, did not respond to author explanations, did not respond to AC emails, and did not submit a final recommendation. This AC does not agree with the concerns raised by R2 (e.g. I don't find this model to be unprincipled). The concerns raised by R1 and R3 were important (especially e.g. comparisons to NMS) and the authors have done a good job adding the required experiments and providing explanations. Please update the manuscript incorporating all feedback received here, including comparisons reported to the concurrent ICLR submission on counting.
train
[ "r13as6sXf", "HkRBLTo7G", "S1dXkASrz", "H1GhmwqgG", "Hkwum9YgM", "SJ11jzclf", "B1bfGw-ff", "HJt5Gsrbf", "ByYnNiB-f", "S15E4oHbM", "Sy-67iHZz", "B1-TZiS-M", "Sk0dZsS-M", "B17QZjB-M", "rkkDeiB-M" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "Due to the length of our detailed point-by-point rebuttals, we would like to give a quick summary of our responses to the main concerns that the reviewers had.\n\n# Reviewer 3 (convinced by our rebuttal and increased the rating)\n\n- Too handcrafted\nThe current state-of-art in VQA on real images is nowhere near good enough for learning to count to be feasible using general models without hand-crafting some aspects of them for counting specifically. Hand-crafting gives the component many of its useful properties and guarantees, such as allowing us to understand why it made specific predictions. We are the first to use a graph-based approach with any significant benefit on VQA for real images, which in the future could certainly be generalized, but we had no success with generalizations so far.\n\n- NMS baseline missing\nWe have updated the paper with NMS results, strengthening our main results.\n\n- Comparison with (Chattopadhyay et al., 2017) missing\nTheir experimental setup majorly disadvantages VQA models, so their results are not comparable. We have updated the paper to make this clearer.\n\n\n# Reviewer2 (No response to rebuttal yet)\n\n- Application range is very limited\nWhile the reviewer claims that our component is entirely limited to VQA, this is not true since even the toy dataset that we use has not much to do with VQA -- it is a general counting task. Counting tasks have much practical use and we updated the paper to explicitly state how the component is applicable outside of VQA.\n\n- Built on a lot of heuristics without theoretical consideration\nOur component does not use an established mathematical framework (such as Determinantal Point Processes as the reviewer suggests in a comment) but we are justifying every step with what properties are needed to count correctly. That is, we are mathematically correctly modelling a sensible counting mechanism and disagree with the claim that these are just a bunch of heuristics. In both theory and practice, the counting mechanism gives perfect answers when ideal case assumptions apply and sensible answers when they do not apply. The lack of traditional theory also seems to be a complaint about large parts of Deep Learning and recent Computer Vision research in general.\n\nEspecially given the strong positives that this reviewer lists, we do not think that a final rating of 4 is fair towards our work.\n\n\n# Reviewer1 (No response to rebuttal yet)\n\n- Fails to show improvement over a couple of important baselines\nWe think that the reviewer must have misunderstood something; we do not know what this could possibly be referring to. If the reviewer is referring to baselines in the paper, all results show a clear improvement of our component over all existing methods. If the reviewer is referring to baselines not in the paper, then we do not see how this can be the case: we only left out baselines that are strictly weaker in all aspects than the ones we show in the paper. 
You can verify that our result on the number category (51.39%) outperforms everything, including ensemble models of state-of-the-art techniques with orthogonal improvements, on the official leaderboard: https://evalai.cloudcv.org/web/challenges/challenge-page/1/leaderboard (our results are hidden for anonymity)\n\n- Qualitative examples of A, D, and C are needed\nWe have updated the paper to include some qualitative examples.", "We would like to point out a related paper that was submitted to ICLR 2018: Interpretable Counting for Visual Question Answering, https://openreview.net/forum?id=S1J2ZyZ0Z\nThey also tackle the problem of counting in VQA with a sequential counting method with several differences in the approach:\n\n- They use a more-or-less generic network for sequential counting and design a specific loss, while we design a specific network component and use a generic loss.\n- We use the standard full VQA dataset, whereas they create their own dataset by taking only counting questions from the VQA and Visual Genome datasets. This makes our results comparable to prior work on VQA, showing a clear benefit over existing results in counting. In total we use more questions to train with (since we are using all VQA questions, not just counting ones), but fewer counting questions (since we are not using counting questions from Visual Genome), so the impact of this difference on counting performance is unclear.\n- It is unclear to us whether their method is usable within usual VQA architectures due to their loss not applying when the question is not a counting question. Our model is just a regular VQA model with our component attached to it without relying on a specific loss function, so the usual losses that are used in VQA can be used directly. This allows our component to be easily used in other VQA models, unlike their method.\n- Their method has the advantage of interpretability of the outputs. To understand a predicted count one can look at the *set of objects* it counted (this is something that our Reviewer3 wanted). Our method has the advantage of interpretability of the model weights and activations. To understand a predicted count one can look at the *activations through the component* with a clear interpretation for the activations, i.e. understanding *how* the model made the decision (though unlike their method, without a set of objects being obtained in the process, but a score for each object between 0 and 1).\n- In terms of performance, we can make some very rough comparisons with their numbers. The UpDown baseline that they re-implement is the same model architecture as used for the single-model results in our Table 1 by (Teney et al., 2017). This baseline model gets 47.4% accuracy on their dataset, which improves to 49.7% with their method (2.3% absolute improvement). Meanwhile, on number questions (a superset of counting questions, though mostly consisting of counting questions) with the regular VQA dataset, the same model architecture gets 43.9%, which improves to 51.4% with our model, a clearly much larger benefit (7.5%). 
Part of this is due to a stronger base model, but even then, the stronger baseline we compare against has a number accuracy of 46.6%, meaning that we have an absolute improvement of 4.8% with our model.\nThe improvement through our model on just the counting questions is even larger as we show in Table 2.", "Thank you for your update; we are glad to hear that you found the rebuttal convincing.\n\nWith regards to your comment about worries that the proposed model may be hard to reproduce due to its complexity: As we mention in a footnote in the paper, we will open-source all of our code soon. The complexity that you perceive can be boiled down to a sequence of simple tensor operations, so we think that it should be reasonably straightforward to implement in any modern Deep Learning framework. Here is the snippet of our model implementation in PyTorch (we do not rely on the dynamic computation graph feature of PyTorch), with the important bit being the forward function of the Counter class: https://gist.github.com/anonymous/669509edc32eb28cc508221de47baa43 . We will clean this and the rest of the code up to be easier to follow before release.", "\nSummary: \n- This paper proposes a hand-designed network architecture on a graph of object proposals to perform soft non-maximum suppression to get object count.\n\nContribution:\n- This paper proposes a new object counting module which operates on a graph of object proposals.\n\nClarity:\n- The paper is well written and clarity is good. Figures 2 & 3 help the readers understand the core algorithm.\n\nPros:\n- De-duplication modules of inter and intra object edges are interesting.\n- The proposed method improves the baseline by 5% on counting questions.\n\nCons:\n- The proposed model is pretty hand-crafted. I would recommend the authors to use something more general, like graph convolutional neural networks (Kipf & Welling, 2017) or graph gated neural networks (Li et al., 2016).\n- One major bottleneck of the model is that the proposals are not jointly finetuned. So if the proposals are missing a single object, this cannot really be counted. In short, if the proposals don’t have 100% recall, then the model is trained with a biased loss function which asks it to count all the objects even if some are already missing from the proposals. The paper didn’t study the recall of the proposals or how sensitive the threshold is.\n- The paper doesn’t study a simple baseline that just does NMS on the proposal domain.\n- The paper doesn’t compare experiment numbers with (Chattopadhyay et al., 2017).\n- The proposed algorithm doesn’t handle symmetry breaking when two edges are equally confident (in 4.2.2 it basically scales down both edges). This is similar to a density map approach and the problem is that the model doesn’t develop a notion of instance.\n- Compared to (Zhou et al., 2017), the proposed model does not improve much on the counting questions.\n- Since the authors have mentioned in the related work, it would also be more convincing if they show experimental results on CL\n\nConclusion:\n- I feel that the motivation is good, but the proposed model is too hand-crafted. Also, key experiments are missing: 1) NMS baseline 2) Comparison with VQA counting work (Chattopadhyay et al., 2017). Therefore I recommend reject.\n\nReferences:\n- Kipf, T.N., Welling, M., Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.\n- Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R. Gated Graph Sequence Neural Networks. 
ICLR 2016.\n\nUpdate:\nThank you for the rebuttal. The paper is revised and I saw the NMS baseline was added. I understood the reason not to compare with certain related work. The rebuttal is convincing and I decided to increase my rating, because adding the proposed counting module achieves a 5% increase in counting accuracy. However, I am a little worried that the proposed model may be hard to reproduce due to its complexity and therefore choose to give a 6.", "Summary\n - This paper mainly focuses on a counting problem in visual question answering (VQA) using an attention mechanism. The authors propose a differentiable counting component, which explicitly counts the number of objects. Given attention weights and corresponding proposals, the model deduplicates overlapping proposals by eliminating intra-object edges and inter-object edges using a graph representation of proposals. In experiments, the effectiveness of the proposed model is clearly shown in counting questions on both a synthetic toy dataset and the widely used VQA v2 dataset.\n\nStrengths\n - The proposed model begins with reasonable motivation and shows its effectiveness in experiments clearly. \n - The architecture of the proposed model looks natural and all components seem to have clear contribution to the model.\n - The proposed model can be easily applied to any VQA model using soft attention. \n - The paper is well written and the contribution is clear.\n\nWeaknesses\n - Although the proposed model is helpful to model counting information in VQA, it fails to show improvement with respect to a couple of important baselines: prediction from image representation only and from the combination of image representation and attention weights. \n - Qualitative examples of intermediate values in counting component--adjacency matrix (A), distance matrix (D) and count matrix (C)--need to be presented to show the contribution of each part, especially in the real examples that are not compatible with the strong assumptions in modeling counting component.\n\nComments\n - It is not clear if the value of count \"c\" is the same as the final answer in counting questions. \n\n", "This paper tackles the object counting problem in visual question answering. It is based on a two-stage method in which object proposals are generated in the first stage and weighted with attention. It proposes many heuristics to use the object feature and attention weights to find the correct count. In general, it treats all object proposals as nodes on the graph. With various agreement measures, it removes or merges edges and counts the final nodes. The method is evaluated on one synthetic toy dataset and one VQA v2 benchmark dataset. The experimental results on counting are promising. Although counting is important in VQA, the method is solving a very specific problem which cannot be generalized to other representation learning problems. Additionally, this method is built on a series of heuristics without sound theoretical justification, and these heuristics cannot be easily adapted to other machine learning applications. I thus believe the overall contribution is not sufficient for ICLR.\n\nPros:\n1. Well written paper with clear presentation of the method. \n2. Useful for the object counting problem.\n3. Experimental performance is convincing. \n\nCons:\n1. The application range of the method is very limited. \n2. The technique is built on a lot of heuristics without theoretical consideration. \n\nOther comments and questions:\n\n1. 
Determinantal point processes [1] should be able to help with correctly counting the objects given a proper construction of the similarity kernel. They may also lead to simpler solutions. For example, they can be used for deduplication using A (eq 1) as the similarity matrix. \n\n2. Can the authors provide an analysis of the scalability of the proposed method? When the number of objects is very large, the graph could be huge. What are the memory requirements and computational complexity of the proposed method? \nAt the end of section 3, it is mentioned that \"without normalization,\" the method will not scale to an arbitrary number of objects. I think that this will only be a problem for extremely large numbers. I wonder whether the proposed method scales. \n\n3. Could the authors provide more insight into why structured attention (etc.) did not significantly improve the result? Theoretically, it solves the soft attention problems. \n\n4. The definition of the output confidence (section 4.3.1) needs more motivation and theoretical justification. \n\n[1] Kulesza, Alex, and Ben Taskar. \"Determinantal point processes for machine learning.\" Foundations and Trends® in Machine Learning 5.2–3 (2012): 123-286.", "We have updated the paper, clarifying some things that reviewers may have misunderstood, and added experimental results that the reviewers wanted to see.\n\n- Clarified that results in (Chattopadhyay et al., 2017) are not comparable at the end of section 2. (Reviewer3)\n- Improved the explanation of why attention with sigmoid normalization (or similar) produces feature vectors that do not lend themselves to counting at all in section 3. (Reviewer2)\n- Included NMS results in Table 2. (Reviewer3)\n- Clarified comparisons with (Zhou et al., 2017) in section 5.2.1. (Reviewer3)\n- Included some qualitative examples of matrices A, D, and C in Appendix E. (Reviewer1)\n- Explicitly stated how the component can be used outside of VQA in section 6. (Reviewer2)\n", "Thank you for the review. We are glad to hear that you think that everything is reasonably motivated, the results are good, and that there is a clear contribution with good writing.\n\n\n- Fails to show improvement with respect to a couple of important baselines\n\nCan you elaborate on what you mean by \"image representation only\" and \"combination of image representation and attention weights\"? We are not sure whether you are referring to existing experiments in the paper or experiments that you would like to see (we are happy to include these baselines if reasonable). Just to clarify the existing baselines that we already compare against: all the models in Table 1 and Table 2 use soft attention with softmax normalization on object proposal features. We did not list models using pixel representations, since they are outperformed by models using object proposals in all question categories of VQA (Anderson et al., 2017). Models with attention have been shown to outperform models without attention many times in the literature (e.g. the survey in [1]).\n\n\n- Qualitative examples of matrices A, D, and C are needed\n\nThank you for the good idea. We will include some examples of these in a revision of the paper.\n\n\n- Unclear whether c is the same as the predicted answer\n\nc is not necessarily the predicted answer; it is just a feature (which gets turned into a linear interpolation of appropriate one-hot encoded vectors as per equation 8; see the illustrative sketch below) that the answer classifier makes use of. 
Since not all questions in VQA are counting questions, the model learns how and when to use this feature. The existing model descriptions in 5.1 and 5.2, along with the diagram of the VQA model architecture, should make this clear.\n\n\nAdditional references:\n[1] Damien Teney, Qi Wu, and Anton van den Hengel. Visual Question Answering: A Tutorial. In IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 63-75. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8103161&isnumber=8103076", "- Claim that \"without norm\" the method doesn't scale to arbitrary numbers.\n\nTo keep things clear: we are talking about methods that apply unnormalized attention weights to a set of feature vectors and try to count from the resulting single feature vector (or multiple vectors if there are multiple attention glimpses). When referring to not being able to scale to arbitrary numbers, we are referring to numbers even beyond 2. Using this method, scaling the whole feature vector should have the effect of scaling the count to predict, since that is exactly what happens when the attention weights are not normalized and the input is simply duplicated (as per the example in section 3). The issue is that the model has to learn that joint scaling of all features (2048 features per glimpse in our case) is related to count, but scaling of individual features is still related to different levels of expression of that feature in the input. These two properties seem contradictory to us; when feature vectors in the input set can vary from each other greatly, it is unclear to us how the joint-scaling-of-all-features property can be learnt at all beyond tiny changes in scale. It also contradicts the common notion of being able to approximately linearly interpolate in the latent space of deep neural networks, since the magnitude of a feature is no longer directly related to the expression of that feature, but depends on the magnitude of all other features in a fairly complex relationship. Empirically, using a sigmoid activation or simple averaging across the set of feature vectors without attention has not helped in several previous works, either for counting or for overall performance, as mentioned at the end of section 3.\n\nThus, we highly doubt that sigmoid activation or similar methods that do not normalize attention weights to sum to 1, despite leaving more information about counting in the feature vector than softmax normalization, can lead to feature vectors from which counting can be learned at all. As we discuss in section 2, any improvement in counting you see despite this requires counting information to already be present in the input, which limits the objects that can be counted to things that the object proposal network can distinguish. When saying that it does not scale to arbitrary numbers, we were being conservative, in that we can imagine that in special cases it might be possible to learn to relate very small joint feature scaling to counting information, but not generally or in practice.\n\nWe realize that this is a slightly different explanation from the one we provide in the paper and will update the paper to make this clearer accordingly.\n\n- Insights on why structured attention did not significantly improve the result\n\nWith structured attention, each individual glimpse can select a contiguous region of pixels associated with individual objects, unlike regular soft attention. 
However, it still lacks a way of extracting information from the attention map itself, which is necessary for counting, as we argue in section 3. In order to attend to multiple objects, multiple glimpses are needed as well. This makes structured attention on pixels very similar to soft attention on object proposals; the structure in the attention acts as an implicit object detector. Thus, while structured attention solves one problem with soft attention -- the same one that using object proposals solves -- it is not enough to actually count.\n\n\n- Output confidence needs more motivation and theoretical justification\n\nWe think that we have provided sufficient motivation for the output confidence in the paper.\nHere is an expanded account of its consequences: The output confidence can learn to suppress the magnitude of the counting features on an example-by-example basis, based on how close the values of the vector a and the matrix D are to the ideal values. When the predicted count is inaccurate during training for certain values of D and a, the gradient update reduces the magnitude of the counting features for those values through the output confidence. This lets the counting component learn when it is inaccurate and allows the VQA model using the component to compensate for it instead of blindly trusting that the counting features are always reliable. We found this to be a useful -- though not absolutely necessary -- step that slightly improves counting performance in practice.\n\n\nAdditional references:\n[1] Zelda Mariet and Suvrit Sra. Diversity Networks: Neural Network Compression using Determinantal Point Processes. In ICLR, 2016.", "- Built on a lot of heuristics without theoretical consideration\n\nWe disagree that there is no theoretical consideration in our method. Mathematically, we are handling the edge cases for counting correctly, and all other behaviour is based on a learned interpolation between the edge cases, guaranteeing at least some sort of sensible counting. It is true that it is not based on a well-established mathematical framework, but we are successfully solving a practical problem with a practical solution. Every step within the component is justified by identifying what property is necessary in order for the component to produce correct counts, followed by a way of achieving that property. After building up the model this way, it produces perfect counts in the cases when our modeling assumptions hold (the properties that we claimed are necessary lead to this) and its performance degrades gracefully under uncertainty when our modeling assumptions do not hold (another consequence of the properties that we enforce). Thus, we disagree with your claim that these are theoretically unmotivated heuristics; every part of the component has a reasonable, theoretically-based motivation grounded in what a correct counting mechanism should do. AnonReviewer1 agrees that all steps are reasonably motivated in their review.\n\nWe do not think that this paper should be rejected simply because it does not use an established mathematical model to solve a task. This complaint seems to apply to much of Deep Learning and Computer Vision research in general, not this paper in particular.\n\n\n- Determinantal point process should be able to help\n\nThis connection looks interesting, but from the survey it is not clear that it is at all applicable to counting. The mathematics may be well grounded, but the assumptions about diversity seem very ad-hoc. 
Thus, we think that using DPPs for counting would be an unjustified heuristic. Our approach may not be grounded in a mathematical formulation, but we are correctly handling edge cases and allowing a suitable interpolation to be learned from data. To our knowledge, DPPs have found very little use in the field of Deep Learning so far. We are only aware of [1], which uses DPPs for model compression, which is evidently unrelated to the task of counting.\n\n\n- Scalability of the proposed method\n\nAs stated in section 4.2.2, the time complexity is Theta(n^3), where n is the number of objects used. This can be reduced to Theta(n^2) using the alternative similarity measure that we mention just before, though all reported results use the former similarity. The space complexity is Theta(n^2), as the matrices A, D, and C each have n^2 elements. Here are some numbers from our implementation, showing the approximate time taken for one training epoch of the whole model on VQA v2 and the amount of memory allocated on a Titan X (Pascal) GPU. The times for the low max object counts are very rough and are averaged across a few epochs -- as training time typically changes by about 10 seconds epoch by epoch -- and memory usage can vary slightly between runs.\n\nmax objects, time (minutes), memory (MiB)\n1, 5:50, 3701\n10 (default), 6:00, 3715\n25, 6:15, 4095\n50, 9:50, 7393\n60, 12:50, 10779\n\nAs you can see, increasing the number of objects from 10 to 25 incurs only small additional computational costs, and even going to 50 objects, operating on a 2500-entry matrix per example, is still quite reasonable. For the practical cases that we are dealing with in VQA, where we limit the maximum number of objects to 10 -- this covers the vast majority of counting questions -- the counting component uses marginal amounts of extra computational resources. We do not claim that the method will be applicable to huge graphs, and it is probably not the best mechanism for counting a large number of objects. There are also several ways of reducing the run time for large numbers of objects, e.g. through a k-d tree with the reasonable assumption that when there are many objects to count, an object does not overlap with all other objects. The main difficulty of VQA is that most of the time the number of objects to count is relatively small; in contrast, the queries of what objects to count and the spatial relationships between objects can be complex.", "Thank you for your review. We are glad to see that you think the paper is well written and that the evidence of its usefulness for object counting is convincing. Given this, we are slightly surprised by the low rating, as we think that the pros you list are very concrete compared to the cons.\nIn summary, we disagree with the main claims that the application range is very limited and that it is built on heuristics without theoretical considerations.\n\n\n- Method is solving a very specific problem which cannot be generalized to other representation learning problems / application range of the method is very limited\n\nWe disagree that the application of this method is limited to VQA. In fact, the toy task that we perform experiments on is clearly not a VQA task and shows that the component is applicable beyond VQA. Counting tasks have many practical real-world applications on their own.\n\nAn immediate research area aside from VQA in which it can be used is image caption generation, where an attention LSTM can be used to attend over objects (Anderson et al., 2017). 
In this task, counting information should be useful for generating good captions and the attention mechanism has the same limitations as we discuss in section 3, which our counting component can be used for. Any task where counting of specific objects (without necessarily conditioning on a question input) is required but no ground truth bounding boxes are available -- which limits the use of some conventional methods for training counting models -- can use a pre-trained region proposal network. The score of a binary classification on each proposal whether it shows the object to count can be used as attention weight in the component to eliminate duplicates without the need for post-processing with non-maximum suppression and score thresholding, both of which require hyper-parameter tuning and disallow end-to-end training. This system can be trained end-to-end with the counting module, allowing a sensible approach for handling duplicate and overlapping bounding boxes.\n\nMore generally, tasks where a set of potential objects (wherein each object is given a score of how relevant it is) with possible duplicates needs to be counted, and duplicates can be identified through a pairwise distance metric (in the image case these are the 1 - IoU distances of bounding boxes), the component can be used for counting the objects with duplicates eliminated in a fully differentiable manner. Most importantly, appropriate relevancy scores and distances do not need to be specified explicitly as they can be learnt from data.\n\nAs we have shown in the paper, the component is robust enough to count without per-object ground-truth as supervision; only the aggregate count is needed. This makes it applicable to a wide variety of counting tasks beyond VQA.\n\n\n- Heuristics cannot be easily adapted to other machine learning applications\n\nWhile the properties that we use are specifically targeted towards counting, we think that there is value to be gained for the wider research community from our general approach to the problem. Our insight about the necessity of using the attention map itself, not just the feature vector coming out of the attention, may lead to recognition of problems that soft attention can introduce in other domains such as NLP, which in turn can lead to new solutions. The approach of a learned interpolation between correct behaviours, enforced by the network structure through monotonicity, may also be useful. For the monotonicity property, networks with more nonlinearities such as Deep Lattice Networks (You et al., 2017) can be used as well as we mention in Appendix A. Our way of treating the counting problem as a graphical problem and decomposing it into intra- and inter-object relations may find use in problems where there is some notion of object under uncertainty. The approach of creating a fully differentiable model despite many operations naively being non-differentiable, in particular when we want to remove certain edges but instead use equivalence under a sum to our benefit, contributes to the growing literature (e.g. (Jaderberg et al., 2015)) of making operations required for certain tasks differentiable and thus trainable in a deep neural network setting.", "- Not much improvement compared to (Zhou et al., 2017)\n\nThis is not a like-to-like comparison. 
Note that their model is an ensemble of 8 models wherein each individual model already performs significantly better than our baseline without counting module, due to the use of their state-of-the-art multimodal pooling method and pre-trained word embeddings. To be precise, their single model has a better overall accuracy by about 1.3%, which widens to a difference of about 3.2% after ensembling (we have only recently obtained their single-model results and will update the paper accordingly to make this clearer). Their single-model also exploits the existing primitive features better and starts with 2.7% better accuracy in number questions (these are the primitive counting features we discuss in section 2). Despite this difference in starting performance, our relatively simple baseline without their elaborate multimodal fusion outperforms their single model by over 2% and even their ensemble by about 0.3% in the number category, just by including the counting component in the model and without ensembling our model. Since their method should improve the quality of attention maps, we expect the benefit of the counting module -- which relies on the quality of attention maps -- to stack with their improvements. Keep in mind that their soft attention uses regular softmax normalization, which means that the limitations with respect to counting that we point out in section 3 apply to their model. We emphasize that the main comparison in Table 1 to make is: the performance on the number category of the baseline with counting module improves substantially compared to the baseline without the counting module and is also the best-ever reported accuracy on number questions. This shows that the more detailed results in Table 2 on the validation set are not simply due to hill-climbing on the validation set, since the test set of VQA v2 in Table 1 is only allowed to be evaluated on at most 5 times in total.\n\n\n- More convincing with results on CL\n\nMore results are almost always more convincing, but we feel like there is not much value to be gained by additionally evaluating on CLEVR (assuming that you mean CLEVR with CL) and there is a limited amount of experiments that we can put in a paper. This is mainly due to our use of bounding boxes -- non-standard for this dataset and thus making comparisons to existing work less useful -- and our focus on being able to count in the difficult setting demanded of by VQA v2: noisy attention maps (due to language and attention model with free-form human-posed questions) and noisy bounding boxes overlapping in complex ways (due to object proposal model on real images). These would be present in CLEVR to some extent as well, but in terms of synthetic tasks, we think that it is more useful for us to study counting behaviour on our toy dataset and in terms of VQA tasks, VQA v2 is more suitable for showing the benefits of our module than CLEVR.\n\nAdditional references:\n[1] Damien Teney, Qi Wu, and Anton van den Hengel. Visual Question Answering: A Tutorial. In IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 63-75. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8103161&isnumber=8103076\n[2] Ilija Ilievski and Jiashi Feng. Multimodal Learning and Reasoning for Visual Question Answering. In NIPS, 2017. http://papers.nips.cc/paper/6658-multimodal-learning-and-reasoning-for-visual-question-answering.pdf\n[3] Damien Teney, Lingqiao Liu, and Anton van den Hengel. Graph-Structured Representations for Visual Question Answering. In CVPR, 2017. 
http://openaccess.thecvf.com/content_cvpr_2017/papers/Teney_Graph-Structured_Representations_for_CVPR_2017_paper.pdf", "- Doesn't compare experiment numbers with (Chattopadhyay et al., 2017)\n\nThere are several major differences that make a direct comparison to the results in their work not useful (we have confirmed these differences with Chattopadhyay).\n\n1. They create a subset of the counting question subset of VQA v1, but their model is not trained on it. It is trained on the ~80 000 COCO training images with a ground truth labeling of how many objects there are for each of the 80 COCO classes, in essence giving them ~6 400 000 counts to train with. In contrast, there are only ~50 000 counting questions in the training set of VQA v2 (which is around twice the size of VQA v1), with the added difficulty of the types of objects being arbitrarily complex (e.g. \"how many people\" vs \"how many people wearing brown hats\").\n2. When they evaluate their model on VQA, they select a small subset (roughly 10%--20% of the counting question subset in VQA v1) where the ground-truth count of the COCO class that their NLP processing method extracts from the question is the same as the VQA ground-truth count. During evaluation, they run their method on the input image as usual, and simply use the output corresponding to the extracted class as the prediction. This means that they are essentially evaluating on a subset of the COCO data that they previously evaluated on already, or conversely, only using the subset of VQA that basically matches the COCO validation data anyway. We feel that it is a stretch to call this a VQA task, since at no point is any VQA actually performed in their model.\n3. The VQA models are solving a slightly different task: unlike their proposed models, the VQA models are processing a natural language question, which may go wrong for the VQA models but is ensured to be correct for their proposed models (since they discard any examples where their NLP processing scheme gets it wrong). Additionally, VQA models are trained not only to try to answer counting questions, but also other questions.\n\nDue to these disadvantages for regular VQA models in their setup, we doubt that the performance of their model can be adequately compared to ours. In order for a comparison to be useful, we would have to train with the same training data that they used, which we feel is too much of a departure from the VQA setting in this paper; the general structure of the models they use doesn't have much to do with VQA models in the first place (their models regress counts for the 80 COCO classes simultaneously, whereas VQA models have an additional input -- the question -- which determines what to count, and then classify a count answer).\n\nWe agree that superficially their results look related, and we will clarify this matter in our next paper revision.\n\n\n- Doesn't handle symmetry breaking\n\nWe think that when the goal is to count, it is better for counting performance not to break symmetries, without having the limitation of producing discrete instances. For example, consider the case where there are 4 non-overlapping objects, each with a weight of 1/2. All edges have the same weight, but it is not clear that there is a sensible way to break the symmetry here. There is much precedent in Machine Learning for this type of approach, e.g. 
in a mixture of 2 Gaussians model, a sample in-between the two distributions is assigned to each distribution with an equal weight, rather than having a hard assignment of this sample to one distribution or the other.\n\nWe agree that having instances has clear benefits over the density-map-style approach in terms of interpretability. However, we don't think that current attention models are good enough yet, i.e. consistently produce scores either very close to 0 or 1, for an approach with instances to be as accurate as one without. Thus, we think that the density-map-like approach is appropriate for counting and not a problem.\n\n", "- Proposals are not jointly finetuned, did not study recall of the proposals and how sensitive the threshold is\n\nCan you clarify what threshold you are referring to in your comment? There is no hard threshold anywhere in our model.\n\nThis is certainly a valid concern, but it applies to all VQA models using object proposals, not just our counting module. If the loss for counting is biased, then so is the loss for the rest of the model (e.g. \"what color is the car\" without an object proposal on the car). Joint training is nontrivial, since the architecture that generates the object proposal bounding boxes and features (Faster R-CNN) uses a two-stage approach for training anyway and requires ground truth bounding boxes of objects. It is not clear to us whether joint training is at all possible in VQA, since it does not have ground truth bounding boxes available. Empirically, we are getting a substantial improvement in counting performance, so we think that the lack of joint training is not a major issue. This issue is certainly something that can be looked into more in the future, but we do not see it as a shortcoming of our module in particular.\n\nWhile we do not think that it is our responsibility to evaluate proposal recall, we are looking into manually labeling a small subset of training examples to get a sense of how much of an issue this is in general. One thing to keep in mind is that the loss pushes the VQA model as a whole to predict a certain answer, not just the counting component itself. That means that a bias towards not recognizing some types of objects (either in the object proposal network or the attention mechanism) can be accounted for by the rest of the model by biasing the count predictions for these types of objects slightly upwards. Bounding boxes that capture different parts of one object (e.g. one capturing the upper half of a person and one capturing the lower half of the same person) can also still lead to a correct count prediction if the attention mechanism recognizes that those boxes should each be given half the usual attention weight. In general, it might be enough for the counting module to produce a sensible prediction as long as some number of bounding boxes cover all the required objects, not necessarily with one box per object.\n\n\n- Doesn't study a simple baseline that just does NMS on the proposal domain\n\nThank you for pointing out the lack of this baseline. We agree that this should have been included and we have started running experiments for this. Initial experiments suggest that when the counting module is replaced with the one-hot encoded number of objects determined by NMS (we are trying thresholds between 0.3 and 0.7), the performance is not much different, if at all, from the baseline without NMS. 
This applies to using one of the two attention maps (like the counting module) as well as the sum of the two attention maps for scoring the proposals (the lack of a gradient means that the model can't specialize the first attention map to locate the objects to count, so using the sum of the two attention maps might be more reasonable), which suggests that the piecewise constant gradient of NMS is a major issue (an illustrative sketch of this NMS-count baseline is given below). Once we have the full results, we will certainly include this information in a revision of the paper.", "Thank you for your review. We are happy to see that you think the paper is well written and that the deduplication steps in the module are interesting. Given the aptness of your comments, it seems like you understand the paper better than you are giving yourself credit for.\nIn summary, regarding your main complaints: we argue that a hand-crafted approach is reasonable for the current state of VQA, NMS baselines will be supplied, and comparisons to (Chattopadhyay et al., 2017) are not useful. \n\n- Proposed model is pretty hand-crafted, would recommend the authors to use something more general, like graph convolutional neural networks.\n\nIn summary, we think that with current VQA models, counting needs to be hand-crafted to some extent, that hand-crafting counting has various useful properties, and that we have tried a non-handcrafted approach similar to graph convolutional networks in the past without success.\n\nWe think that with the current state of VQA models on real images, it is unreasonable to expect a general model to learn to count without hand-designing some aspect of it in order for the model to learn. As pointed out many times, such as in (Jabri et al., 2016) and [1], and as seen from the balanced pair accuracies in Table 2, much of current VQA performance is due to fitting better to spurious dataset biases, with little \"actual\" learning of how to answer the questions. The necessity of a modularized approach is also recognized in a recently published work at NIPS [2], where they combine a variety of different types of models, each suited to a different pre-defined task (e.g. one face detection network, one scene classification network, etc.). The aspect that makes the counting task within VQA special is that there is some relatively easy-to-isolate logic to it, which is the focus of our module through soft deduplication and aggregation. Even in humans, counting is a highly structured process when going beyond the range where humans can subitize. While it would certainly be better if a neural network could discover the logic required for counting by itself, we think that a hand-engineered approach is perfectly valid for solving this problem given the current state of research and performance on VQA.\n\nThe hand-crafted nature gives the component several useful properties. Due to the structure of the component, it performs more-or-less correct counting by default, even when none of the parameters in it have been learnt yet. This allows it to generalize more easily to a test set with fewer training samples and under much noise, as is the case for VQA. Since all steps within the component have a clear motivation, the parameters that it learns are interpretable and can be used to explain why it predicted a certain count. Changing the input has a predictable effect on the output due to the component structure enforcing monotonicity. This is particularly useful in comparison to a general deep neural network, which suffers from adversarial inputs causing unexpected predictions. 
The simple nature of the module with relatively few parameters keeps the computational costs low and allows it to be integrated into non-VQA tasks fairly easily. Note that the modeling assumptions that we make are not specific to VQA, but are assumptions about what a sensible counting method should do in ideal cases.\n\nIn our experience, integrating other types of models into VQA models is difficult without either inhibiting general performance or simply achieving essentially the same level of performance. As far as we are aware, there has not been any work which successfully uses a graph-based approach to VQA on real images. We did try to integrate relation networks (Santoro et al., 2017) into a VQA model, without much success in terms of performance on counting nor in any other category (though this obviously does not mean that a successful integration is not possible). Relation networks are a natural choice for VQA v2, perhaps more so than the neural networks for arbitrary graphs you suggest: they have been shown to work well for VQA on the CLEVR dataset and treat objects as nodes in a complete graph, similar to what our module uses as input. With our module, we at least show that a graph-based representation can find some use in VQA on real images in the first place and might motivate further research into graph-based approaches. In general, the sorts of graph-based approaches that you mention have only been successfully applied on the abstract VQA dataset so far [3], where a precise scene graph of synthetic images is used as input, not real images. On that dataset, good improvements in counting have been achieved by a general graph-based network. We imagine that this is due to the much less noisy nature of scene graphs on synthetic data compared to using pixel-based representations or object proposals on real images, making counting a much easier task in the abstract VQA case." ]
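Editorial note: the NMS-count baseline described in the rebuttal above can be made concrete with a short sketch. This is not the authors' code; proposals are scored by one of the attention maps, boxes overlapping a higher-scoring kept box beyond an IoU threshold are suppressed, and the number of survivors becomes the (one-hot encoded) count. All names and the 0.5 default threshold are illustrative assumptions; the rebuttal reports trying thresholds between 0.3 and 0.7.

```python
import torch

def iou(a, b):
    # a, b: (x1, y1, x2, y2) corner-format box tensors
    x1, y1 = torch.max(a[0], b[0]), torch.max(a[1], b[1])
    x2, y2 = torch.min(a[2], b[2]), torch.min(a[3], b[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms_count(boxes, attention, iou_threshold=0.5):
    # Greedy NMS: keep a box only if it overlaps no higher-scoring kept box.
    kept = []
    for i in torch.argsort(attention, descending=True).tolist():
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in kept):
            kept.append(i)
    return len(kept)  # one-hot encoded before being fed to the answer classifier
```

As the rebuttal notes, this count is a piecewise constant function of the attention weights, so no gradient flows back into the attention model — a plausible reason why the learned soft deduplication outperforms it.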
[ -1, -1, -1, 6, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B12Js_yRb", "iclr_2018_B12Js_yRb", "H1GhmwqgG", "iclr_2018_B12Js_yRb", "iclr_2018_B12Js_yRb", "iclr_2018_B12Js_yRb", "iclr_2018_B12Js_yRb", "Hkwum9YgM", "S15E4oHbM", "Sy-67iHZz", "SJ11jzclf", "Sk0dZsS-M", "B17QZjB-M", "rkkDeiB-M", "H1GhmwqgG" ]
iclr_2018_HJsjkMb0Z
i-RevNet: Deep Invertible Networks
It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations in most commonly used network architectures. In this paper we show via a one-to-one mapping that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for one, because the local inversion is ill-conditioned; we overcome this by providing an explicit inverse. An analysis of i-RevNet’s learned representations suggests an alternative explanation for the success of deep networks by a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural image representations.
accepted-poster-papers
This paper constructs a variant of deep CNNs which is provably invertible, by replacing spatial pooling with multiple shifted spatial downsamplings and capitalizing on residual layers to define a simple, invertible representation. The authors show that the resulting representation is equally effective at large-scale object classification, opening up a number of interesting questions. Reviewers agreed this is a strong contribution, despite some comments about the significance of the result; i.e., why is invertibility a "surprising" property for learnability, in the sense that F(x) = {x, phi(x)}, where phi is a standard CNN, satisfies both properties: it is invertible, and linear measurements of F produce good classification. All in all, this will be a great contribution to the conference.
train
[ "BJOsVtsNz", "HyABCoKVz", "BJzKguLVz", "HJRdhx5eM", "rJxrJe9eG", "HkxP0bceM", "SyyNsbbNz", "H1ia7wJ4f", "ByS7UBGQz", "HyW2XBf7z", "By9vGBMXM", "H1dDbBfQf" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author" ]
[ "\n1) As mentioned in section 3.1 and detailed in the 4th paragraph of section 3.2, $\\tilde{S}$ splits the input into two tensors. In our case, we stick to the choice of Revnets and split the number of input channels in half. You will be able to check how this is done in detail in the code we will release alongside the de-anonymized version of the paper.\n\n2) Thanks for this question. What is displayed in fig. 5, are not noisy images, but rather precise reconstructions of an interpolation in feature space. Images obtained by this interpolation have no particular reason to look like real images, as their representation suffers from the curse of dimensionality. However, as indicated in the paper, it indeed opens the question about the structure of the feature space.", "I am interested in your paper. But I have two questions:\n\n1) As mentioned, features at each layer are decomposed into two parts. So how to decompose the input images? (I don't find the corresponding descriptions in your paper.) \n\n2) The reconstructed sequences x_t in Fig.5 contain lots of noise and are with low visual qualities. Can you explain the reasons?", "Thank you for the detail response and adding new sets of experiments for the questions that was raised and I updated the score to reflect the changes.", "ICLR I-Revnet\n\n\nThis paper build on top of ReVNets (Gomez et al., 2017) and introduce a variant that is fully \ninvertible. The model performs comparable to its variants without any loss of information.\nThey analyze the model and its learned representations from multiple perspectives in detail. \n \nIt is indeed very interesting an thought provoking to see that contrary to popular belief in the community no information loss is necessary to learn good generalizable features. What is missing, is more motivation for why such a property is desirable. As the authors mentions the model size has almost doubled compared to comparable ResNet. And the study of the property of the learned futures might probably limited to this i-RevNet only. It would be good to see more motivation, beside the valuable insight of knowing it’s possible.\n\nGenerally the paper is well written and readable, but few minor comments:\n1-Better formatting such as putting results in model sizes, etc in tables will make them easier to find.\n2-Writing down more in detail 3.1, ideally in algorithm or equation than all in text as makes it hard to read in current format.", "In this paper, the authors propose deep architecture that preserves mutual information between the input and the hidden representation and show that the loss of information can only occur at the final layer. They illustrate empirically that the loss of information can be avoided on large-scale classification such as ImageNet and propose to build an invertible deep network that is capable of retaining the information of the input signal through all the layers of the network until the last layer where the input could be reconstructed.\n\nThe authors demonstrate that progressive contraction and separation of the information can be obtained while at the same time allowing an exact reconstruction of the signal.\n\nAs it requires a special care to design an invertible architecture, the authors architecture is based on the recent reversible residual network (RevNet) introduced in (Gomez et al., 2017) and an invertible down-sampling operator introduced in (Shi et al., 2016). The inverse (classification) path of the network uses the same convolutions as the forward (reconstructing) one. 
It also uses subtraction operations instead of additions in the output computation in order to reconstruct intermediate and input layers.\n\nTo show the effectiveness of their approach on a large-scale classification problem, the authors report top-1 error rates on the validation set of ILSVRC-2012. The obtained result is competitive with the original ResNet and the RevNet models. However, the proposed approach is expensive in terms of parameter budget, as it requires almost 6.5 times more parameters than the RevNet and the ResNet architectures. Still, the classification and reconstruction results are quite impressive, as the work is the first empirical evidence that learning an invertible representation that preserves information about the input is possible on large-scale classification tasks. It is worth noting that recently (Shwartz-Ziv and Tishby) demonstrated, not on large-scale datasets but on small ones, that an optimal representation for a classification task must reduce as much uninformative variability as possible while maximizing the mutual information between the desired output and its representation in order to discriminate as much as possible between classes. This is called the “information bottleneck principle”. The submitted paper shows that this principle is not a necessary condition for large-scale classification.\n\nThe proposed approach is potentially of great benefit. It is also simple and easy to understand. The paper is well written and the authors position their work with respect to what has been done before. The spectral analysis of the differential operator in section 4.1 provides another motivation for the “hard-constrained” invertible architecture. Section 4.2 illustrates the ability of the network to reconstruct input signals. The visualization obtained suggests that the network performs linear separation between complex learned factors. Section 5 shows that even when using either an SVM or a Nearest Neighbor classifier on n extracted features from a layer in the network, both classifiers progressively improve with deeper layers. When the first d principal components are used to summarize the n extracted features, the SVM and NN classifiers perform better when d is bigger. This shows that the deeper the network gets, the more linearly separable and contracted the learned representations are.\n\nIn the conclusion, the authors state the following: “The absence of loss of information is surprising, given the wide believe, that discarding information is essential for learning representations that generalize well to unseen data”. Indeed, the authors have succeeded in showing that this is not necessarily the case. However, the loss of information might be necessary to generalize well on unseen data and at the same time minimize the parameter budget for a given classification task.\n", "\n\nThe paper is well written and easy to follow. The main contribution is to propose a variant of the RevNet architecture that has a built-in pseudo-inverse, allowing for easy inversion. The results are very surprising in my view: the proposed architecture is nearly invertible and is able to achieve performance similar to highly competitive variants: ResNets and RevNets.\n\nThe main contribution is to use linear and invertible operators (pixel shuffle) for performing downsampling, instead of non-invertible variants like spatial pooling. While the change is small, it is conceptually very important.\n\nCould you please comment on the training time? 
Although this is not the point of the paper, it would be very informative to include learning curves. Maybe discarding information is not essential for learning (which is surprising), but the cost of not doing so is paid in learning time. Stating this trade-off would be informative. If I understand correctly, the training runs for about 150 epochs, which is maybe double what the baseline ResNet would require?\n\nIn Section 4.2 the authors show samples obtained by the pseudo-inverse and study the properties of the representations learned by the model. I find this section really interesting. Further analysis would make the paper stronger.\n\nAre the images used for the interpolation train or test images?\n\nI assume that the network evaluated with the Basel Faces dataset is the same one trained on ImageNet; is that the case?\n\nIn particular, it would be interesting (not required) to evaluate if the learned representation is able to linearize a variety of geometric image transformations in a controlled setting, as done in:\n\nHénaff, O., and Simoncelli, E. \"Geodesics of learned representations.\" arXiv preprint arXiv:1511.06394 (2015).\n\nCould you please clarify what you mean by fine-tuning the last layer with dropout?\n\nThe authors should cite the work on learning invertible functions with a tractable Jacobian determinant (and exact and tractable log-likelihood evaluation) for generative modeling. Clearly the goals are different, but it is nevertheless very related. Specifically:\n\nDinh, L. et al. \"NICE: Non-linear independent components estimation.\" arXiv preprint arXiv:1410.8516 (2014).\n\n\nDinh, L. et al. \"Density estimation using Real NVP.\" arXiv preprint arXiv:1605.08803 (2016).\n\nThe authors mention that the forward pass of the network does not seem to suffer from significant instabilities. It would be very good to empirically evaluate this claim.\n", "\nAt a given block, the tensor $\\tilde x_j$ can potentially have a different size from $x_j$ without loss of generality. Consequently, as the bottleneck layer $\\mathcal{F}_{j+1}$ is applied to $\\tilde x_j$, its output must match the shape of $x_j$. This is done, for instance, by applying an intermediary stride via $\\mathcal{F}_{j+1}$ (it is also possible to upsample its output), or by increasing/reducing the channel size, according to the size of $x_j$. \n\nIn our implementation, and for consistency, we followed the approach of the RevNet, which down-samples two interlaced blocks at a given depth $j$ (e.g. $\\{x_j,\\tilde x_j\\}$). This means that two successive blocks (and thus also $S_j$ & $S_{j+1}$) of an $i$-RevNet work in concert, in order to have downsampled the signals $\\{x_{j+2},\\tilde x_{j+2}\\}$ by a factor $2^2$ w.r.t. the depth $j$ (a schematic sketch of such a block is given below).\n\nWe will add a clarification of this to the camera-ready version. As mentioned in the manuscript, we will also release our code so you can check how this is implemented in detail.\n\nWe thank you very much again for your comment!", "I have a question regarding the description of S_j in section 3.2. It's mentioned that \"at each layer of depth j = 3, 21, 69, 285, the spatial resolution is reduced by 2^2 via S_j\". Consider the case when a tensor with N channels of spatial resolution M by M is passing through layer j. If S_j is a downsampling operation (i.e. j=3, 21, 69 or 285), then x_tilda_{j+1} will have 4N channels of size M/2 by M/2 while x_{j+1} will still have N channels of size M by M. 
This means that the addition operation in layer j+1 will be an addition between two tensors of unequal dimension. How is this dealt with for the bijective i-RevNet? Thank you!", "We thank the reviewer very much for raising many interesting and important points. Furthermore, we thank the reviewer for acknowledging that the presented results are surprising and our technical contributions conceptually important. We are also pleased that the reviewer finds the analysis of the learned representation very interesting. \n\nTo open up another dimension of the analysis, we have added a model which replaces the initial injective operator with a bijective operator as used in later layers. This model has almost the same number of parameters as the baseline and trains about a day faster, albeit performing 1.5% worse. This is to show that model size can be reduced substantially while the invertibility property improves.\n\n ==> Maybe discarding information is not essential for learning (which is surprising), but the cost of not doing so is paid in learning time.\n\nThank you for raising this interesting point.\nWe have added plots of the loss curves to the paper that show very similar training behaviour for an i-RevNet compared to a non-invertible ResNet baseline. Training hyperparameters (e.g. learning rate schedule, training iterations, regularization) are identical for all models we have analyzed in the paper.\nThus, there does not seem to be a cost to pay for not discarding information in terms of convergence behaviour.\n\n==> Are the images used for the interpolation train or test images? \n\nThe images used for interpolation are partially from datasets not seen during training (describable textures, Basel faces) and partially from the ImageNet training set. All interpolations have been obtained from the ILSVRC-2012-trained model.\n\n==> Could you please clarify what you mean by fine-tuning the last layer with dropout? \n\nWe thank the reviewer for raising this question. For the sake of brevity, we have removed this fine-tuning in the current revision entirely. This change only affected Figure 4; we have updated the figure with interpolations from the newly added bijective model.\n\n==> The authors mention that the forward pass of the network does not seem to suffer from significant instabilities. It would be very good to empirically evaluate this claim. \n\nThank you for this important remark; we have empirically evaluated our claims by measuring the normalized l2 error between the original and the reconstruction on the whole validation set of ILSVRC-2012 and on randomly drawn uniform noise \\in [-1,1], with the same number of draws as the size of the validation set. We report the expectation of the error over all samples. The results show no significant instabilities and the error is visually imperceptible:\n\ni-RevNet bijective:\nILSVRC-2012 validation set reconstruction error: 5.17e-06\n50k uniform noise draws reconstruction error: 2.63e-06\n\ni-RevNet injective:\nILSVRC-2012 validation set reconstruction error: 8.26e-7\n50k uniform noise draws reconstruction error: 5.52e-07\n \nWe thank the reviewer for the additional references; we have added NICE and Real-NVP to the related work section and discussed their relationship to our work.\n\nTo add results on more controlled geometric transformations, we have added interpolations between small geometrical perturbations to the reconstruction experiment. \n \nWe thank the reviewer once again for the very interesting and important remarks. 
We believe they have substantially improved the manuscript and helped to clarify many important points.\n", "We thank the reviewer very much for the valuable comments and for acknowledging that our main claims are very interesting and thought-provoking. In the following, we will elaborate on the increased model size and usefulness of such an architecture in detail. \n\nTo add another dimension to the model analysis and to shed light on the necessary model size, we have added a model which replaces the initial injective operator with a bijective operator as used in later layers. This model has almost the same number of parameters as the baselines and trains about a day faster, albeit performs worse by 1.5%. This is to show, that model size can be reduced substantially while the invertibility property improves.\n\n==> The authors mention model size has almost doubled \n\nThanks to this important remark, we have added another model that shows it is possible to avoid an excessive increase of model size in i-RevNets. The newly added model has 29M parameters as opposed to 28M in the RevNet baseline while having a top-1 accuracy of 73.3%, which is ~1.5% worse than the RevNet baseline. \n\nWe thank the reviewer once again for raising this point and believe that the newly introduced model makes the paper even stronger, as it shows that the invertibility property can even be improved by decreasing model size.\n\n==> Does the analysis apply to other models as well? \n\nWe thank the reviewer for this question, section 5.1 shows that progressive properties that are known to hold for lossy AlexNet type models on limited datasets, are in fact also possible to obtain in an architecture that is not able to discard information about the input on a large-scale task like Imagenet.\n\nTo further strengthen the results, we have extended our analysis of the separation contraction to a ResNet baseline. Our results show, that the behaviour of the non-invertible ResNet is the same as the one observed in i-RevNets, substantiating the generality of our findings.\n\n==> Why is such a model desirable? \n\nThe core question we answer is if the success of deep convolutional networks is based on progressively discarding uninformative variability, which is a wide standing believe in the CV and ML community. We show this does not have to be the case, which has been acknowledged as \"important\", \"interesting\" and \"thought-provoking\" by all reviewers. Thus, the invertibility property is desirable for understanding the success of deep learning better and shed light on some of the necessities for it to work well.\nFrom a practical point of view, invertible models are useful for feature visualization [1,2,3] and possibly useful to overcome difficulties in upsampling/decoding pixel-wise tasks that are still quite challenging [4]. Further, lossless models might be a good candidate for transfer learning. \n\nIn summary, we do believe that besides the theoretical interest of our work, which has been acknowledged by all reviewers, there is also a potential impact in deep learning applications for invertible models. \n\nWe thank the reviewer once again for the important questions and remarks, we believe that the added discussion and results of the new bijective i-RevNet and ResNet baseline substantially improve the paper. We have also incorporated suggested formatting improvements into the manuscript.\n\n[1] Mahendran, Aravindh, and Andrea Vedaldi. 
\"Understanding deep image representations by inverting them.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.\n[2] Dosovitskiy, Alexey, and Thomas Brox. \"Inverting visual representations with convolutional networks.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.\nAPA\t\n[3] Selvaraju, Ramprasaath R., et al. \"Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization.\" arXiv preprint arXiv:1610.02391 (2016).\n[4] Wojna, Zbigniew, et al. \"The Devil is in the Decoder.\" arXiv preprint arXiv:1707.05847 (2017). \n", "We thank the reviewer very much for this encouraging review and the comments on our paper.\nWe would also like to thank the reviewer for acknowledging the presented results being impressive and potentially of great benefit.\n\nInspired by the reviewer's remark on model size and the quest for an optimal parameter budget, we have added another model to the paper that has a similar number of parameters as the RevNet and ResNet baselines. This way we show that an increased number of parameters is not necessary to obtain the invertible architecture. This newly added i-RevNet replaces the initial injective mapping with a bijective mapping. In consequence, the new model is slightly different in architecture from the baselines, as it keeps the input dimensionality constant. We have replaced the analysis of the injective i-RevNet by an analysis of the bijective i-RevNet throughout the whole paper.\n\nFurthermore, to show that the observed separation and contraction occur independently of invertibility, we have added a non-invertible ResNet baseline to the model analysis in section 5.1. We have also added training plots of ResNets compared to i-RevNets. The results show a progressive separation and contraction in invertible and non-invertible models and very similar training behaviour. \n\nOur main conclusions remain the same, while the new results substantiate their generality.\n\nWe thank the reviewer once again for the insightful comments thanks to which we were able to further improve the paper.", "Dear Reviewers,\n\nWe sincerely thank you for your work and effort in reviewing the manuscript. Your comments and remarks have been very helpful to improve the quality of the paper. \n\nWe have made two major additions to the manuscript that substantiate the generality of our claims.\n\n1) A bijective i-RevNet with 6 times fewer parameters, that shows a reduction of parameters can even improve the invertibility property.\n2) An Imagenet-trained ResNet to show our findings apply to non-invertible state-of-the-art models as well.\n\nPlease find detailed answers below, and thank you once again!" ]
[ -1, -1, -1, 8, 9, 8, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "HyABCoKVz", "iclr_2018_HJsjkMb0Z", "HyW2XBf7z", "iclr_2018_HJsjkMb0Z", "iclr_2018_HJsjkMb0Z", "iclr_2018_HJsjkMb0Z", "H1ia7wJ4f", "iclr_2018_HJsjkMb0Z", "HkxP0bceM", "HJRdhx5eM", "rJxrJe9eG", "iclr_2018_HJsjkMb0Z" ]
iclr_2018_BkUHlMZ0b
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks. Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that (i) CLEVER is aligned with the robustness indication measured by the ℓ2 and ℓ∞ norms of adversarial examples from powerful attacks, and (ii) defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores. To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifiers.
accepted-poster-papers
This paper proposes a new metric to evaluate the robustness of neural networks to adversarial attacks. This metric comes with theoretical guarantees and can be efficiently computed on large-scale neural networks. Reviewers were generally positive about the strengths of the paper, especially after major revisions during the rebuttal process. The AC believes this paper will contribute to the growing body of literature in robust training of neural networks.
train
[ "rk8Ucb5gf", "B1ZlEVXyf", "BJiW7IkZM", "BJdX_3LGz", "H1i4bwD7f", "Hyu463UzG", "SyK9an8Gz", "Hkt4hhIMM", "Hk0A_3IGG", "ry18wnIMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "The work claims a measure of robustness of networks that is attack-agnostic. Robustness measure is turned into the problem of finding a local Lipschitz constant which is given by the maximum of the norm of the gradient of the associated function. That quantity is then estimated by sampling from the domain of maximization and observing the maximum value of the norm out of those samples. Such a maximum process is then described by the reverse Weibull distribution which is used in the estimation.\n\nThe paper closely follows Hein and Andriushchenko (2017). There is a slight modification that enlarges the class of functions for which the theory is applicable (Lemma 3.3). As far as I know, the contribution of the work starts in Section 4 where the authors show how to practically estimate the maximum process through back-prop where mini-batching helps increase the number of samples. This is a rather simple idea that is shown to be effective in Figure 3. The following section (the part starting from 5.3) presents the key to the success of the proposed measure. \n\nThis is an important problem and the paper attempts to tackle it in a computationally efficient way. The fact that the norms of attacks are slightly above the proposed score is promising, however, there is always the risk of finding a lower bound that is too small (zeros and large gaps in Figure 3). It would be nice to be able to show that one can find corresponding attacks that are not too far away from the proposed score.\n\nFinally, a minor point: Definition 3.1 has a confusing notation, f is a K-valued vector throughout the paper but it also denotes the number that represents the prediction in Definition 3.1. I believe this is just a typo.\n\nEdit: Thanks for the fixes and clarification of essential parts in the paper.\n", "Summary\n========\n\nThe authors present CLEVER, an algorithm which consists in evaluating the (local) Lipschitz constant of a trained network around a data point. This is used to compute a lower-bound on the minimal perturbation of the data point needed to fool the network.\n\nThe method proposed in the paper already exists for classical function, they only transpose it to neural networks. Moreover, the lower bound comes from basic results in the analysis of Lipschitz continuous functions.\n\n\nClarity\n=====\n\nThe paper is clear and well-written.\n\n\nOriginality\n=========\n\nThis idea is not new: if we search for \"Lipschitz constant estimation\" in google scholar, we get for example\nWood, G. R., and B. P. Zhang. \"Estimation of the Lipschitz constant of a function.\" (1996)\nwhich presents a similar algorithm (i.e., estimation of the maximum slope with reverse Weibull).\n\n\nTechnical quality\n==============\n\nThe main theoretical result in the paper is the analysis of the lower-bound on \\delta, the smallest perturbation to apply on\na data point to fool the network. This result is obtained almost directly by writing the bound on Lipschitz-continuous function\n | f(y)-f(x) | < L || y-x ||\nwhere x = x_0 and y = x_0 + \\delta.\n\nComments:\n- Lemma 3.1: why citing Paulavicius and Zilinskas for the definition of Lipschitz continuity? Moreover, a Lipschitz-continuous function does not need to be differentiable at all (e.g. |x| is Lipschitz with constant 1 but sharp at x=0). Indeed, this constant can be easier obtained if the gradient exists, but this is not a requirement.\n\n- (Flaw?) Theorem 3.2 : This theorem works for fixed target-class since g = f_c - f_j for fixed g. 
However, once g = min_j f_c - f_j, this theorem is not clear with the constant L_q. Indeed, the function g should be \ng(x) = min_{k \\neq c} f_c(x) - f_k(x).\nThus its Lipschitz constant is different, potentially equal to\nL_q = max_{k} L_q^k, \nwhere L_q^k is the Lipschitz constant of f_c-f_k. If the theorem remains unchanged after this modification, you should clarify the proof. Otherwise, the theorem will work with the maximum over all Lipschitz constants but the theoretical result will be weakened.\n\n- Theorem 4.1: I do not see the purpose of this result in this paper. This should be better motivated.\n\n\nNumerical experiments\n====================\n\nGlobally, the numerical experiments are in favor of the presented method. The authors should also add information about the time it takes to compute the bound, the evolution of the bound as a function of the number of samples and the distribution of the relative gap between the lower-bound and the best adversarial example.\n\nMoreover, the numerical experiments appear to be carried out in the context of targeted attacks. To show the real effectiveness of the approach, the authors should also show the effectiveness of the lower-bound in the context of non-targeted attacks.\n\n\n#######################################################\n\nPost-rebuttal review\n---------------------------\n\nGiven the details the authors provided in response to my review, I decided to adjust my score. The method is simple and proves to be extremely effective/accurate in practice.\n\nDetailed answers:\n\n1) Indeed, I was not aware that the paper only focuses on one-dimensional functions. However, they still work with fewer assumptions, i.e., without requiring differentiability. I was pointing out the similarities between their approach and yours: the two algorithms (CLEVER and Slope) are basically the same, and using a limit you can go from \"slope\" to \"gradient norm\".\nIn any case, I have read the revision and the additional numerical experiment to compare Clever with their method is a good point.\n\n2) \" Overall, our analysis is simple and more intuitive, and we further facilitate numerical calculation of the bound by applying the extreme value theory in this work. \"\nThis is right. I am just surprised it has not been done before, since it requires only a few lines of derivation. I searched a bit but it is not possible to find any kind of similar results. Moreover, this leads to good performance, so there is no need to have something more complex.\n\n3) \"The usual Lipschitz continuity is defined in terms of L2 norm and the extension to an arbitrary Lp norm is not straightforward\"\nIndeed, people usually use the Lipschitz continuity using the L2 norm, but the original definition is wider.\nQuickly, if you have a differentiable, scalar function from a space E -> R, then the gradient is a function from space E to E*, the dual of the space E.\nLet || . || be the norm of space E. Then, || . ||* is the dual norm of ||.||, and also the norm of E*.\nIn that case, Lipschitz continuity writes\nf(x)-f(y) <= L || x-y ||, with L >= max_{x in E} || f'(x) ||*\nIn the case where || . 
||* is an \ell_q norm, with 1/p + 1/q = 1.\n\nIf you are interested, there is a clear and concise explanation in the introduction of this paper: Accelerating the cubic regularization of Newton’s method on convex problems, by Yurii Nesterov.\n\nI have no additional remarks for 4) -> 9), since everything is fixed in the new version of the paper.\n\n", "In this work, the objective is to analyze the robustness of a neural network to any sort of attack.\n\nThis is measured by naturally linking the robustness of the network to the local Lipschitz properties of the network function. This approach is quite standard in learning theory; I am not aware of how original this point of view is within the deep learning community.\n\nThis is estimated by obtaining values of the norm of the gradient (also naturally linked to the Lipschitz properties of the function) by backpropagation. This is again a natural idea.", "\n1. Regarding the comment of using local Lipschitz properties of the network function:\n\nWe thank the reviewer for pointing this out. We note that this paper is the *first* work to derive the lower bound of minimum distortion using a (local) cross-Lipschitz continuity assumption. For continuously differentiable classification functions, we show that with the Lipschitz continuity assumption, our result is consistent with Hein & Andriushchenko (2017), who used the Mean Value Theorem and Hölder’s inequality to obtain the same lower bound. In addition, we show in Lemma 3.3 that our approach can easily extend to non-differentiable functions (e.g. ReLU activations), whereas the analysis in Hein & Andriushchenko (2017) is restricted to continuously differentiable functions.\n\n2. Regarding the comment of using the norm of the gradient by backpropagation to estimate the Lipschitz constant: \n\nWe note that there exist other estimation methods, e.g. Wood & Zhang (1996) as mentioned by AnonReviewer 2, which calculates the slope between pairs of sample points instead of sampling the norm of the gradient directly as in this paper. However, as shown in Table 3 and Figure 4 in p.10 of the revised paper, their approach (denoted as SLOPE) performs poorly on estimating the Lipschitz constant for high-dimensional functions like neural networks, and is thus not suitable for estimating minimum adversarial distortions. \n", "Dear AnonReviewer 2,\n\nFollowing your valuable comments and suggestions, we have addressed the points of confusion in our theory, made comparisons with the additional reference you mentioned and added new numerical results. We have listed all the changes we made in the general response to help you quickly find the added materials. Thanks to your insightful comments, we believe our paper has been greatly improved after addressing all the concerns raised. For responses to any particular questions, please kindly read the corresponding section of our rebuttal. \n\nWe would greatly appreciate it if you could provide new comments on the revised version of our paper. Thank you!\n\nSincerely,\nAuthors of Paper 767", "\n4. Regarding the comment: “Moreover, a Lipschitz-continuous function does not need to be differentiable at all (e.g. |x| is Lipschitz with constant 1 but sharp at x=0). Indeed, this constant can be more easily obtained if the gradient exists, but this is not a requirement”: \n\nWe thank the reviewer for this comment. 
Indeed, as we show in Lemma 3.3, we can easily extend our analysis using the Lipschitz assumption to obtain the robustness guarantee for non-differentiable functions with a finite number of non-differentiable points (like networks with ReLU activations). \n\n5. Regarding the comment: “(Flaw?) Theorem 3.2: This theorem works for fixed target-class since g = f_c - f_j for fixed j. However, once g = min_j f_c - f_j, this theorem is not clear with the constant L_q. Indeed, the function g should be \ng(x) = min_{k \\neq c} f_c(x) - f_k(x).\nThus its Lipschitz constant is different, potentially equal to\nL_q = max_{k} L_q^k, \nwhere L_q^k is the Lipschitz constant of f_c-f_k. If the theorem remains unchanged after this modification, you should clarify the proof. Otherwise, the theorem will work with the maximum over all Lipschitz constants but the theoretical result will be weakened.”: \n\nWe thank the reviewer for pointing out this potential ambiguity. There was an abuse of notation in Theorem 3.2, where L_q is the Lipschitz constant for the function f_c-f_j and is therefore dependent on the index j. We have revised the notation accordingly in the revised paper: we use L_q^j to denote the Lipschitz constant of the function (f_c - f_j), making the dependence on the index j explicit. For the untargeted attack that the reviewer is referring to, we note that Theorem 3.2 is indeed for un-targeted attacks, as it takes the min over all the targeted attack bounds. We have made it clearer in the revised paper by adding the note “Formal guarantee on lower bound for untargeted attack” to Theorem 3.2. In comparison, we also added Corollary 3.2.2 to give the formal guarantee for *targeted* attacks. The algorithms for computing CLEVER for targeted and untargeted attacks are summarized in Algorithms 1 and 2 in Section 4.2. We note that we also included additional experiments for untargeted attacks in Table 2 in Section 5.3. \n\n6. Regarding the comment: “Theorem 4.1: I do not see the purpose of this result in this paper. This should be better motivated.”: \n\nWe thank the reviewer for pointing out this important observation. In the revised paper, we give a clearer explanation at the beginning of Section 4.1 of why we derive the CDF of $||\\nabla g(x)||_q$. The reason is that in this work, we propose to use a new sampling method and extreme value theory to estimate the local Lipschitz constant; extreme value theory requires samples from the distribution of $||\\nabla g(x)||_q$. A reader may wonder what this distribution looks like. As an example, we show that we can derive the CDF of $||\\nabla g(x)||_q$ for a 2-layer neural network with ReLU activation in Theorem 6.1 in Appendix D. \n", "\nWe thank the reviewer for the positive comments on the clarity of our paper. However, we believe there might be some misunderstanding on the originality and technical quality of our work. Please allow us to clarify below. \n\n1. Regarding the comment: “This idea is not new: if we search for \"Lipschitz constant estimation\" in Google Scholar, we get for example Wood, G. R., and B. P. Zhang. \"Estimation of the Lipschitz constant of a function.\" (1996) which presents a similar algorithm (i.e., estimation of the maximum slope with reverse Weibull)”: \n\nWe thank the reviewer for pointing out this very early work on local Lipschitz constant estimation. 
We note that their sampling methodology is entirely different from our approach, as they estimate the Lipschitz constant by calculating the “slope” between pairs of sample points whereas in this paper we sample the norm of the gradient directly. As “slope” is an approximation of the gradient norm, it is conceivable (and also verified by our experiments in Section 5.3, Table 3 and Figure 4) that the estimation will be less accurate than our method of directly computing the max norm of the gradient. In addition, they only justified Lipschitz constant estimation for a *one-dimensional* function whereas our classifier function is very high-dimensional (d = 784 for MNIST, 3072 for CIFAR, 150,528 for ImageNet). In fact, how to accurately estimate the Lipschitz constant for a high-dimensional function is still an open question. In this paper, we proposed to estimate the Lipschitz constant by directly computing the max norm of the gradient for the samples and using extreme value theory. As we show in Table 3 and Figure 4 in p.10 of the revised paper, Wood and Zhang’s (1996) approach (denoted as SLOPE) performs poorly on estimating the Lipschitz constant for high-dimensional functions (i.e., neural net classifiers) and hence their method is not suitable for evaluating adversarial perturbations in neural networks. \n\n2. Regarding the comment: “The main theoretical result in the paper is the analysis of the lower-bound on \delta, the smallest perturbation to apply on a data point to fool the network. This result is obtained almost directly by writing the bound for a Lipschitz-continuous function”: \n\nWe thank the reviewer for this comment. Although our analysis is intuitive and straightforward, to the best of our knowledge, this is the *first* work that directly uses Lipschitz continuity to prove such a perturbation analysis. In comparison, Hein & Andriushchenko (2017) implicitly assumed Lipschitz continuity but used the Mean Value Theorem and Hölder’s inequality in their analysis, through which it is not straightforward to achieve the same result, as also suggested by the reviewer. In addition to the difference in the derivation of the bound, we would like to emphasize that our analysis can be easily extended to non-differentiable functions with a finite number of non-differentiable points, whereas Hein & Andriushchenko’s analysis is restricted to continuously differentiable functions. Overall, our analysis is simple and more intuitive, and we further facilitate numerical calculation of the bound by applying the extreme value theory in this work. \n\n3. Regarding the comment: “Lemma 3.1: why cite Paulavicius and Zilinskas for the definition of Lipschitz continuity?”: \n\nLemma 3.1 is not just the definition of Lipschitz continuity; it also gives the relationship between the (local) Lipschitz constant in a general Lp (p>=1) norm and the dual norm of the gradient. The usual Lipschitz continuity is defined in terms of L2 norm and the extension to an arbitrary Lp norm is not straightforward; thus we refer readers to the Paulavicius and Zilinskas paper. \n", "\n7. Regarding the comment: “Globally, the numerical experiments are in favor of the presented method. The authors should also add information about the time it takes to compute the bound, the evolution of the bound as a function of the number of samples and the distribution of the relative gap between the lower-bound and the best adversarial example.”: \n\nWe thank the reviewer for this suggestion. 
Following your suggestion, we have included additional experimental results in Section 5.4 - Time vs. Estimation Accuracy. In Figure 7, we vary the number of samples (N_b=50,100,250,500) and compute the L2 CLEVER scores for three large ImageNet models, Inception-v3, ResNet-50 and MobileNet. We observe that 50 or 100 samples are usually sufficient to obtain a reasonably accurate robustness estimation. On a single GTX 1080 Ti GPU, the cost of 1 sample (with N_s = 1024 in Algorithm 1) is measured as 1.2 s for MobileNet, 5.5 s for ResNet-50 and 7.3 s for Inception-v3, thus the computational cost of CLEVER is feasible for state-of-the-art large-scale deep neural networks. Additional figures for MNIST and CIFAR datasets are given in Figure 9 in Appendix E2. We also added Figure 5 to show the empirical CDF of the gap between the CLEVER score and the L2 distortion found by CW attacks (the best attack) for 3 ImageNet networks with random targets. It shows that at least 80% of the images have small gaps, demonstrating the effectiveness of our approach.\n\n8. Regarding the comment: “Moreover, the numerical experiments appear to be carried out in the context of targeted attacks. To show the real effectiveness of the approach, the authors should also show the effectiveness of the lower-bound in the context of non-targeted attacks.”: \n\nWe thank the reviewer for this important suggestion. Following your suggestion, we have added experiments on untargeted attacks in Section 5.3. The results comparing the average untargeted CLEVER score and the distortions found by CW and I-FGSM attacks are summarized in Table 2. We show that the CW and I-FGSM attack results agree with the robustness predicted by the CLEVER score, demonstrating the effectiveness of our approach. \n\n9. Finally, we thank again the reviewer for the positive comments on the clarity of our paper and we hope our answers above were able to address all the comments regarding the originality and technical contributions of our paper. As suggested by the reviewer, in the current version of the paper, we have included three sets of new experimental results regarding \n(1) untargeted attacks (Section 5.3, Table 2) \n(2) comparison to the slope sampling method of the Wood & Zhang (1996) paper (Section 5.3, Table 3, Figure 4)\n(3) more numerical results of previous experiments (Section 5.3, Figure 5, Figure 7 and Figure 9)\nto show the advantage of our proposed method. \n\nAs we highly value all reviewers’ inputs, we would like to use this opportunity to ask for your comments on the updated version during the author rebuttal stage. We believe we have carefully addressed all of your concerns, and we sincerely hope you could reconsider your decision.\n", "\n1. Regarding the comment: “The fact that the norms of attacks are slightly above the proposed score is promising; however, there is always the risk of finding a lower bound that is too small (zeros and large gaps in Figure 3). It would be nice to be able to show that one can find corresponding attacks that are not too far away from the proposed score”: \n\nWe thank the reviewer for bringing this issue to our attention. Indeed, the zero and small lower bounds were caused by the unstable MLE solver in scipy. We have fixed this issue by renormalizing samples before MLE and updated the results in Table 4 and Figure 6 in p.11 of the revised paper. In Figure 5, we show the empirical CDF of the gaps for 100 ImageNet images, and find that most gaps are indeed small. 
We also report the percentage of images where the p-value in the K-S test is greater than 0.05 in Figure 3 (p.8) and Table 5 (p.16). The numbers are all close to 100%, justifying the hypothesis that the sampled maximum gradient norms follow the reverse Weibull distribution.\n\n2. Regarding the comment: “Finally, a minor point: Definition 3.1 has a confusing notation: f is a K-valued vector throughout the paper but it also denotes the number that represents the prediction in Definition 3.1. I believe this is just a typo”: \n\nWe thank the reviewer for pointing out this typo. We have fixed the typos in Definition 3.1 accordingly. \n", "We thank AnonReviewer 1 and AnonReviewer 3 for the constructive comments and overall positive assessments. We also thank AnonReviewer 2 for the insightful comments and valuable suggestions. We detail in our responses below how we have addressed these comments, replying to each reviewer in the comments. Besides, we believe there might be some misunderstanding on the originality and technical quality of our paper, which will also be clarified.\n\nTo improve the quality of this paper, we have added more theoretical results, figures, and experimental results to our revised version (uploaded). We summarize these changes as follows:\n\n* Summary of the changes:\nSection 3:\n1. We have made it clearer at the beginning of Section 3 that our robustness guarantee in this paper is more general and more intuitive than Hein & Andriushchenko (2017) by using a Lipschitz continuity assumption. For continuously differentiable functions, our result is consistent with Hein & Andriushchenko (2017); our analysis can easily extend to non-differentiable functions (e.g. ReLU activations) whereas the analysis in Hein & Andriushchenko (2017) is restricted to continuously differentiable functions. \n\n2. We have added Corollary 3.2.2 as a formal guarantee for targeted attacks. We also added a note that Theorem 3.2 and Corollary 3.2.1 are formal guarantees for untargeted attacks. \n\n3. We have changed the notation of the Lipschitz constant of the function (f_c-f_j) from L_q to L_q^j to make it clearer that it is dependent on the index j. \n\nSection 4:\n1. We have added a paragraph before Section 4.1 to comment on the difference between our approach and (Wood & Zhang, 1996) as mentioned by AnonReviewer 2. We note that the sampling methodology is entirely different and their approach (denoted as SLOPE) works poorly on estimating the Lipschitz constant for high-dimensional functions like neural networks, as demonstrated in our Table 3 and Figure 4 in p.10.\n\n2. In Section 4.1, we gave a clearer explanation of why we want to derive the CDF of $||\\nabla g(x)||_q$. The reason is that in this paper, we propose to sample the maximum of $||\\nabla g(x)||_q$ to estimate the local Lipschitz constant via extreme value theory. A reader may wonder what the distribution of $||\\nabla g(x)||_q$ looks like. Thus, as an example, we show that we can derive the CDF of $||\\nabla g(x)||_q$ for a 2-layer neural network with ReLU activation in Theorem 6.1 in Appendix D. \n\n3. In Section 4.2, we added Algorithm 2 (clever-u) to illustrate how to compute the CLEVER score for untargeted attacks. The original Algorithm 1 (clever-t) is for targeted attacks. \n\nSection 5: \n1. In Section 5.2, we reported the percentage of images where the p-value in the Kolmogorov-Smirnov test is greater than 0.05 in Figure 3 and Table 5 (in Appendix E1). 
The numbers are all close to 100%, justifying the hypothesis that the sampled maximum gradient norms follow the reverse Weibull distribution.\n\n2. In Section 5.3, we added untargeted attack results in Table 2. We show that the CW and I-FGSM attack results agree with the robustness predicted by the CLEVER score.\n\n3. In Section 5.3, we also implemented the method in Wood and Zhang (1996) to estimate the Lipschitz constant and calculated the average L2 and L-infinity distortions for targeted attacks in Table 3 (denoted as SLOPE) and Figure 4. We show that their method (SLOPE) gives poor estimates of the distortions for high-dimensional functions like neural networks. \n\n4. In Section 5.3, we also fixed an unstable MLE estimation issue in scipy by renormalizing samples before MLE and improved the results in Table 4 and Figure 6. \n\n5. In Section 5.4, we reported the runtime for ImageNet networks - on a single GTX 1080 Ti GPU, the cost of 1 sample (with Ns = 1024 in Algorithm 1) is measured as 1.2 s for MobileNet, 5.5 s for ResNet-50 and 7.3 s for Inception-v3. Thus the computational cost of CLEVER is feasible for state-of-the-art large-scale deep neural networks. We also discussed how the number of samples affects estimation accuracy in Figure 7 for 3 ImageNet models and Figure 9 for MNIST and CIFAR in Appendix E2 - we observe that 50 or 100 samples are usually sufficient to obtain a reasonably accurate robustness estimation. The results indicate that our method is practical for large networks.\n" ]
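The lower bound that Theorem 3.2 and rebuttal items 1 and 5 of this record keep returning to can be stated compactly. The following LaTeX is a reconstruction from the abstract and the discussion above, using the notation of the exchange; it is not quoted from the paper, and the exact statement there may differ in its constants.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $g_j(x) = f_c(x) - f_j(x)$ and let $L_q^j$ be a local Lipschitz
constant of $g_j$ over the ball $B_p(x_0, R)$, with $1/p + 1/q = 1$.
Then every perturbation $\delta$ with
\begin{equation*}
  \|\delta\|_p \le \min\Big\{\, \min_{j \neq c}
    \frac{f_c(x_0) - f_j(x_0)}{L_q^j},\; R \,\Big\}
\end{equation*}
leaves the predicted class unchanged, because
$g_j(x_0 + \delta) \ge g_j(x_0) - L_q^j \|\delta\|_p \ge 0$
for every $j \neq c$.
\end{document}
```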
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkUHlMZ0b", "iclr_2018_BkUHlMZ0b", "iclr_2018_BkUHlMZ0b", "BJiW7IkZM", "B1ZlEVXyf", "B1ZlEVXyf", "B1ZlEVXyf", "B1ZlEVXyf", "rk8Ucb5gf", "iclr_2018_BkUHlMZ0b" ]
iclr_2018_r1vuQG-CW
HexaConv
The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible. Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models.
accepted-poster-papers
This paper implements Group convolutions on inputs defined over hexagonal lattices instead of square lattices, using the roto-translation group. The internal symmetries of the hexagonal grid allow for a larger discrete rotation group than when using square pixels, leading to improved performance on CIFAR and aerial datasets. The paper is well-written and the reviewers were positive about its results. That said, the AC wonders what is the main contribution of this work relative to existing related works (such as Group Equivariant CNNs, Cohen & Welling'16, or steerable CNNs, Cohen & Welling'17). While it is true that extending GCNNs to hexagonal lattices is a non-trivial implementation task, the contribution lacks significance on the mathematical/learning fronts, which are perhaps the ones the ICLR audience will care more about. Besides, the numerical results, while improved versus their square lattice counterparts, are not a major improvement over the state-of-the-art. In summary, the AC believes this is a borderline paper. The unanimous favorable reviews tilt the decision towards acceptance.
train
[ "SysTGDdxf", "rJnr8zdgM", "Syvd8Qcgf", "BkDvOSdQM", "Sy1xuHOXM", "SJrJPrdQG", "Byo0SHu7z", "HybbMN0Zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "\nThe authors took my comments nicely into account in their revision, and their answers are convincing. I increase my rating from 5 to 7. The authors could also integrate their discussion about their results on CIFAR in the paper, I think it would help readers understand better the advantage of the contribution.\n\n----\n\nThis paper is based on the theory of group equivariant CNNs (G-CNNs), proposed by Cohen and Welling ICML'16.\n\nRegular convolutions are translation-equivariant, meaning that if an image is translated, its convolution by any filter is also translated. They are however not rotation-invariant for example. G-CNN introduces G-convolutions, which are equivariant to a given transformation group G.\n\nThis paper proposes an efficient implementation of G-convolutions for 6-fold rotations (rotations of multiple of 60 degrees), using a hexagonal lattice. The approach is evaluated on CIFAR-10 and AID, a dataset of aerial scene classification. The approach outperforms G-convolutions implemented on a squared lattice, which allows only 4-fold rotations on AID by a short margin. On CIFAR-10, the difference does not seem significative (according to Tables 1 and 2).\nI guess this can be explained by the fact that rotation equivariance makes sense for aerial images, where the scene is mostly fronto-parallel, but less for CIFAR (especially in the upper layers), which exhibits 3D objects.\n\nI like the general approach of explicitly putting desired equivariance in the convolutional networks. Using a hexagonal lattice is elegant, even if it is not new in computer vision (as written in the paper). However, as the transformation group is limited to rotations, this is interesting in practice mostly for fronto-parallel scenes, as the experiences seem to show. It is not clear how the method can be extended to other groups than 2D rotations.\n\nMoreover, I feel like the paper sometimes tries to mask the fact that the proposed method is limited to rotations. It is admittedly clearly stated in the abstract and introduction, but much less in the rest of the paper.\n\nThe second paragraph of Section 5.1 is difficult to keep in a paper. It says that \"From a qualitative inspection of these hexagonal interpolations we conclude that no information is lost during the sampling procedure.\" \"No information is lost\" is a strong statement from a qualitative inspection, especially of a hexagonal image. This statement should probably be removed. One way to evaluate the information lost could be to iterate interpolation between hexagonal and squared lattices to see if the image starts degrading at some point.\n\n\n\n", "The paper proposes G-HexaConv, a framework extending planar and group convolutions for hexagonal lattices. Original Group-CNNs (G-CNNs) implemented on squared lattices were shown to be invariant to translations and rotations by multiples of 90 degrees. With the hexagonal lattices defined in this paper, this invariance can be extended to rotations by multiples of 60 degrees. This shows small improvements in the CIFAR-10 performances, but larger margins in an Aerial Image Dataset. \n\nDefining hexagonal pixel configurations in convolutional networks requires both resampling input images (under squared lattices) and reformulate image indexing. All these steps are very well explained in the paper, combining mathematical rigor and clarifications. \n\nAll this makes me believe the paper is worth being accepted at ICLR conference. 
\n\nSome issues that would require further discussion/clarification: \n- G-HexaConv's critical points are memory and computational complexity. The authors claim to have an efficient implementation but the paper lacks a proper quantitative evaluation. A comparison of memory complexity and computational time between classic CNNs and G-HexaConv should be provided.\n- I encourage the authors to open the source code for reproducibility and comparison with future transformationally equivariant representations. \n- Also, in Fig. 1, I would recommend clarifying that image ‘f’ corresponds to a 2D view of a hexagonal image pixelation. My first impression was a rectangular pixelation seen from a perspective view.\n", "The paper presents an approach to efficiently implement planar and group convolutions over hexagonal lattices to leverage the better accuracy of these operations due to reduced anisotropy. They show that convolutional neural networks thus built lead to better performance - reduced inductive bias - for the same parameter budget.\n\nG-CNNs were introduced by Cohen and Welling in ICML, 2016. They proposed DNN layers that implemented equivariance to symmetry groups. They showed that group equivariant networks can lead to more effective weight sharing and hence more efficient networks, as evinced by better performance on CIFAR10 & CIFAR10+ for the same parameter budget. This paper shows that G-equivariance implemented on hexagonal lattices can lead to even more efficient networks. \n\nThe benefits of using hexagonal lattices over rectangular lattices are well known in signal processing as well as in computer vision. For example, see \n\nGolay M. Hexagonal parallel pattern transformation. IEEE Transactions on Computers 1969. 18(8): p. 733-740.\n\nStaunton R. The design of hexagonal sampling structures for image digitization and their use with local operators. Image and Vision Computing 1989. 7(3): p. 162-166. \n\nL. Middleton and J. Sivaswamy, Hexagonal Image Processing, Springer Verlag, London, 2005.\n\nThe originality of the paper lies in the practical and efficient implementation of G-Conv layers. Group-equivariant DNNs could lead to more robust, efficient and (arguably) better performing neural networks.\n\nPros\n\n- A good paper that systematically pushes the state of the art towards the design of invariant, efficient and better performing DNNs with G-equivariant representations.\n\n- It leverages existing theory in a variety of areas - signal & image processing and machine learning - to design better DNNs.\n\n - Experimental evaluation suffices for a proof-of-concept validation of the presented ideas. \n\n \nCons\n\n- The authors should relate the paper better to existing works in the signal processing and vision literature.\n\n- The results are on simple benchmarks like CIFAR-10. It is likely, but not immediately apparent, that the benefits scale to more complex problems.\n\n- Clarity could be improved in a few places\n\n: Since * is used for a standard convolution operator, it might be useful to use *_g as a G-convolution operator.\n\n: Strictly speaking, for translation equivariance, the shift should be cyclic etc.\n\n: Spelling mistakes - authors should run a spellchecker.\n", "Dear commenter, \n\nThank you for your interest in our paper. \n\nAlthough hexagonal grids have been used in signal processing for some time, our work is focused on the implementation of the group convolution for 6-fold rotational groups p6 and p6m. 
Thus, unlike other methods, our approach is able to exploit the symmetries of the hexagonal grid to improve statistical efficiency by parameter sharing. We have shown that our method convincingly beats a solid baseline on CIFAR, and outperforms the transfer learning baseline on AID. Our method is not related to adaptive, deformable, or separable convolution in either approach or intent.\n", "Dear reviewer,\n\nWe thank you for your comments and suggestions.\n\nOn hexagonal literature: \nWe agree that it is important to recognize existing work on hexagonal signal processing, and have added references by Petersen, Hartman and Middleton in the updated paper. \n\nOn benefits to scaling:\nWe agree that CIFAR-10 is a relatively simple benchmark. In our experiments we do show that an identical network architecture, where conventional conv layers are replaced by hexagonal g-conv layers, results in consistent improvements on two distinct datasets. Furthermore, we plan to release our codebase, which can help further research to scale these methods to larger problems.\n\nOn the group convolution operator:\nWe think it is a very good suggestion to change group convolution operators from * to *_g, to clarify what type of convolution is used. We changed the relevant operators in the updated paper.\n\nOn exact translation equivariance in CNNs: \nWe agree with the reviewer that in order for equivariance to hold exactly, either:\nshifts should be cyclic, or\none should use “valid”-mode convolutions and consider the input image as a function defined on all of Z^2, where values outside of the original image are zero.\nIn practice, we use “same” convolution instead of “valid” convolution, because the latter would decrease the size of the feature maps with each layer. Thus, a typical convolutional network is not exactly translation equivariant. We have added a footnote that addresses this detail.\n\nOn spellchecking:\nWe have run a spellchecker and fixed the spelling mistakes. ", "Dear reviewer,\n\nThank you for your review.\n\nOn performance of G-HexaConvs:\nIn the experiments we show that performance consistently improves with increasing degrees of symmetry. We understand the concern of the reviewer that these differences are small for the CIFAR dataset. The results of the experiments were collected over 10 different runs with random parameter initializations. The experiments section of the paper has been updated to emphasize that the values are obtained by taking the average of 10 runs. To show the statistical significance of 6-fold rotational symmetries over 4-fold rotational symmetries, we have done a significance test on our data. We test p4 and p4m versus p6 and p6m (our method) in a pairwise t-test, and find it passes with p=0.036. \n\nAlso, it should not be undervalued that our method outperforms a transfer learning approach on AID that has been pretrained on ImageNet. Our method reduces classification error by 2% compared to networks that leverage only 4-fold symmetries. And our method improves on the error of a conventional network by more than 11%.\n\nOn extensions to other groups than 2D rotations:\nThe reviewer is right to observe that in fronto-parallel scenes, this method can leverage global symmetries in the picture. Nonetheless, our experiments on CIFAR-10 show that although the margin of the benefits is smaller, our method can leverage local symmetries on a smaller scale and improve performance. 
These findings agree with earlier experiments by Cohen and Welling, who used only 4-fold symmetries.\n\nOn masking limitations to the group of rotations:\nIt is not our intention to mask in any way that our method is limited to mirror and rotation transformations. Note that although the mathematical framework introduced by Cohen and Welling can be used for any group, in some cases, such as the case of 6-fold rotational symmetry, the concrete implementation is far from trivial. Our paper is focused on the various data structures and indexing schemes that are required for an efficient implementation of hexagonal G-convolutions. If the reviewer is not entirely satisfied after the updates we made to the paper, perhaps the reviewer can help us by pointing to specific locations that could be improved in this respect.\n\nOn the information loss conclusion by qualitative inspection: \nWe completely agree with the reviewer that this is not a precise claim. None of the conclusions of our paper depend on this claim. Moreover, classification performance does not degrade when using hex-images. The paragraph is rephrased in the updated paper.\n", "Dear reviewer,\n\nThank you for your support and comments.\n\nOn memory & computation complexity:\nIn our method, the memory and computational complexity scale as in the framework introduced by Cohen and Welling. Say n is the number of elements in a group (e.g. 6 rotations), and say we wish to keep the number of parameters fixed relative to a planar CNN. Then memory scales with sqrt(n) (e.g. ~2.5), and computational complexity scales with n. This is exactly the same cost as simply increasing the number of channels by ~2.5, which one would normally do only when the dataset was much larger.\n\nOn open source:\nTo facilitate the development of further research, in areas such as G-HexaConvs on other coordinate systems, we will release our source code on GitHub. This also addresses the second point that the reviewer raises.\n\nOn Figure 1:\nTo improve the clarity of Fig. 1, we modified the borders and size. In addition, the caption also describes what the image f is. We hope that this addresses the reviewer’s concerns regarding the figure.\n", "As the reviewers suggested, hex. shape kernels have been used for vision tasks for a long time. \n\nIn the following paper, hex. shape kernels were used to train CNNs for image classification and detection, using CIFAR-10/100 and ImageNet datasets:\n\nZ. Sun, M. Ozay, T. Okatani, Design of Kernels in Convolutional Neural Networks for Image Classification, ECCV 2016.\n\nThere are also works on adaptive convolution, which employ variations of adaptive shape convolution operations, for instance:\n\nS. Niklaus, L. Mai, F. Liu, Video Frame Interpolation via Adaptive Convolution, CVPR 2017.\n\nS. Niklaus, L. Mai, F. Liu, Video Frame Interpolation via Adaptive Separable Convolution, ICCV 2017.\n\nJ. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, Deformable Convolutional Networks, ICCV 2017.\n\n- What is the novelty of HexaConv compared to these previous works (i.e. hex. kernels and adaptive/deformable convolution)? A detailed comparison is required in order to establish the superiority of the proposed HexaConv.\n\n- The performance of HexaConv is lower than that of these state-of-the-art methods. Could you please provide a more detailed analysis, esp. using HexaConv on larger datasets of natural images, e.g. ImageNet?\n\n" ]
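A useful mental model for the hexagonal convolutions discussed in this record: in axial coordinates, a hexagon's 7-cell neighbourhood embeds into a 3x3 square patch with two opposite corners unused, so a planar hexagonal convolution can re-use an ordinary 2D convolution routine with those corner weights masked out, in line with the abstract's point about re-using existing convolution routines. The sketch below assumes one particular axial convention (the masked corner pair depends on that choice) and is an illustration, not the authors' implementation.

```python
# Planar hexagonal convolution emulated via a masked 3x3 kernel.
import numpy as np
from scipy.signal import convolve2d

# In axial coordinates (q, r), the six hex neighbours of the centre are
# (+-1, 0), (0, +-1), (1, -1) and (-1, 1): a 3x3 patch minus the
# (-1, -1) and (1, 1) corners. Rows index the r offset, columns the q offset.
HEX_MASK = np.array([[0., 1., 1.],
                     [1., 1., 1.],
                     [1., 1., 0.]])

def hex_conv2d(image, weights):
    """'same'-mode 2D convolution with the kernel restricted to hex support.

    convolve2d flips the kernel, but the hexagonal support is symmetric
    under 180-degree rotation, so the support is preserved.
    """
    assert weights.shape == (3, 3)
    return convolve2d(image, weights * HEX_MASK, mode="same")

rng = np.random.default_rng(1)
image = rng.random((8, 8))
weights = np.ones((3, 3)) / 7.0   # average over the 7 hex cells
out = hex_conv2d(image, weights)
print(out.shape)                  # (8, 8)
```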
[ 7, 7, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1vuQG-CW", "iclr_2018_r1vuQG-CW", "iclr_2018_r1vuQG-CW", "HybbMN0Zf", "Syvd8Qcgf", "SysTGDdxf", "rJnr8zdgM", "iclr_2018_r1vuQG-CW" ]
iclr_2018_rJzIBfZAb
Towards Deep Learning Models Resistant to Adversarial Attacks
Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
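The robust-optimization lens named in this abstract is conventionally written as a saddle-point problem. The LaTeX below spells out that standard formulation for readers of this record; it is the generic min-max statement, not an excerpt from the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
  \min_{\theta} \; \rho(\theta), \qquad
  \rho(\theta) = \mathbb{E}_{(x, y) \sim \mathcal{D}}
    \Big[ \max_{\delta \in \mathcal{S}} L(\theta, x + \delta, y) \Big],
\]
where $\mathcal{S}$ is the set of allowed perturbations, e.g.
$\mathcal{S} = \{ \delta : \|\delta\|_\infty \le \varepsilon \}$.
\end{document}
```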
accepted-poster-papers
This paper presents new results on adversarial training, using the framework of robust optimization. Its minimax nature allows for principled methods of both training and attacking neural networks. The reviewers were generally positive about its contributions, despite some concerns about 'overclaiming'. The AC recommends acceptance, and encourages the authors to also relate this work with the concurrent ICLR submission (https://openreview.net/forum?id=Hk6kPgZA-) which addresses the problem using a similar approach.
train
[ "rkdJB4SBG", "Hy0j8ecgz", "rkO53U_ez", "SyRt7SoxG", "BkTN7DaXz", "SyUsGvpmG", "HkRKZDTQM", "HJJe50sez", "rkj4yGsgf", "rJw7T03yG", "ryQ33hmkf", "rJsu3TGyG" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "We have been performing an analysis of the robustness of many of the papers submitted here. This paper provides a substantially stronger defense than many of the other submissions, and we were not able to meaningfully invalidate any of the claims made. Given our analysis so far, it looks like this is the strongest defense submitted to ICLR 2018.", "- The authors investigate a minimax formulation of deep network learning to increase their robustness, using projected gradient descent as the main adversary. The idea of formulating the threat model as the inner maximization problem is an old one. Many previous works on dealing with uncertain inputs in classification apply this minimax approach using robust optimization, e.g.: \n\nhttps://www2.eecs.berkeley.edu/Pubs/TechRpts/2003/CSD-03-1279.pdf\nhttp://www.jmlr.org/papers/volume13/ben-tal12a/ben-tal12a.pdf\n\nIn the case of convex uncertainty sets, many of these problems can be solved efficiently to a global minimum. Generalization bounds on the adversarial losses can also be proved. Generalizing this approach to non-convex neural network learning makes sense, even when it is hard to obtain any theoretical guarantees. \n\n- The main novelty is the use of projected gradient descent (PGD) as the adversary. From the experiments it seems training with PGD is very robust against a set of adversaries including fast gradient sign method (FGSM), and the method proposed in Carlini & Wagner (CW). Although the empirical results are promising, in my opinion they are not sufficient to support the bold claim that PGD is a 'universal' first order adversary (on p2, in the contribution list) and provides broad security guarantee (in the abstract). For example, other adversarial example generation methods such as DeepFool and Jacobian-based Saliency Map approach are missing from the comparison. Also it is not robust to generalize from two datasets MNIST and CIFAR alone. \n\n- Another potential issue with using projected gradient descent as adversary is the quality of the adversarial example generated. The authors show empirically that PGD finds adversarial examples with very similar loss values on multiple runs. But this does not exclude the possibility that PGD with different step sizes or line search procedure, or the use of randomization strategies such as annealing, can find better adversarial examples under the same threat model. This could make the robustness of the network rather dependent on the specific implementation of PGD for the inner maximization problem. \n\n- In Tables 3, 4, and 5 in the appendix, in most cases models trained with PGD are more robust than models trained with FGSM as adversary, modulo the phenomenon of label leakage when using FGSM as attack. However in the bottom right corner of Table 4, FGSM training seems to be more robust than PGD training against black box PGD attacks. This raises the question on whether PGD is truly 'universal' and provides broad security guarantees, once we add more first order attacks methods to the mix. \n\n", "This paper proposes to look at making neural networks resistant to adversarial loss through the framework of saddle-point problems. They show that, on MNIST, a PGD adversary fits this framework and allows the authors to train very robust models. They also show encouraging results for robust CIFAR-10 models, but with still much room for improvement. 
Finally, they suggest that PGD is an optimal first order adversary, and leads to optimal robustness against any first order attack.\n\nThis paper is well written, brings new ideas and performs interesting experiments, but its claims somewhat bother me, considering that e.g. your CIFAR-10 results are somewhat underwhelming. All you've really proven is that PGD on MNIST seems to be the ultimate adversary. You contrast this to the fact that the optimization is non-convex, but we know for a fact that MNIST is fairly simple in that regime; iirc a linear classifier gets something like 91% accuracy on MNIST. So my guess is that the optimization problem on MNIST is in fact pretty convex and mostly respects the assumptions of Danskin's theorem, but not so much for CIFAR-10 (maybe even less so for e.g. ImageNet, which is what Kurakin et al. seem to find).\n\nConsidering your CIFAR-10 results, I don't think anyone should \"suggest that secure neural networks are within reach\", because 1) there is still room for improvement, and 2) it's a safe bet that someone will always just come up with a better attack than whatever defense we have now. It has been this way in many disciplines (crypto, security) for centuries; I don't see why deep learning should be exempt. Simply saying \"we believe that our robust models are significant progress on the defense side\" was enough, because afaik you did improve on CIFAR-10's SOTA; don't overclaim. \nYou make these kinds of claims in a few other places in this paper; please be careful with that.\n\nThe contributions in your appendix are interesting. \nAppendix A somewhat confirms one of the postulates in Goodfellow et al. (2014): \"The direction of perturbation, rather than the specific point in space, matters most. Space is not full of pockets of adversarial examples that finely tile the reals like the rational numbers\".\nAppendices B and C are not extremely novel in my mind, but definitely add more evidence. \nAppendix E is quite nice since it gives an insight into what actually makes the model resistant to adversarial examples.\n\n\nRemarks:\n- The update for PGD should be using \\nabla_{x_t} L(\\theta,x_t,y), (rather than only \\nabla_x)?\n- In Table 2, attacking with 20-step PGD does better than 7-step. When you say \"other hyperparameter choices didn’t offer a significant decrease in accuracy\", does that include the number of steps? If not, why stop there? What happens for more steps? (or is it too computationally intensive?)\n- You only seem to consider adversarial examples created from your dataset + adv. noise. What about rubbish class examples? (e.g. rgb noise)\n", "This paper consolidates and builds on recent work on adversarial examples and adversarial training for image classification. Its contributions:\n\n - Making the connection between adversarial training and robust optimization more explicit.\n\n - Empirical evidence that:\n * Projected gradient descent (PGD) (as proposed by Kurakin et al. 
(2016)) reasonably approximates the optimal attack against deep convolutional neural networks\n * PGD finds better adversarial examples, and training with it yields more robust models, compared to FGSM \n\n - Additional empirical analysis:\n * Comparison of weights in robust and non-robust MNIST classifiers\n * Vulnerability of L_infty-robust models to L_2-bounded attacks\n\nThe evidence that PGD consistently finds good examples is fairly compelling -- when initialized from 10,000 random points near the example to be disguised, it usually finds examples of similar quality. The remaining variance that's present in those distributions shouldn't hurt learning much, as long as a significant fraction of the adversarial examples are close enough to optimal.\n\nGiven the consistent effectiveness of PGD, using PGD for adversarial training should yield models that are reliably robust (for a specific definition of robustness, such as bounded L_infinity norm). This is an improvement over purely heuristic approaches, which are often less robust than claimed.\n\nThe comparison to R+FGSM is interesting, and could be extended in a few small ways. What would R+FGSM look like with 10,000 restarts? The distribution should be much broader, which would further demonstrate how PGD works better on these models. Also, when generating adversarial examples for testing, how well would R+FGSM work if you took the best of 2,000 random restarts? This would match the number of gradient computations required by PGD with 100 steps and 20 restarts. Again, I expect that PGD would be better, but this would make that point clearer. I think this analysis would make the paper stronger, but I don't think it's required for acceptance, especially since R+FGSM itself is such a recent development.\n\nOne thing not discussed is the high computational cost: performing a 40-step optimization of each training example will be ~40 times slower than standard stochastic gradient descent. I suspect this is the reason why there are results on MNIST and CIFAR, but not ImageNet. It would be very helpful to add some discussion of this.\n\nThe title seems unnecessarily vague, since many papers have been written with the same goal -- make deep learning models resistant to adversarial attacks. (This comment does not affect my opinion about whether or not the paper should be accepted, and is merely a suggestion for the authors.)\n\nAlso, much of the paper's content is in the appendices. This reads like a journal article where the references were put in the middle. I don't know if that's fixable, given conference constraints.", "We thank the reviewer for providing feedback.\n \nRegarding the \"secure networks are within reach\" claim: We definitely agree that there is (large) room for improvement on CIFAR10. Our claim, however, comes from the fact that (to the best of our knowledge) the classifiers we trained were the first ones to robustly classify any non-trivial fraction of the test set. We indeed believe (and provide experimental evidence for it) that no attack will significantly decrease the accuracy of our classifier. We view our results as a baseline that shows that classification robustness is indeed achievable (against a well-defined class of adversaries at least). We agree that our claims ended up sounding too strong though. 
We will update our paper to tone them down.\n \nRegarding CIFAR10 results: The reviewer points out that the optimization landscape for MNIST is much simpler than that of CIFAR10 and that this would explain the difference in the performance of the resulting classifiers. We want to point out, however, that our performance on CIFAR10 is due to poor generalization and not the difficulty of the training problem itself. As can be seen in Figure 1b, we are able to train a perfectly robust classifier with 100% adversarial accuracy on the training set. This shows that the optimization landscape of the problem is still tractable with a PGD adversary.\n \nRegarding CIFAR10 attack parameters: We didn’t explore additional parameters due to the computational constraints at the time. Overall, we have observed that the number of PGD steps does not change the resulting accuracy by more than a few percent. For instance, if we retrain the (non-wide version of the) CIFAR10 network with a 5-step PGD adversary we get the following accuracies when testing against PGD: 5 steps -> 45.00%, 10 steps -> 43.02%, 20 steps -> 42.65%, 100 steps -> 42.21%.\n \nRegarding rubbish class examples: We agree that rubbish class examples are an important class to consider. However, it is unclear how to rigorously define them. As we discuss in our paper (3rd paragraph of page 3), providing any kind of robustness guarantees requires a precise definition of the allowed adversarial perturbations.\n", "We thank the reviewer for the feedback.\n \nWe agree with the reviewer that min-max approaches for robust classification have been studied before. As we mention in our paper (right after equation 2.1), such formulations go back at least to the work of Abraham Wald in the 1940s (e.g., see https://www.jstor.org/stable/1969022). What we view as the main contribution of our paper lies, however, not in introducing a new problem formulation but in studying whether such a formulation can inform training methods that lead to reliably robust deep learning models in practice.\n \nWe do not claim that training with a PGD adversary is the main novelty of our paper - prior work has already employed a variety of iterative first-order methods. Instead, our goal is to argue that training with PGD is a principled approach to adversarial robustness and to give both theoretical and empirical evidence for this view. (See the connection via Danskin’s theorem, and our loss function explorations in Appendix A.) Moreover, we demonstrate that adversarial training with PGD - when done properly - leads to state-of-the-art robustness on two canonical datasets. In contrast to much other work in this area, we have also validated the robustness of our models via a public challenge in which our model underwent (unsuccessful) attacks by other research groups.\n \nRegarding PGD being a \"universal\" first-order adversary and the \"broad security guarantee\" claim: first, we would like to note that on Page 2 of our paper, we state that we provide evidence for this view, not that this view is necessarily correct. Still, we believe that from the point of view of first order methods, our evaluation approach is comprehensive. Moreover, it is also worth noting that there has been increasing evidence for this view since we first published our paper: (i) No researcher has been able to break our released models. 
(ii) Follow-up work has used verification tools to test the PGD approaches and found that the adversarial examples found by iterative first order methods are almost as good as the adversarial examples found with a computationally expensive exhaustive search. Hence we believe that at least in the context of L_infinity robustness, viewing PGD as a universal first-order adversary has merit.\n \nRegarding JSMA and DeepFool: JSMA is an attack that is designed to perturb as few pixels as possible (often by a large value). Restricting this attack to the space of attacks we consider (0.3 distance from original in L_infinity norm) leads to an attack that is very slow and, as far as we could tell, less potent than PGD. DeepFool is an attack that has been designed with an aim of computing minimum norm perturbations. For the regime we are studying, the only difference between DeepFool and the CW attack is the choice of target class to use at each step. We didn’t feel that testing against this variation was necessary (given the length of our paper). Again, we want to emphasize that we invited the community to attempt attacks against our published model and we didn’t receive any attacks that significantly lowered the performance of our model. Nevertheless, we will perform the suggested experiments and add them to the final version.\n \nRegarding the point about PGD step size and variations: While one needs to tune PGD to a certain degree, we found that the method is robust to reasonable changes in the choice of PGD parameters. Training against different PGD variants also leads to robust networks. \n \nRegarding Table 4: We emphasize that Table 4 contains results for transfer attacks, which add an additional complication due to the mismatch between the source model used to construct the attack, and the target model that we would like to attack. We do observe that training with FGSM offers more robustness against *transfer* attacks constructed using networks trained with PGD. There is an important caveat, however. The greater robustness is due to the difference in the two models, not because FGSM produces inherently more robust models. We view this effect as an artifact of the transferability phenomenon rather than a fundamental shortcoming of PGD-based adversarial training. When we consider the minimum across all columns in each row, the PGD-trained target model offers significantly more robustness than the FGSM-trained model (64.2% vs. 0.0%).\n", "We thank the reviewer for the positive feedback.\n \nRegarding R+FGSM: We evaluated our robust networks against R+FGSM with multiple restarts and got the following results.\n- MNIST. PGD-40: 93.2%, R+FGSM x40: 92.2%, R+FGSM x2000: 90.51%.\n- CIFAR10 (non-wide). PGD-10: 43.02%, R+FGSM x10: 50.17%, R+FGSM x2000: 48.66%.\nThese experiments suggest that for evaluation purposes R+FGSM is qualitatively similar to PGD (at least for adversarially trained networks). Still, if one attempts to adversarially *train* using R+FGSM, the resulting classifier overfits to the R+FGSM perturbations: while achieving high training and test accuracy against R+FGSM, it is completely vulnerable to PGD.\n \nWe also created loss histograms to compare PGD and R+FGSM with the results plotted at https://ibb.co/gcJuxG (final loss value frequency over 10,000 random restarts for 5 random examples). We observe that R+FGSM exhibits a similar concentration phenomenon to that observed for PGD in Appendix A. We will include these experiments in the final paper version. 
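[Editorial note: for comparison with the numbers above, a sketch of the R+FGSM attack being evaluated, following the description in Tramèr et al. (random step, then a single signed-gradient step). This is an illustrative reconstruction, not the authors' code; `eps` and `alpha` are placeholders with `alpha < eps`.]

```python
import torch
import torch.nn.functional as F

def r_fgsm(model, x, y, eps=0.3, alpha=0.15):
    """R+FGSM sketch: one random step of size alpha, then one FGSM step of
    size eps - alpha, so the total perturbation stays inside the L_inf ball."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1).detach()
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)
    x_adv = x_rand + (eps - alpha) * grad.sign()
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
    return x_adv.clamp(0, 1).detach()
```

[An "R+FGSM x2000" style evaluation then amounts to repeating this with fresh randomness and keeping, per example, the restart with the highest loss.]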
We agree that the difference between PGD and R+FGSM is worth investigating further in subsequent research.\n \nRegarding the computational cost of robust optimization: It is indeed true that training against a PGD adversary increases the training time by a factor that is roughly equal to the number of PGD steps. This is a drawback of this method that we hope will be addressed in future research. But, at this point, getting sufficient robustness indeed leads to a running time overhead. We will add a brief discussion of this in the final version of the paper.\n \nRegarding title choice: Our intention was to convey that there exist robustness baselines that current techniques can achieve. Still, we agree that the title might be too vague and we will revisit our choice.", "We thank the reviewer for bringing the work of Lyu et al. to our attention. We will cite it and discuss it in the \"Related Work\" section of our updated paper. As we already point out in our paper, both the general min-max framework, as well as its application to the problem of adversarial examples, are not new. Min-max formulations have been used extensively in the context of robust optimization and statistics, going back at least to the work of Abraham Wald in the 1930s and 40s. In the context of adversarial examples, we already cite the work of Shaham et al. (https://arxiv.org/abs/1511.05432) and Huang et al. (https://arxiv.org/abs/1511.03034), which consider a similar min-max formulation and appeared on arXiv nearly concurrently with the work of Lyu et al.\n\nTo clarify our contributions: the min-max formulation is part of the approach and *not* claimed as a contribution (see our introduction and the reply to \"Certified Defenses for Data Poisoning Attacks\" above). Instead, one of our main contributions is to study the loss landscape of the saddle point problem, *without replacing the loss by its first-order approximation*. It is known that solving the saddle point problem with a first-order approximation of the loss (see Figure 6 of Appendix B in our paper) produces networks that are vulnerable to more sophisticated (multi-step) attacks.", "Lyu et al. (2015) have made the connection to robust optimization (and proposed a new regularization). \nPlease cite their work https://arxiv.org/pdf/1511.06385.pdf when you introduce the minimax formulation.\n\nHaving said this, I understand that the main contribution of this paper is a systematic empirical study of the minimax formulation. The abstract and the intro seem to give the impression that this is the first paper that suggests a unifying minimax framework. Citing other works and attributing credit accordingly would help emphasize the true contributions of the paper.", "As the author of the data poisoning paper that is mentioned, I just wanted to agree with the authors of the current paper that these papers (in my opinion) are quite different. It is of course interesting to compare the similarities/differences in the min-max formulation, but the problems studied in the two papers are so different that this would (again, in my opinion) have no effect whatsoever on the novelty of the current submission.", "We thank the reviewer for inquiring about the novelty of our work. As we point out in our paper (see Page 3), the min-max formulation itself is not new. In fact, problem formulations of this form have been studied for multiple decades (cf. the work of Abraham Wald). Moreover, there is a rich literature concerning min-max problems in robust optimization. 
Claiming a min-max formulation as new would ignore a significant body of prior work.\n\nAs we point out in the introduction, our main contribution is *how* we employ the min-max formulation to study adversarially robust machine learning. To the best of our knowledge, our paper is the first detailed study of the min-max formulation for robust neural networks. From a scientific point of view, the question is not only whether adversarial robustness can be described with a min-max formulation, but whether such a formulation actually matches the computational reality we face in the practice of deep learning.\n\nOne concrete contribution is our thorough experiments exploring the adversarial loss landscape. Combined with Danskin’s theorem, they give evidence to the theory that adversarial training is indeed a principled way to solve the aforementioned min-max problem. Furthermore, our paper conducted the first public attack challenge to ascertain the robustness of a proposed defense. The challenge showed that our MNIST model is the first deep network that could not easily be broken with a new attack (subject to l_infinity constraints).\n\nFinally, we would also like to point out that our paper appeared publicly nearly concurrently with the NIPS paper mentioned by the reviewer. Moreover, there are several important differences compared to this paper. For instance, their focus is on robustness to corrupt training data, while our paper is about robustly classifying new unseen examples. Overall, we believe that reducing the comparison with the cited NIPS paper to the min-max formulation is an oversimplification of both their work and our work.\n", "This NIPS paper seems to have the same formulation for defending against attacks: https://arxiv.org/pdf/1706.03691.pdf\n\nThey consider the same min-max formulation and try to build a defense against attacks (similar idea to current paper?). \n\nThey consider data-poisoning, but one could imagine the same ideas applied to adversarial attacks. Does the current paper propose something very new from what was described in the NIPS paper? I'm not a specialist in analysis or large-scale optimization, hence my question. \n\nThanks!" ]
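[Editorial note: for reference, the saddle-point formulation this whole exchange revolves around is conventionally written as below. This is a standard statement of the objective, not a quotation from any of the papers; adversarial training approximates the inner maximum with PGD and the outer minimum with SGD.]

```latex
\min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}}
\left[ \max_{\|\delta\|_{\infty} \le \epsilon} L\big(\theta,\, x + \delta,\, y\big) \right]
```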
[ -1, 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJzIBfZAb", "iclr_2018_rJzIBfZAb", "iclr_2018_rJzIBfZAb", "iclr_2018_rJzIBfZAb", "rkO53U_ez", "Hy0j8ecgz", "SyRt7SoxG", "rkj4yGsgf", "iclr_2018_rJzIBfZAb", "ryQ33hmkf", "rJsu3TGyG", "iclr_2018_rJzIBfZAb" ]
iclr_2018_By4HsfWAZ
Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge
We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena the data intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like maths or physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from the physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and comparison with a series of baselines, including a state of the art numerical approach, are then provided.
accepted-poster-papers
This paper proposes to use data-driven deep convolutional architectures for modeling advection diffusion. It is well motivated and comes with convincing numerical experiments. Reviewers agreed that this is a worthy contribution to ICLR with the potential to trigger further research in the interplay between deep learning and physics.
train
[ "BJBU32dgz", "SyeN_yclz", "Bk6nbuf-M", "Skf3ipimM", "r1saUCzXG", "rkX5LRzmz", "Skp8ICG7f", "BkK-sP4xf", "H1UFgYQlG", "HJdRxVkgM", "H1lXnCjyG", "Sy569OJ1G", "SykEv6KRW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "The paper ‘Deep learning for Physical Process: incorporating prior physical knowledge’ proposes\nto question the use of data-intensive strategies such as deep learning in solving physical \ninverse problems that are traditionally solved through assimilation strategies. They notably show\nhow physical priors on a given phenomenon can be incorporated in the learning process and propose \nan application on the problem of estimating sea surface temperature directly from a given \ncollection of satellite images.\n\nAll in all the paper is very clear and interesting. The results obtained on the considered problem\nare clearly of great interest, especially when compared to state-of-the-art assimilation strategies\nsuch as the one of Béréziat. While the learning architecture is not original in itself, it is \nshown that a proper physical regularization greatly improves the performance. For these reasons I \nbelieve the paper has sufficient merits to be published at ICLR. That being said, I believe that \nsome discussions could strengthen the paper:\n - Most classical variational assimilation schemes are stochastic in nature, notably by incorporating\nuncertainties in the observation or physical evolution models. It is still unclear how those uncertainties \ncan be integrated in the model;\n - Assimilation methods are usually independent of the type of data at hand. It is not clear how the model\nlearnt on one particular type of data transpose to other data sequences. Notably, the question of transfer\nand generalization is of high relevance here. Does the learnt model performs well on other dataset (for instance,\nacquired on a different region or at a distant time). I believe this type of issue has to be examinated \nfor this type of approach to be widely use in inverse physical problems. \n", "In this paper, the authors show how a Deep Learning model for sea surface temperature prediction can be designed to incorporate the classical advection diffusion model. The architecture includes a differentiable warping scheme which allows back propagation of the error and is inspired by the fundamental solution of the PDE model. They evaluate the suggested model on synthetic data and outperform the current state of the art in terms of accuracy.\n\npros\n- the paper is written in a clear and concise manner\n- it suggests an interesting connection between a traditional model and Deep Learning techniques\n- in the experiments they trained the network on 64 x 64 patches and achieved convincing results\n\ncons\n- please provide the value of the diffusion coefficient for the sake of reproducibility\n- medium resolution of the resulting prediction\n\n\nI enjoyed reading this paper and would like it to be accepted.\n\nminor comments:\n- on page five in the last paragraph there is a left parenthesis missing in the inline formula nabla dot w_t(x))^2.\n- on page nine in the last paragraph there is the word 'flow' missing: '.. estimating the optical [!] between 2 [!] images.'\n- in the introduction (page two) the authors refer to SST prediction as a 'relatively complex physical modeling problem', whereas in the conclusion (page ten) it is referred to as 'a problem of intermediate complexity'. This seems to be inconsistent.", "The authors use deep learning to learn a surrogate model for the motion vector in the advection-diffusion equation that they use to forecast sea surface temperature. 
In particular, they use a CNN encoder-decoder to learn a motion field, and a warping function from the last component to provide forecasting. \n\nI like the idea of using deep learning for physical equations. I would like to see a description of the algorithm with the pseudo-code in order to understand the flow of the method. I got confused at several points because it was not clear what was exactly being estimated with the CNN. Having an algorithmic environment would make the description easier. I know that the authors are going to publish the code, but this is not enough at this point of the revision. \n\nPhysical processes in machine learning have been studied from the perspective of Gaussian processes. Just to mention a couple of references: “Linear latent force models using Gaussian processes” and \"Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations\"\n\nIn Theorem 2, do you need to care about boundary conditions for your equation? I didn’t see any mention of those in the definition for I(x,t). You only mention initial conditions. How do you estimate the diffusion parameter D? Are you assuming isotropic diffusion? Is that realistic? Can you provide more details about how you run the data assimilation model in the experiments? Did you use your own code?\n", "- Minor corrections and typos, as suggested by the reviewers. This includes imprecision and missing references.\n\n- A new paragraph has been included in the section \"conclusion and future directions\" for answering the comment of reviewer 2 regarding the modeling of uncertainty in the proposed model.\n\n- A more detailed proof of our theorem 1 has been added in Appendix A.\n\n- Some additional experiments were added in Appendix B, for answering (partially) the questions of reviewer 1 regarding the capacity of the model to generalize to new situations.\n", "Thanks a lot for the suggestions and comments; we corrected the mistakes in the updated version. \n\nThe value of the diffusion coefficient in this case is 0.45; we have specified it in the updated version.\nWe chose this value of the image resolution in order to limit the complexity of the computations, but the model could be used as well with larger images.\n\n", "\nThanks a lot for your comments and suggestions.\n\n1. Incorporating uncertainty in the model\nThis is the next step of our work. We could introduce uncertainties in different forms. We started to work on a variant of this model using a scheme similar to conditional VAE (Variational Auto Encoder) with the idea that the model should be able to predict multiple potential vector flow candidates instead of a mean value. VAEs allow sampling from noise distributions and then generating diverse candidates. We have added a paragraph in the “Conclusion and future work” section where we discuss this point and provide some references indicating what type of approach could be used.\n\n\n2. Generalization to other instances\n\n\nWe agree that the potential of the model should also be evaluated for other conditions and at other places. This is however a whole study by itself, involving different datasets and many different types of tests. For now we do not have such datasets available and this is left for further study.\nIn order to provide some indications on the generalization performance, we however performed some additional tests on the available data. We evaluated the model on data distant in time and in space. 
For the former, we trained the model on the period 2011-2017 and tested on 2006-2010. The regions are the same as the ones used in the main text (regions numbered 17 to 20 on figure 4). We have plotted the daily MSE in Figure 6, Appendix B in the new version of the paper. The conclusion is that the range of error remains the same and there is a slight tendency for the error to increase when the time distance between test and train increases. \n For the latter (sequences from different regions), we have trained the model on 2 regions and tested on 2 other regions (and permuted the couples of regions). The two couples of regions have different dynamics. Results are provided in Table 2, Appendix B. The conclusion here is again that the range of error values remains the same. The error depends more on the region dynamics than on the train / test conditions. For regions with high dynamics, the loss is higher than for stable regions. Performance degrades more for the former regions than for the latter when the training set is sampled from a different region. Extensive additional tests should be performed in order to go beyond these partial conclusions.\n\nNote also that it is possible to fine-tune the model using available data of the specific zone in question. Other work, such as [Fischer et al.] or [Ilg et al.], suggests that deep models trained to predict a motion vector field can generalize from synthetic to real data, and when fine-tuned, there is an improvement in performance.\n\n[Fischer et al.]: https://arxiv.org/abs/1504.06852\n[Ilg et al.]: https://arxiv.org/abs/1612.01925\n", "Thanks a lot for your comments. \n\n1. As you mentioned, the model is composed of two components: a CDNN which acts as a motion estimator, and a warping mechanism which predicts the future observation by moving the present observation along the motion field. The output of the CDNN is a vector field, i.e. a tensor of size WxHx2 where W and H are the width and height of the input images, and ‘2’ corresponds to the two velocity components.\n\nInference and training work as follows:\n\nInference - Input: an image sequence (I_t-k+1,...,I_t) of k consecutive images representing temperature acquisitions. Output: an image sequence (\hat{I}_{t+1}, ..., \hat{I}_{t+K}) of K consecutive image predictions. Given the inputs (I_t-k+1,...,I_t) the CDNN will compute an estimated vector field \hat{w}_t. This vector field is then used to advect image I_t so as to compute an estimate of the future observation I_t+1. For multiple time step prediction (i.e. K > 1), the computed output \hat{I}_t+1 is fed back into the CDNN, leading to the following input sequence (I_t-k+2, …, I_t, \hat{I}_t+1) for estimating motion w_t+1. One can then estimate I_t+2 by advecting image \hat{I}_t+1 using \hat{w}_t+1, and so on.\n\nTraining - The training set is a consecutive sequence of images. An example is sampled from the training set and a loss value is computed between the model prediction and the target. Since the warping scheme (the solution to the advection-diffusion equation) is entirely differentiable, the gradient of the loss can be backpropagated through this component for modifying the parameters of the CDNN module. 
\n\nBelow is pseudo-code for the training step:\n\nInput: training set : sequence of SST images I_{1:T}\nOutput: trained model parameters (CDNN parameters)\nIterate until convergence\n-- Sample a sequence I_{t-k+1 : t+K} of images\n\n-- Forward pass of the model: using I_{t-k+1 : t}, we infer K future observations I_{t+1 : t+K} using the inference scheme proposed above.\n\n-- Compute the loss between the targets and model predictions.\n\n-- Backward pass of the model. The gradient of the loss function with respect to the CDNN parameters is back propagated through the warping scheme in order to update the CDNN parameters via SGD (in the experiments we used Adam).\n\n2. Thanks for the references on Gaussian processes. We did not go through the GP literature on the topic of physico-statistical modeling. Even if the methods and the application are different, the motivation and arguments are clearly similar. We have added the references you suggested in the new version of the paper plus additional references in the related work section, under ‘ML for Physical modeling’.\n\n3. In the theorem, a sufficient condition for the existence of this solution of the advection-diffusion equation is that the image function I is square-integrable (the square of I is Lebesgue integrable). A consequence is that I tends to zero as x approaches infinity. This will allow us to calculate the solution. In practice, we do consider that I has a compact support, i.e. I is zero outside its definition domain \Omega. This latter case is more specific than the square integrable hypothesis, and the theorem still applies. We have added a more detailed proof of the theorem in the appendix.\n\n4. We did make the hypothesis that diffusion is isotropic and this is one of the simplification hypotheses adopted in the paper. Our intention was to give a proof of concept about the incorporation of prior knowledge, and to show that the proposed approach performed on par with more complex state of the art assimilation methods. If we focused on the application itself, improvements could probably be brought by including additional priors. In particular, attention mechanisms could be added to the warping mechanism for modeling anisotropy.\n\n5. The diffusion parameter D is estimated on a validation set and its value is 0.45 - this is now indicated in the paper.\n \n6. The data assimilation code is run using code provided by the authors of (Bereziat 2015). We had several interactions with the paper authors while performing the tests with their methods.\n", "To approximate the values of the spatial derivatives, we use a forward difference (forward Euler in space), i.e. \frac{\partial u}{\partial x} \approx \frac{u_{n+1, k} - u_{n, k}}{\Delta x}, where \Delta x is the spatial discretisation step, and n and k are the spatial indices of the pixels.", "Just for completeness, do you approximate the derivatives via numerical difference, given that you are predicting over a discrete set of coordinates? E.g.:\n\frac{\partial u }{\partial x} \approx \sum_{i,j \in Dom} \frac{u(i,j) - u(i-1,j)}{\delta}", "1. Gradient and divergence are computed with respect to the spatial variables x and y. To be more precise:\n\n \left \| \nabla w \right \|^2 = (\frac{\partial u }{\partial x})^2 + (\frac{\partial u }{\partial y})^2 + (\frac{\partial v }{\partial x})^2 + (\frac{\partial v }{\partial y})^2\n\nand \n\n(\nabla . w)^2 = ( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y})^2\n\n2. 
We directly used the data generated by NEMO, available at\n\nhttp://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024\n", "Nice work, really happy to see combining physics with DL does give such nice results. \nI was wondering if you could answer a few questions:\n1. In the regularization scheme - equation 6 - you are using the gradient and the divergence. With respect to what quantity (e.g. the input?) are you taking the gradient, and if it is the input, what exactly is the divergence when the input is not Cartesian coordinates but numerical values?\n\n2. Could you elaborate in more detail on how \"exactly\" you used NEMO in order to obtain the data used for training? My understanding is that you used the NOAA6 data and assimilated it into the NEMO engine. However, there are a few core engines under the NEMO project. I know this might be frustrating but for reproducibility, this would be very helpful as I don't think many people are familiar with the software. Or on how to actually obtain the data from NOAA6.\n", "Thank you for the questions and remarks.\n\n1. Since the integration is on R^2, we introduce the two corresponding 1-D variables; we then separate the Fourier transform of the advection term into two terms corresponding to each variable. Using an integration by parts for both terms and re-integrating with respect to the other variable, we obtain the solution. We will provide the details in an updated version.\n2. The parameter k was chosen by testing on the validation set. The optimal tradeoff between complexity and accuracy appears to be k=4.\n3. If the value of the pixel comes from outside the frame, we use the mean value of the pixels of the image. Since this is not informative, the model thus learns not to use values coming from outside the frame. \n4. The diffusion coefficient is set as a hyperparameter. \n", "Hi,\n\nInteresting paper. Could you please clarify a few details below?\n\n1. In the proof of the theorem, how exactly did you obtain the Fourier transform of the advection term of the PDE?\n2. You mention that the input to the network is a \"sequence of k consecutive SST images\": what value of \"k\" did you use?\n3. When applying the warping scheme, what is the policy for when the previous position \"x - w\" falls outside of the boundaries of the images?\n4. Still in the warping scheme, how do you estimate the diffusion coefficient D?\n\nThank you and apologies if the answers are in the paper and I missed them." ]
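[Editorial note: to complement the pseudo-code and the finite-difference formulas in the exchange above, here is an illustrative PyTorch sketch of one training step with a differentiable warping scheme, plus the forward-difference smoothness and divergence penalties. This is a hedged reconstruction under simplifying assumptions (bilinear backward warping; the diffusion term with coefficient D and the out-of-frame mean-fill policy are omitted for brevity); `cdnn` stands for any convolutional network producing a 2-channel motion field, and `lam` is a placeholder regularization weight.]

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Advect img (N,1,H,W) along flow (N,2,H,W): sample I_t at x - w(x).
    Bilinear sampling keeps the warping differentiable, so the loss gradient
    reaches the motion estimator."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    base = torch.stack((xs, ys)).float().to(img.device)   # (2,H,W), channel 0 = x
    src = base.unsqueeze(0) - flow                        # backward warping: x - w(x)
    gx = 2.0 * src[:, 0] / (w - 1) - 1.0                  # normalize to [-1, 1]
    gy = 2.0 * src[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                  # (N,H,W,2) in (x,y) order
    return F.grid_sample(img, grid, mode='bilinear', align_corners=True)

def regularizers(flow, dx=1.0):
    """Forward differences for ||grad w||^2 and (div w)^2 on flow (N,2,H,W),
    matching the formulas quoted in the response above."""
    u, v = flow[:, 0], flow[:, 1]
    du_dx, du_dy = (u[:, :, 1:] - u[:, :, :-1]) / dx, (u[:, 1:] - u[:, :-1]) / dx
    dv_dx, dv_dy = (v[:, :, 1:] - v[:, :, :-1]) / dx, (v[:, 1:] - v[:, :-1]) / dx
    grad_norm = sum((d ** 2).mean() for d in (du_dx, du_dy, dv_dx, dv_dy))
    div = ((du_dx[:, :-1, :] + dv_dy[:, :, :-1]) ** 2).mean()  # crop to matching shape
    return grad_norm, div

def train_step(cdnn, optimizer, seq, target, lam=0.1):
    """seq: (N,k,H,W) past frames; target: (N,1,H,W) next frame I_{t+1}."""
    flow = cdnn(seq)                        # estimated motion field w_t
    pred = warp(seq[:, -1:], flow)          # advect the latest frame I_t
    grad_norm, div = regularizers(flow)
    loss = F.mse_loss(pred, target) + lam * (grad_norm + div)
    optimizer.zero_grad()
    loss.backward()                          # gradients pass through the warping scheme
    optimizer.step()
    return loss.item()
```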
[ 7, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_By4HsfWAZ", "iclr_2018_By4HsfWAZ", "iclr_2018_By4HsfWAZ", "iclr_2018_By4HsfWAZ", "SyeN_yclz", "BJBU32dgz", "Bk6nbuf-M", "H1UFgYQlG", "HJdRxVkgM", "H1lXnCjyG", "iclr_2018_By4HsfWAZ", "SykEv6KRW", "iclr_2018_By4HsfWAZ" ]
iclr_2018_ryazCMbR-
Communication Algorithms via Deep Learning
Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning. We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We show strong generalizations, i.e., we train at a specific signal to noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting.
accepted-poster-papers
This paper studies trainable deep encoders/decoders in the context of coding theory, based on recurrent neural networks. It presents highly promising results showing that one may be able to use learnt encoders and decoders on channels where no predefined codes are known. Besides these encouraging aspects, there are important concerns that the authors are encouraged to address; in particular, reviewers noted that the main contribution of this paper is mostly on the learnt encoding/decoding scheme rather than in the replacement of Viterbi/BCJR. Also, complexity should be taken into account when comparing different decoding schemes. Overall, the AC leans towards acceptance, since this paper may trigger further research in this direction.
train
[ "BJKuTzwBf", "HyBtZlVrf", "r1u-g-JSf", "ry10QYxSM", "ByEN_hFgM", "S1PB3Ocef", "BkbcZjAgM", "S1LKJamVf", "rygrya7Ez", "r1oRA2QVG", "rkAzD4Z4f", "SyDC5dgEG", "BJwz5EyEz", "B1lvp76mG", "Bk79HmpQf", "HyGycOnXM", "SJkduKjXz", "BydMVxizM", "Hkm9QejMM", "r11m9pcfG", "HyOXBaqzG", "SJTmc33JG", "Syfah8z1z", "B1vxh70A-", "HkRwKLzkG", "BJG7lOZJz", "B1krcrCRZ", "r1_T4BACW", "ryUtBwdRW" ]
[ "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public", "author", "author", "public", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "Thanks!", "Thanks for your comments. Indeed, the Turbo decoder is not nearest neighbor, and therefore there is no theorem that the turbo decoder will perform better on every other noise distribution with the same variance. Indeed, if such was the case, there would be no way for the turbo decoder to do worse (since it is mathematically proven). \n\nOn the other hand, Turbo codes do well on the AWGN channel, even when no theorem fully explains its performance. Thus this is an empirical fact, and sets up an empirical expectation that such good performance may extend to other distributions. Furthermore, Turbo decoder is deployed routinely in practice and its bad performance when the distribution is perturbed can affect co-existence with other technologies (like radar) and this has not been fully appreciated. And we show that it can be remedied. ", "Let me summarize what you said: \n-- The theory says: for a decoder based on nearest neighboring (like Viterbi), Gaussian is the worst noise.\n-- Performance of turbo decoder on Gaussian+burst noise is worse than Gaussian, because turbo is not a nearest neighbor decoder.\n\nThen you concluded: \"This is a case where the decoder behaves very differently from that of our expectation from the theoretical result.\" \n\nWhy did you *expect* Gaussian to be the worst noise for turbo, given that turbo is not nearest neighbor-based and hence the theory doesn't apply?\n\nI'm confused how does this suggest Gaussian+burst noise is an adversarial example for turbo decoder? We actually *don't expect* the decoder to work better for burst noise than Gaussian and the poor performance is *not surprising*. \n", "As someone who has worked on coding theory for many years, I would like to add a comment, explaining \nwhy I found this paper very interesting and how it is related to this review. \n\nAs mentioned, using density evolution we can design degree sequences for LDPCs that have thresholds that get very close to the Shannon limit. The only case where we can actually approach arbitrarily close is the erasure channel. For BIAWGN and BSC we can get quite close (but not actually arbitrarily close).\nHowever, for slightly more complicated channels we have no idea how to do that or even what the fundamental limits are (e.g. deletion channel). I find this paper exciting because it defines a new family of possibilities in code and decoder design. It took us 50 years to go from Shannon's paper to modern LDPC and Turbo codes. So we should not expect that this paper beats LDPCs in their own game but rather as opening a new area of investigation. ", "In this paper the authors propose to use RNNs and LSTMs for channel coding. But I have the impression the authors completely miss the state of the art in channel coding and the results are completely useless for any current communication system. I believe that machine learning, in general, and deep learning, in particular, might be of useful for physical layer communications. I just do not see why it would be useful for channel coding over the AWGN channel. Let me explain.\n\nIf the decoder knows that the encoder is using a convolutional code, why does it need to learn the decoder instead of using the Viterbi or BCJR algorithms that are known to be optimal for sequences and symbols, respectively. I cannot imagine an scenario in which the decoder does not know the convolutional code that it is being used and the encoder sends 120,000 bits of training sequence (useless bits from information standpoint) for the decoder to learn it. 
A more important question: do the authors envision that this learning is done every time there is a new connection, or that it is learnt once and for all? If it is learnt every time, that would only be ideal if we were discovering new channel codes every day, which is clearly not the case. If we learnt it once and for all and then incorporated it in the standard, that would only make sense if the GRU structure were computationally better than BCJR or Viterbi. I would be surprised if it is. If instead of using 2 or 3 memories, we used 6-8, would 120,000 bits be good enough or would we need to exponentially increase the training sequence? So the first result in the paper shows that a tailored structure for convolutional encoding can learn to decode it. Basically, the authors are solving a problem that does not need solving. \n\nFor the Turbocodes the same principle as before applies. In this case the comments of the authors really show that they do not know anything about coding. On page 6, we can read: “Unlike the convolutional codes, the state of the art (message-passing) decoders for turbo codes are not the corresponding MAP decoders, so there is no contradiction in that our neural decoder would beat the message-passing ones”. This is so true, so I expected the DNN structure to be significantly better than turbodecoding. But actually, it is not. These results are in Figure 15 on page 6, and the solution for the turbo decoders and the DNN architecture are equivalent. I am sure that the differences in the plots can be explained by the variability in the received sequence and not because the DNN is superior to the turbodecoder. Also in this case the training sequence is measured in megabits for extremely simple components. If the convolutional encoders were larger (6-8 bits), we would be talking about significantly longer training sequences and more complicated NNs.\n\nIn the third set the NNs seem to be superior to the standard methods when bursty noise is used, but the authors seem to indicate that the NN is trained with more information about these bursts than the other methods have. My impression is that the authors would be better off focusing on this example and explaining it in a way that it is reproducible. This experiment is clearly not well explained and it is hard to know if there is any merit for the proposed NN structure. 
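[Editorial note: for readers outside coding theory, the dynamic-programming decoder this review keeps returning to can be written down in a few lines, together with a back-of-envelope version of the GRU-vs-Viterbi cost question raised above. Both snippets are illustrative toys (hard-decision Hamming metrics, textbook operation counts), not anything from the paper.]

```python
# Toy hard-decision Viterbi for the rate-1/2, memory-2 convolutional code
# with generators (7, 5) in octal; real receivers use soft AWGN metrics.
G = [(1, 1, 1), (1, 0, 1)]  # generator taps on (input bit, s1, s2)

def conv_encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [(g0 * b + g1 * s1 + g2 * s2) % 2 for g0, g1, g2 in G]
        s1, s2 = b, s1
    return out

def viterbi_decode(rx):
    metric, paths = {(0, 0): 0}, {(0, 0): []}  # encoder starts in the all-zero state
    for t in range(len(rx) // 2):
        new_m, new_p = {}, {}
        for (s1, s2), m in metric.items():
            for b in (0, 1):
                branch = [(g0 * b + g1 * s1 + g2 * s2) % 2 for g0, g1, g2 in G]
                d = m + sum(e != r for e, r in zip(branch, rx[2 * t:2 * t + 2]))
                ns = (b, s1)
                if ns not in new_m or d < new_m[ns]:   # add-compare-select
                    new_m[ns], new_p[ns] = d, paths[(s1, s2)] + [b]
        metric, paths = new_m, new_p
    return paths[min(metric, key=metric.get)]

# Rough per-position operation counts, in the spirit of the authors' responses below:
def gru_mults_per_step(h=200, d=2):
    return 3 * h * (h + d)   # 3 gates, input + recurrent matrices: quadratic in h

print(gru_mults_per_step())  # ~121200 multiplies per step, vs. 16 branch metrics
                             # and 4 add-compare-select ops for a 4-state trellis
```

[In the noiseless case `viterbi_decode(conv_encode(bits)) == bits`; flipping a couple of received bits is typically corrected, which is the whole point of the code.]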
\n\nWe know that with long enough LDPC codes we can (almost) reach the Shannon limit, so new structure are not needed. If we are focusing on shorter codes (e.g. latency?) then it will be good to understand why do we need to learn the channel codes. A comparison to the state of the art would be needed. Because clearly the used codes are not close to state of the art. For me the authors either do not know about coding or are assuming that we do not, which explains part of the tone of this review. \n", "Error-correcting codes constitute a well-researched area of study within communication engineering. In communication, messages that are to be transmitted are encoded into binary vector called codewords that contained some redundancy. The codewords are then transmitted over a channel that has some random noise. At the receiving end the noisy codewords are then decoded to recover the messages. Many well known families of codes exist, notably convolutional codes and Turbo codes, two code families that are central to this paper, that achieve the near optimal possible performance with efficient algorithms. For Turbo and convolutional codes the efficient MAP decodings are known as Viterbi decoder and the BCJR decoder. For drawing baselines, it is assumed that the random noise in channel is additive Gaussian (AWGN).\n\nThis paper makes two contributions. First, recurrent neural networks (RNN) are proposed to replace the Viterbi and BCJR algorithms for decoding of convolutional and Turbo decoders. These decoders are robust to changes in noise model and blocklength - and shows near optimal performance.\n\nIt is unclear to me what is the advantage of using RNNs instead of Viterbi or BCJR, both of which are optimal, iterative and runs in linear time. Moreover the authors point out that RNNs are shown to emulate BCJR and Viterbi decodings in prior works - in light of that, why their good performance surprising?\n\nThe second contribution of the paper constitutes the design and decoding of codes based on RNNs for a Gaussian channel with noisy feedback. For this channel the optimal codes are unknown. The authors propose an architecture to design codes for this channel. This is a nice step. However, in the performance plot (figure 8), the RNN based code-decoder does not seem to be outperforming the existing codes except for two points. For both in high and low SNR the performance is suboptimal to Turbo codes and a code by Schalkwijk & Kailath. The section is also super-concise to follow. I think it was necessary to introduce an LSTM encoder - it was hard to understand the overall encoder. This is an issue with the paper - the authors previously mentioned (8,16) polar code without mentioning what the numbers mean. \n\nHowever, I overall liked the idea of using neural nets to design codes for some non-standard channels. While at the decoding end it does not bring in anything new (modern coding theory already relies on iterative decoders, that are super fast), at the designing-end the Gaussian feedback channel part can be a new direction. This paper lacks theoretical aspect, as to no indication is given why RNN based design/decoders can be good. I am mostly satisfied with the experiments, barring Fig 8, which does not show the results that the authors claim.\n", "This paper shows how RNNs can be used to decode convolutional error correcting codes. While previous recent work has shown neural decoders for block codes results had limited success and for small block lengths. 
\nThis paper shows that RNNs are very suitable for convolutional codes and achieves state of the art performance for the first time. \nThe second contribution is on adaptivity outside the AWGN noise model. The authors show that their decoder performs well for different noise statistics outside what it was trained on. This is very interesting and encouraging. It was not very clear to me if the baseline decoders (Turbo/BCJR) are fairly compared here since better decoders may be used for the different statistics, or some adaptivity could be used in standard decoders in various natural ways. \n\nThe last part goes further in designing new error correcting schemes using RNN encoders and decoders for noisy feedback communication. \nFor this case capacity is known to be impossible to improve, but the bit error rate can be improved for finite lengths. \nIt seems quite remarkable that they beat Schalkwijk and Kailath, and this shows great promise for other communication problems.\n\nThe paper is very well written with good historical context and great empirical results. I think it opens a new area for information theory and communications with new tools. \n\nMy only concern is that perhaps the neural decoders can be attacked with adversarial noise (which would not be possible for good old Viterbi). It would be interesting to discuss this briefly. \nA second (related) concern is the lack of theoretical understanding of these new decoders. It would be nice if we could prove something about them, but of course this will probably be challenging. \n\n", "Thank you for the references. We will cite them appropriately in the final version.", "Thanks for continuing this interesting discussion. Indeed, the presented definition of adversarial examples in the question is quite interesting: i.e., the departure from expectation. Measured that way, the example of the Viterbi decoder operating on a bursty signal will not qualify as an adversarial example (we had intended a different definition based on an adversary injecting a signal into the channel whose goal is to make the decoder fail). \n\nHowever, if you take a Turbo decoder operating on a bursty signal, indeed one can justify that it is an adversarial example under the definition in question. This is because of the following: there is a theorem asserting that Gaussian noise is the worst case noise, i.e., once you fix the average power of the noise, the performance of a nearest-neighbor decoder will be worst if the noise is Gaussian. Therefore the performance on any other channel with the same average power should be *better.* However, in the case of the channel with Gaussian noise + bursty noise having the same average power, the performance of the *turbo* decoder is much worse than in the channel with only Gaussian noise. This is a case where the decoder behaves very differently from that of our expectation from the theoretical result; this is because the iterative BCJR decoder is not carrying out nearest neighbor decoding (unlike the Viterbi decoder).\n\nFurthermore, we would like to point out another nuance in adversarial examples. In the aforementioned cases, we are only changing the distribution of the channel (without looking at the data). A more stringent version, studied in the CS theory literature as worst-case noise, would allow the adversary to choose the noise after looking at the data input into the channel. Even in computer vision, this is the usual definition, where the adversary looks at the image before injecting the noise. 
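[Editorial note: to make the channel in this exchange concrete, here is a sketch of the "Gaussian + bursty" noise being discussed; all parameter values are illustrative placeholders, not the paper's experimental settings.]

```python
import numpy as np

def gaussian_plus_bursts(n, sigma=1.0, p_burst=0.05, sigma_burst=5.0, seed=None):
    """Background AWGN plus occasional high-power bursts.

    Each symbol independently receives an extra high-variance component with
    probability p_burst -- a simple stand-in for interference (e.g. radar)."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal(n)
    hit = rng.random(n) < p_burst
    noise[hit] += sigma_burst * rng.standard_normal(hit.sum())
    return noise
```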
We would like to point out that for real communication problems, the coarse-grained adversary, which can only control the noise distribution, may be a bit more realistic. ", "Thanks for your interest. We agree that latency is an important factor and the throughput demand in decoding is even more aggressive than in computer vision. We would like to point out that our neural networks are much shallower (2 layers) compared to the 1000 layers employed in vision. It is not immediately obvious how the computational tradeoffs will play out in this case. This is beyond the scope of the present paper and an important direction for further research. ", "Neat idea and results, and I'm excited to see more people working on this.\n\nYou wrote \"generalization is difficult [when using neural networks to decode non-sequential codes]\", and the focus of your paper is on convolutional codes. \n\nHowever, there actually have been a few papers recently that get it to work for arbitrary linear codes and longer polar codes:\n\n\"Learning to Decode Linear Codes Using Deep Learning\" - https://arxiv.org/abs/1607.04793\n\"Neural Offset Min-Sum Decoding\" - https://arxiv.org/abs/1701.05931\n\"Deep Learning Methods for Improved Decoding of Linear Codes\" - https://arxiv.org/abs/1706.07043\n\"Scaling Deep Learning-based Decoding of Polar Codes via Partitioning\" - https://arxiv.org/abs/1702.06901\n\"Improved Polar Decoder Using Deep Learning\" - https://www.researchgate.net/publication/321122117_Improved_polar_decoder_based_on_deep_learning", "As far as I know, adversarial examples are inputs that we *expect* the model to process correctly, but it *surprisingly* doesn't. As an example, assume we have an image that is classified correctly. We *expect* small perturbations of that image to be correctly classified as well. Such a perturbed image is called an \"adversarial example\" if, to our *surprise*, it can fool the model. \n\nNow consider, say, the Viterbi decoder and burst noise. Do we *expect* the model to process such cases correctly, and that it *surprisingly* doesn't?\n", "Ideas such as distillation or binarization have been mostly applied to image data and it is not immediately clear whether they will work on other data types as well. Honestly, I don't think they will reduce complexity without significantly losing performance, unless experimental results prove otherwise. \n\nAlso, compared to vision, there is a genuine real-time constraint in decoding, as typically millions of bits need to be decoded every second...\n\nI do like the idea of the paper (this is why I'm commenting here after all), but I am not sure if it will be practical. ", "The answer is subtle and we explain in detail below. We weren't as clear in our earlier response and we apologize for that. \n\nFor an apples-to-apples comparison of two different (memoryless, random) noise sequences, let us keep the average energy (i.e., the expected value of the sum of squares of the noise values) the same. Then Shannon showed in his original 1948 paper that among all memoryless noise sequences (with the same average energy), Gaussian is the worst in terms of capacity. However, it was not clear for a long time whether a decoder trained to be optimal for Gaussian noise (i.e., the minimum distance decoder) would be robust to other noise pdfs. 
This was confirmed by a strong result of Lapidoth ’96 (Nearest Neighbor Decoding for Additive Non-Gaussian Noise Channels, IEEE Transactions on Information Theory): for any finite block length, the BER achieved by the minimum distance decoder for any noise pdf is *upper bounded* by the BER for Gaussian noise. Since Viterbi decoding is the minimum distance decoder for convolutional codes, it is naturally robust in the precise sense above. \nOn the other hand, the turbo decoder does not inherit this property, making it vulnerable to adversarial attacks. As can be seen in Section 3, the performance of the turbo decoder (designed for the AWGN channel) under T-distributed/bursty noise is extremely poor. \n\nWhen the noise is not Gaussian (which is the worst-case scenario), then there could be decoders that achieve much better performance. This is the sense in which our allusion to susceptibility of Viterbi decoding to adversarial/other noise pdfs was made. Specifically, we consider the practical scenario of bursty noise as an important example of non-Gaussian noise in this paper. Practical considerations suggest that we shouldn't keep the average noise comparable to Gaussian (which is the background noise and generally much smaller (20~50dB lower) than interference), so the apples-to-apples comparison setting discussed in the paragraph above is not as relevant. In this case, the Viterbi decoder is not as effective as another decoder that harnesses the specific property of the bursty noise statistics -- indeed, as we see in Section 3, the neural network decoder is adaptive to this situation. Furthermore, when the noise is bursty, the turbo decoder with its constituent BCJR decoders is subject to severe error propagation that leads to a significant degradation in performance. We demonstrate that our learned neural network decoder outperforms well known hand-coded methods in the literature for this exact same setting.", "We accept this point in its entirety. Both BER and complexity are important metrics of performance of a decoder. In this paper our comparison metrics have focused on the BER. We will make this point very clear in the revised paper. The main claim in the paper is that there is an alternative decoding methodology which has been hitherto unexplored and to point out that this methodology can yield excellent BER performance. Regarding the circuit complexity, we would like to point out that in computer vision, there have been many recent ideas to make large neural networks practically implementable in a cell phone. For example, the idea of distilling the knowledge in a large network to a smaller network and the idea of binarization of weights and data in order to do away with complex multiplication operations have made it possible to implement inference on much larger neural networks than the one in this paper in a smartphone. Such ideas can be utilized in our problem to reduce the complexity too. We would like to point out that a serious and careful circuit implementation complexity optimization and comparison is significantly complicated and submit that it is outside the scope of a single paper. Having said this, a preliminary comparison is discussed below with another anonymous reviewer, but we provide it here for completeness: \n\nThe number of multiplications is quadratic in \n- the dimension of hidden states of the GRU (200) for the proposed neural decoder, and \n- the number of encoder states (4) for Viterbi and BCJR. 
\n\nThe number of add-compare-select units is \n- 0 for the proposed neural decoder, and \n- linear in the number of encoder states (4) for Viterbi. \n\nApart from optimizing the size/complexity of the current neural decoder, significant parallelization is possible in the multiplicative units in the neural decoder, as well as pipelining. These designs in conjunction with a careful analysis of the fixed point arithmetic requirements of the different weights are under active research, and outside the scope of this paper.\n\nMore generally, circuit implementation complexity improves with time due to both hardware design and component improvements. This has been the case throughout the history of reliable communication: promising algorithms are first proposed and then followed by a surge of research in efficient circuit implementation. A case in point is the recently proposed polar codes, where significant recent research has made their decoding competitive in practice (and they have indeed been accepted for parts of the 5G wireless standard earlier in the summer of 2017). \n\n ", "Can you elaborate on how the Viterbi decoder is vulnerable to adversarial examples?", "Below are detailed comments.\n\nQ1. Why should one use data to learn the Viterbi/BCJR/Turbo when we know them already?\nA1. This is because we only know the optimal algorithms in simple settings and how to generalize them to more complicated or unknown channel models is sometimes unclear. We demonstrate that neural networks that learn from data can yield more robust and adaptable algorithms in those settings. This is the point of Section 3 and is elaborated below.\n\nThere are two advantages to RNN decoders that go beyond mimicking Viterbi or BCJR:\n(1) Robustness: Viterbi and BCJR decoders are known to be vulnerable to changes in the channel, as those are highly tailored for the AWGN. We show in Section 3, via numerical experiments with T-distributed noise, that the neural network decoder trained on AWGN is much more robust against the changes in the channel. This makes, among other things, our neural network decoder a much more attractive alternative to Viterbi or BCJR decoders in practice, where the channel model is not available.\n(2) Adaptivity: It is not easy to extend the idea of the Viterbi decoder and iterative Turbo decoding beyond the simple convolutional codes and the standard Gaussian channel (or any other Discrete Memoryless Channel). On the other hand, our neural network decoder provides a new paradigm for decoding that can be applied to any encoder and any channel, as it learns from training examples. To showcase the power of this “adaptivity”, we show improved performance on bursty channels. \n\nA more stark example of the utility presents itself in the feedback channel. There exists no known practical encoding-decoding scheme for a feedback channel. Only because we have a neural network decoder that can adapt to any encoder, we are able to find a novel encoder (also NN based) that uses the feedback information correctly and achieves performance significantly better than any other competing schemes. 
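[Editorial note: as an aside for readers unfamiliar with this setting, the AWGN channel with noisy output feedback just described can be simulated in a few lines. This is an illustrative sketch with placeholder noise levels, where `encoder_fn` stands for any causal encoder, such as the RNN-based one the authors describe.]

```python
import numpy as np

def feedback_round(encoder_fn, message, n_symbols, sigma_fwd=1.0, sigma_fb=0.1):
    """AWGN channel with noisy output feedback and one-symbol delay.

    encoder_fn(message, feedback_so_far) -> next transmitted symbol x_k.
    Because the forward noise is white, even perfect feedback of y_{k-1}
    reveals nothing about the upcoming noise sample n_k."""
    rng = np.random.default_rng()
    received, feedback = [], []
    for _ in range(n_symbols):
        x_k = encoder_fn(message, feedback)                 # causal: past feedback only
        y_k = x_k + sigma_fwd * rng.standard_normal()       # forward AWGN link
        received.append(y_k)
        feedback.append(y_k + sigma_fb * rng.standard_normal())  # noisy feedback link
    return received
```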
This would not have been possible without a NN decoder and the techniques we learned in training one to mimic the simple Viterbi. \n \nQ2. Learning is done every time there is a new connection or it is learnt once and for all?\nA2. We are not sure if we understand the question. The channel models in communication are *statistical* (and in particular the AWGN one) and codes are built to have probabilistically good performance. The question of \"learning done every time a connection is made\" does not arise. \n\nQ3. If instead of using 2 or 3 memories, we used 6-8, would 120,000 bits be good enough or would we need to exponentially increase the training sequence?\nA3. First note that even the Viterbi/BCJR decoder's computational complexity increases exponentially in the (memory) state dimension. While we have not tried a careful analysis on the complexity of neural decoders for codes with state dimension higher than 3, we observe the following: from dimension 2 to 3, we increased the size of the neural decoder - from 2-layered bi-GRUs (200 hidden nodes) to 2-layered bi-LSTMs (400 hidden nodes). The reason we haven't explored a careful study of the memory size in neural network decoding is the following: modern coding theory recommends improving the 'quality' of the convolutional code not by increasing the memory state dimension, but via the 'turbo effect'. The advantage of turbo codes over convolutional codes is that they use convolutional codes with a short memory as the constituent codes, but the interleaver allows for very long range memory, which is naturally decoded via iterative methods. The end result is that turbo codes can be decoded with far lower decoding complexity than convolutional codes with a very long memory, for the same BER performance. Indeed, turbo codes have largely replaced convolutional codes in modern practice. \n\nQ4. In Figure 15 page 6, the solution for the turbo decoders and the DNN architecture are equivalent? I am sure that the differences in the plots can be explained by the variability in the received sequence and not because the DNN is superior to the turbodecoder. \nA4. Turbo codes have been used in everyday cellular communication systems since 2000 and their decoders have been highly optimized. \n(1) A nontrivial improvement in BER performance over state of the art would be considered a major development and discussion topic in standard body documents (we refer the reviewer to Annex B of 3GPP TS 36.101 (cited at the bottom of page 1 of our manuscript) from this summer for a feel for how large an impact on standard body decisions minor improvements to BER performance can have). Indeed, a \"significantly better than turbo decoding\" performance would qualify as a very major result in communication theory with correspondingly large impact on practice.\n(2) Our BER/BLER performance is averaged over 100000 blocks and the standard deviation is infinitesimally small. This is in contradiction to the statement of being \"sure that it can be explained by the variability in the received sequence.\"", "\nQ5. NN is trained with more information about the burst noise model than others? My impression is that the authors would be better off focusing on this example and explaining it in a way that it is reproducible. \nA5. The two state-of-the-art heuristics - erasure thresholding and saturation thresholding (Safavi-Naeini et al. 2015) - that we are comparing the NN decoder against fully utilize the knowledge of the bursty channel model. 
Specifically, the threshold value in those methods is chosen based on the burst noise model. We believe the experiment is fully explained and entirely reproducible (we are also uploading our code base to GitHub after the ICLR review process is completed). Perhaps the reviewer can be specific about exactly which parts of the experiment could use a better explanation. \n\nQ6. The feedback proposal is unrealistic. It basically assumes that the received real-valued y_k can be sent (almost) noiselessly to the encoder with minimal delay and almost instantaneously. So the encoder knows the received noise and is able to cancel it out.\nA6. (1) The AWGN channel with output feedback is the most classical of models in communication theory (studied by Shannon himself in 1956). There has been a huge effort in the literature over the ensuing decades (the most important of which we have amply cited in our manuscript), and the model is of very basic importance to multiple professional societies (including the IEEE Communications Society and the IEEE Information Theory Society). Although idealistic, it provides a valuable training ground for understanding how to use feedback to communicate more efficiently.\n(2) The \"W\" in the phrase AWGN (which is our channel model) refers to \"white\", which means the noise is memoryless across different time symbols. So even a single time step delay (not to mention \"minimal delay\") does not allow for the claim that \"the encoder knows the received noise and is able to cancel it out.\" Perhaps the reviewer would like to reconsider his/her ratiocination?", "Thank you for your comments. \n\n1. Neural decoders can be attacked with adversarial noise: \nThis is a great point, which is related to the current ongoing advances in other areas of neural networks (e.g. classification). At a high level, there are two types of adversarial noise that can hurt our approach. The first one is poisoning the training data. If we are training on data collected from real channels, an adversary who knows that we are training can intervene and add adversarial noise to make our trained decoder useless. This sets up an interesting game between the designer (us) and the attacker, in the form of how much noise power the adversary needs in order to make our decoder fail. This, we believe, is a fascinating research question, and we will add discussions in the final version of our manuscript.\nThe second type of attack is adversarial examples, where at test time an adversary changes the channel to make our decoder fail. In this scenario, both Viterbi and our decoder are vulnerable. Our numerical experiments on robustness are inspired by such scenarios, where we show that neural network decoders are more robust against such attacks (or natural dynamic changes) in Section 3.\n\n2. Fair comparison to baseline decoders: \nThere are two ways to run fair experiments on other channels, in our opinion. One is to mimic the dynamic environment of the real world by using encoder-decoders that are tailored for AWGN (both Turbo/BCJR and a neural network decoder trained on AWGN) and see how robust they are against changes in the channel. These are the experiments we run with the T-distribution. The other is, as suggested by the reviewer, to design decoders based on the new statistics of the channel that work well outside of AWGN. These are the experiments we run with bursty channels. We agree that these two experiments address two different questions, but we believe we are fair in the comparisons to competing decoders within each setting. \n\n3. 
Theoretical understanding of these neural decoders/coding schemes is a challenging but very interesting future research direction. \n", "Thank you for your comments. \n\n1. Representability, Learnability and Generalization:\nThere are three aspects to showing that a learning problem can be solved through a parametric architecture. \n\n(1) Representability: The ability to represent the needed function through a neural network. For the Viterbi/BCJR algorithms, this representability was shown in prior work by handcrafting parameters that represent the Viterbi/BCJR algorithms. We note that neural networks with a sufficient number of parameters can indeed represent any function through the universal approximation theorem for feedforward networks and RNNs (Cybenko, G. 1989; Siegelmann, H. T. & Sontag, E. D. 1995), and therefore this result is not that surprising. \n\n(2) Learnability: Can the required function be learnt directly through gradient descent on the observed data? For Viterbi and BCJR, learnability was neither known through prior work nor is it obvious. One of the main contributions of our work is showing that those algorithms can be learnt from observed data. \n\n(3) Generalization: Does the learnt function/algorithm generalize to unobserved data? We show this not only at the level of new unobserved codewords, but also show that the learnt algorithm trained on shorter blocks of length 100 can generalize well to longer blocks of length up to 10,000. Such generalization is rare in many realistic problems.\n\nTo summarize, out of the three aspects, only representability was known from prior work (and, we agree with the reviewer, it is the least surprising given universal representability). Learnability and generalization of the learnt Viterbi and BCJR algorithms to much larger block lengths were both unknown in prior art; they are surprising and interesting in their own right. We note that the Viterbi and BCJR algorithms are useful in machine learning beyond communications problems, representing dynamic programming and forward-backward algorithms, respectively.\n\nPeter Elias introduced convolutional codes in 1955, but efficient decoding through dynamic programming (Viterbi decoding) was only available in 1967, requiring mathematical innovation. We note that the ability to learn the Viterbi algorithm from short-block-length data (which can be generated by full search) and generalize it to much longer blocks implies an alternative methodology for solving the convolutional code problem. Such an approach could have significant benefits in problems where corresponding mathematically optimal algorithms are not known at the moment.\n\nWe demonstrate the power of this approach by studying the problem of the channel with feedback, where no good coding schemes are known despite 70 years of research.\n\n2. Advantages of using RNNs instead of Viterbi or BCJR:\nThere are two advantages to RNN decoders that go beyond mimicking Viterbi/BCJR.\n\n(1) Robustness: Viterbi and BCJR decoders are known to be vulnerable to changes in the channel, as they are highly tailored for the AWGN. We show in Section 3, via numerical experiments with T-distributed noise, that the neural network decoder trained on AWGN is much more robust against changes in the channel. 
This makes, among other things, our neural network decoder a much more attractive alternative to Viterbi/BCJR decoders in practice, where the channel model is not available.\n\n(2) Adaptivity: It is not easy to extend the idea of the Viterbi decoder and iterative Turbo decoding beyond simple convolutional codes and the standard Gaussian channel (or any other Discrete Memoryless Channel). On the other hand, our neural network decoder provides a new paradigm for decoding that can be applied to any encoder and any channel, as it learns from training examples. To showcase the power of this \"adaptivity\", we show improved performance on the bursty channel.\n\nA more stark example of the utility presents itself in the feedback channel. There exists no known practical encoding-decoding scheme for a feedback channel. Only because we have a neural network decoder that can adapt to any encoder are we able to find a novel encoder (also neural network based) that uses the feedback information correctly and achieves performance significantly better than any other competing scheme. This would not have been possible without a neural network decoder and the techniques we learned in training one to mimic the simple Viterbi.\n\n3. Updated curve for new codes on the AWGN channel with feedback:\nWe have improved our encoder significantly by borrowing the idea of zero-padding from coding theory. In short, most of the errors occur in the last bit, whose feedback information was not utilized by our encoder. We resolved this issue by padding a zero at the end of the information bits (hence, the codeword length is 3(K+1) for K information bits). This significantly improves the performance, as shown in the new Figure 8. A full description of the encoder-decoder architecture is provided in Appendix D. \n\n4. We replaced \"(8,16) polar code\" by \"rate 1/2 polar code over 8 information bits\".", "Tables 1 and 2 are for the code of Fig. 1. \n\nRe codes with state dimension higher than 2 (or 3):\n\nFirst note that the Viterbi/BCJR decoder's computational complexity increases exponentially in the (memory) state dimension. While we have not tried a careful analysis of the complexity of neural decoders for codes with state dimension higher than 3, we observe the following: from dimension 2 to 3, we increased the size of the neural decoder - from 2-layered bi-GRUs (200 hidden nodes) to 2-layered bi-LSTMs (400 hidden nodes). It seems that the network has to scale as the state dimension increases, but we don't have a good guess as to what order it will scale in. \n\nOn a related note: modern coding theory recommends improving the 'quality' of a convolutional code not by increasing the memory state dimension, but via the 'turbo effect'. The advantage of turbo codes over convolutional codes is that they use convolutional codes with a short memory as the constituent codes, but the interleaver allows for very long range memory that is naturally decoded via iterative methods. The end result is that turbo codes can be decoded with far lower decoding complexity than convolutional codes with a very long memory, for the same BER performance. Indeed, turbo codes have largely replaced convolutional codes in modern practice. \n", "Thanks for your interest. \n\n1) The Viterbi algorithm is not a simple deterministic function but an algorithm to find the shortest path on a directed graph (defined by the encoding structure as well as the number of information bits) with non-negative weights on edges (defined by the Gaussian noise samples). 
In other words, it is Dijkstra's shortest path algorithm (dynamic programming) on a specific graph that changes instance by instance (since the noise samples and the number of information bits vary). While it is conceivable that the Viterbi algorithm can be represented with a high-capacity neural network, it is unclear if it can be learnt from data in reasonable training time. This is what we demonstrate. \n\n2) Re memory/computation complexity: please see our response to \"complexity comparison\" below. \n\n3) Why should one use data to learn the Viterbi algorithm (or BCJR or Turbo decoding) when we know them already? This is because we only know the optimal algorithms in simple settings, and how to generalize them to more complicated or unknown channel models is sometimes unclear. We demonstrate that neural networks that learn from data can yield more robust and adaptable algorithms in those settings; see Section 3.\n\n4) We agree that it will be very interesting if the ML model leads to the discovery of a new class of coding schemes. Indeed, this is exactly what we show in Section 4: for the Gaussian channel with feedback, the NN-discovered codes beat the state-of-the-art codes. \n", "\nThe complexity of all decoders (Viterbi, BCJR, Neural) is linear in the number of information bits (block length). \n\nThe actual run times are hard to compare since some operations can be parallelized: e.g., matrix-vector multiplications in the neural decoder can be easily parallelized on a GPU.\n\nThanks for pointing out the typo. ", "Thanks for your interest. \n\nThe number of multiplications is quadratic in \n- the dimension of hidden states of the GRU (200) for the proposed neural decoder, and \n- the number of encoder states (4) for Viterbi and BCJR. \n\nThe number of add-compare-select units is \n- 0 for the proposed neural decoder, and \n- linear in the number of encoder states (4) for Viterbi. \n\nThe dimension of hidden states in the GRU can potentially be reduced, using ideas such as network distillation. Apart from optimizing the size/complexity of the current neural decoder, significant parallelization is possible in the multiplicative units in the neural decoder, as well as pipelining. These designs, in conjunction with a careful analysis of the fixed-point arithmetic requirements of the different weights, are under active research and outside the scope of this paper. \n", "Interesting work; however, given that the Viterbi algorithm, for example, is a simple (but elegant) well-defined algebraic function, isn't it to be expected that a neural network with sufficient capacity would be able to approximate it? \n\nAlso, given the existing algorithm with an efficient implementation, is it reasonable to replace it with an RNN? Will the time and memory complexity of such an RNN not be a major issue? \n\nAnd, in general, is it good practice to replace (or replicate) deterministic functions with \"learning\" data-driven models?\n\nOf course, it would be very interesting if the ML model leads to the discovery of (or gives some insight into) a new class of coding schemes. But this direction is not specifically pursued or examined in the paper.", "For the results in Tables 1 and 2: are you using the code of Fig. 1 or one of the codes of Fig. 9?\n\nAlso, did you try a similar analysis for codes with state dimension higher than 2 (or 3)? \nIf not, what is your educated guess for the case of higher state dimensions?", "Thanks for your answer. 
\n\nCan you approximately calculate how many operations per information BIT you have in the neural decoder as compared to the Viterbi decoder (like add-compare-select, multiplication, etc.)?", "Can you please elaborate on the complexity of the proposed neural decoder as compared to that of the Viterbi/BCJR decoder?\n\nAlso, note a typo in Fig. 9: figures (a) and (b) should be exchanged." ]
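The add-compare-select (ACS) recursion contrasted with the GRU's matrix multiplies throughout this exchange can be made concrete with a minimal Viterbi sketch. The rate-1/2, 4-state (memory-2) convolutional code with (7, 5) octal generators below is a textbook choice assumed purely for illustration; it is not taken from the paper or its code.

```python
import numpy as np

# Hypothetical memory-2 (4-state), rate-1/2 convolutional code with
# generator taps (7, 5) in octal -- a common textbook choice, assumed
# here only to illustrate the add-compare-select (ACS) recursion.
G = [(1, 1, 1), (1, 0, 1)]

def branch_outputs(state, bit):
    """Two coded bits produced when `bit` enters the encoder in `state`."""
    reg = (bit, state >> 1, state & 1)  # (u_t, u_{t-1}, u_{t-2})
    return tuple(sum(r * g for r, g in zip(reg, taps)) % 2 for taps in G)

def viterbi(rx):
    """Decode received BPSK symbols rx of shape (T, 2), where bit 0 -> +1.

    Each trellis step runs one ACS per state: add the branch metric,
    compare the competing incoming paths, select the survivor.
    """
    T = len(rx)
    metric = np.full(4, np.inf)
    metric[0] = 0.0  # encoder starts in the all-zero state
    back = np.zeros((T, 4, 2), dtype=int)  # survivor: (prev_state, bit)
    for t in range(T):
        new = np.full(4, np.inf)
        for s in range(4):
            if not np.isfinite(metric[s]):
                continue
            for b in (0, 1):
                ns = (b << 1) | (s >> 1)
                expect = np.array([1 - 2 * c for c in branch_outputs(s, b)])
                m = metric[s] + np.sum((rx[t] - expect) ** 2)  # add
                if m < new[ns]:                                # compare
                    new[ns] = m                                # select
                    back[t, ns] = (s, b)
        metric = new
    s, bits = int(np.argmin(metric)), []  # trace back the survivor path
    for t in range(T - 1, -1, -1):
        s, b = back[t, s]
        bits.append(int(b))
    return bits[::-1]

# Quick noiseless check: encode three bits, then decode them back.
msg, state, tx = [1, 0, 1], 0, []
for b in msg:
    tx.append([1 - 2 * c for c in branch_outputs(state, b)])
    state = (b << 1) | (state >> 1)
assert viterbi(np.array(tx, dtype=float)) == msg
```

Counting the work inside the loop recovers the comparison made above: the ACS cost is linear in the number of states (4 here), whereas a GRU-based decoder replaces it with dense multiplications that are quadratic in the hidden dimension (200 in the responses above).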
[ -1, -1, -1, -1, 2, 6, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "HyBtZlVrf", "r1u-g-JSf", "rygrya7Ez", "ByEN_hFgM", "iclr_2018_ryazCMbR-", "iclr_2018_ryazCMbR-", "iclr_2018_ryazCMbR-", "rkAzD4Z4f", "SyDC5dgEG", "BJwz5EyEz", "iclr_2018_ryazCMbR-", "B1lvp76mG", "Bk79HmpQf", "SJkduKjXz", "HyGycOnXM", "iclr_2018_ryazCMbR-", "r11m9pcfG", "ByEN_hFgM", "ByEN_hFgM", "BkbcZjAgM", "S1PB3Ocef", "B1krcrCRZ", "BJG7lOZJz", "ryUtBwdRW", "r1_T4BACW", "iclr_2018_ryazCMbR-", "iclr_2018_ryazCMbR-", "B1vxh70A-", "iclr_2018_ryazCMbR-" ]
iclr_2018_rJYFzMZC-
Simulating Action Dynamics with Neural Process Networks
Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.
accepted-poster-papers
this submission proposes a novel extension of existing recurrent networks that focuses on capturing long-term dependencies via tracking entities/their states, and tested it on a new task. there's a concern that the proposed approach is heavily engineered toward the proposed task and may not be applicable to other tasks, which i fully agree with. i however find the proposed approach and the authors' justification to be thorough enough, and for now, recommend it to be accepted.
test
[ "SJUEXlDxf", "r1Hu15Kxz", "rJQcSB5gG", "S1d6eY-7f", "HyGietbmz", "ryq_xY-7M", "Hy2mkt-7z", "HyFjC_-Xz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Summary\n\nThis paper presents Neural Process Networks, an architecture for capturing procedural knowledge stated in texts that makes use of a differentiable memory, a sentence and word attention mechanism, as well as learning action representations and their effect on entity representations. The architecture is tested for tracking entities in recipes, as well as generating the natural language description for the next step in a recipe. It is compared against a suit of baselines, such as GRUs, Recurrent Entity Networks, Seq2Seq and the Neural Checklist Model. While I liked the overall paper, I am worried about the generality of the model, the qualitative analysis, as well as a fair comparison to Recurrent Entity Networks and non-neural baselines.\n\nStrengths\n\nI believe the authors made a good effort in comparing against existing neural baselines (Recurrent Entity Networks, Neural Checklist Model) *for their task*. That said, it is unclear to me how generally applicable the method is and whether the comparison against Recurrent Entity Networks is fair (see Weaknesses).\nI like the ablation study.\n\nWeaknesses\n\nWhile I find the Neural Process Networks architecture interesting and I acknowledge that it outperforms Recurrent Entity Networks for the presented tasks, after reading the paper it is not clear to me how generally applicable the architecture is. Some design choices seem rather tailored to the task at hand (manual collection of actions MTurk annotation in section 3.1) and I am wondering where else the authors see their method being applied given that the architecture relies on all entities and actions being known in advance. My understanding is that the architecture could be applied to bAbI and CBT (the two tasks used in the Recurrent Entity Networks paper). If that is the case, a fair comparison to Recurrent Entity Networks would have been to test against Recurrent Entity Networks on these tasks too. If they the architecture cannot be applied in these tasks, the authors should explain why.\nI am not convinced by the qualitative analysis. Table 2 tells me that even for the best model the entity selection performance is rather unreliable (only 55.39% F1), yet all examples shown in Table 3 look really good, missing only the two entities oil (1) and sprinkles (3). This suggests that these examples were cherry-picked and I would like to see examples that are sampled randomly from the dev set. I have a similar concern regarding the generation task. First, it is not mentioned where the examples in Table 6 are taken from – is it the train, dev or test set? Second, the overall BLEU score seems quite low even for the best model, yet the examples in Table 6 look really good. In my opinion, a good qualitative analysis should also discuss failure cases. Since the BLEU score is so low here, you might also want to compare perplexity of the models.\nThe qualitative analysis in Table 5 is not convincing either. In Appendix A.1 it is mentioned that word embeddings are initialized from word2vec trained on the training set. My suspicion is that one would get the clustering in Table 4 already from those pretrained vectors, maybe even when pretrained on the Google news corpus. Hence, it is not clear what propagating gradients through the Neural Process Networks into the action embeddings adds, or put differently, why does it have to be a differentiable architecture when an NLP pipeline might be enough? 
This could easily be tested by another ablation where action embeddings are pretrained using word2vec and then fixed during training of the Neural Process Network. Moreover, in 3.3 it is mentioned that even the Action Selection is pretrained, which makes me wonder what is actually trained jointly in the architecture and what is not.\nI think the difficulty of the task at hand needs to be discussed at some point, ideally early in the paper. Until examples on page 7 are shown, I did not have a sense for why a neural architecture is chosen. For example, in 2.3 it is mentioned that for \"wash and cut\" the two functions fwash and fcut need to be selected. For this example, this seems trivial as the functions have the same name (and you could even have a function per name!). As far as I understand, the point of the action selector is to only have a fixed number of learned actions, and multiple words (cut, slice etc.) should select the same action fcut. Otherwise (if there is little language ambiguity) I would not see the need for a complex neural architecture. Related to that, a non-neural baseline for the entity selection task that in my opinion definitely needs to be added is extracting entities using a pretrained NER system and returning all of them as the selection.\np2 Footnote 1: So if I understand this correctly, this work builds upon a dataset of over 65k recipes from Kiddon et al. (2016), but detailed annotations were created for only 875 of those?\n\nMinor Comments\n\np1: The statement \"most natural language understanding algorithms do not have the capacity …\" should be backed by a reference.\np2: \"context representation ht\" – I would directly mention that this is a sentence encoding.\np3: 2.4: I have the impression what you are describing here is known in the literature as entity linking.\np3 Eq.3: Isn't c3*0 always a vector of zeros?\np4 Eq.6: W4 is an order-3 tensor, correct?\np4 Eq.8: What are YC and WC here and what are their dimensions? I am confused by the softmax, as my understanding (from reading the paragraph on the Action Selection Loss on p.5) was that the expression in the softmax here is a scalar (as it is done for every possible action), so this should be a sigmoid to allow for multiple actions to attain a probability of 1?\np5: \"See Appendix for details\" -> \"see Appendix C for details\"\np5 3.3: Could you elaborate on the heuristic for extracting verb mentions? Is only one verb mention per sentence extracted?\np5: \"trained to minimize cross-entropy loss\" -> \"trained to minimize the cross-entropy loss\"\np5 3.3: What is the global loss?\np6: \"been read (§2.5.\" -> \"been read (§2.5).\"\np6: \"We encode these vectors using a bidirectional GRU\" – I think you are composing a fixed-dimensional vector from the entity vectors? What's eI?\np7: For which statement is (Kim et al. 2016) the reference? Surely, they did not invent the Hadamard product.\np8: \"Our model, in contrast\" use\" -> \"Our model, in contrast, uses\".\np8 Related Work: I think it is important to mention that existing architectures such as Memory Networks could, in principle, learn to track entities and devote part of their parameters to learning the effect of actions. What Neural Process Networks are providing is a strong inductive bias for tracking entities and learning the effect of actions that is useful for the task considered in this paper. 
As mentioned in the weaknesses, this might however come at the price of a less general model, which should be discussed.\n\n# Update after the rebuttal\nThanks for the clarifications and for updating the paper. I am increasing my score by two points and expect to see the ablations as well as the NER baseline mentioned in the rebuttal in the next revision of the paper. Furthermore, I encourage the authors to include the analysis of pretrained word2vec embeddings vs the embeddings learned by this architecture in the paper. ", "SUMMARY.\n\nThe paper presents a novel approach to procedural language understanding.\nThe proposed model reads food recipes and updates the representation of the entities mentioned in the text in order to reflect the physical changes of the entities in the recipe.\nThe authors also propose a manually annotated dataset where each passage of a recipe is annotated with entities, actions performed over the entities, and the change in state of the entities after the action.\nThe authors tested their model on the proposed dataset and compared it with several baselines.\n\n\n----------\n\nOVERALL JUDGMENT\nThe paper is very well written and easy to read.\nI enjoyed reading this paper; I found the proposed architecture very well thought out for the proposed task.\nI would have liked to see a little more analysis of the results; it would be interesting to see which cases the model struggles with the most.\n\nI am wondering how the model would perform without the intermediate losses, i.e., the entity selection loss and the action selection loss.\nIt would also be interesting to see the impact of the amount of 'intermediate' supervision on the state change prediction.\n\nThe setup for generation is a bit unclear to me.\nThe authors mention encoding entity vectors with a biGRU; do the authors encode them in order of appearance in the text? Would it not be better to encode the entities with some structure-agnostic model like Deep Sets?\n\n", "The paper studies procedural language, which can be very useful in applications such as robotics or online customer support. The system is designed to model knowledge of the procedural task using actions and their effect on entities. The proposed solution incorporates a structured representation of domain-specific knowledge that appears to improve performance in two evaluated tasks: tracking entities as the procedure evolves, and generating sentences to complete a procedure. The method is interesting and presents a good amount of evidence that it works, compared to relevant baseline solutions. \n\nThe proposed tasks of tracking entities and generating sentences are also interesting given the procedural context, and the authors introduce a new dataset with dense annotations for evaluating this task. Learning happens in a weakly supervised manner, which is very interesting too, indicating that the model introduces the right bias to produce better results.\n\nThe manual selection and curation of entities for the domain are reasonable assumptions, but may also limit the applicability or generality from the learning perspective. This selection may also explain part of the better performance, as the right bias is not just in the model, but in the construction of the \"ontologies\" to make it work.", "--- Minor Comments ---\nWe appreciate the reviewer's thought-provoking questions about the impact of our model. 
We've updated the paper to extend the qualitative evaluation with additional examples and to clarify where our approach differs from the goals of general-purpose memory models such as Recurrent Entity Networks. We thank the reviewer for pointing out additional baselines and ablations to run to show the importance of the components of the model and will update the paper to incorporate them as we get the results. Finally, we appreciate the reviewer's comments pointing out minor corrections to be made in the paper, and have incorporated them in the revised version.\n\nBelow, we address minor comments made by the reviewer that were not addressed in the paragraphs above.\n\np3: 2.4: I have the impression what you are describing here is known in the literature as entity linking.\n\nAssuming the reviewer is referring to the recurrent attention paragraph, we think coreference resolution would be a more accurate analogue to the task being handled, as the goal of the recurrent attention mechanism is to tie connections between entity changes in the text without the use of an external KB. However, coreference tasks are defined only over explicitly mentioned entities in the text, while our task requires reasoning about implicit mentions as well, e.g., \"Add water to the pot. Boil for 30 minutes\" (where the implicit argument of Boil is water).\n\np3 Eq.3: Isn't c3*0 always a vector of zeros?\n\nYes, we included this option in the choice distribution as an easy short-circuit for the model to choose to include no entities in a particular step. \n\np4 Eq.6: W4 is an order-3 tensor, correct?\n\nYes, W_4 is a bilinear projection tensor between the action embedding and the entity embedding. We've clarified this in the new version of the paper.\n\np4 Eq.8: What are YC and WC here and what are their dimensions? I am confused by the softmax, as my understanding (from reading the paragraph on the Action Selection Loss on p.5) was that the expression in the softmax here is a scalar (as it is done for every possible action), so this should be a sigmoid to allow for multiple actions to attain a probability of 1?\n\nThe softmax here predicts the end state for each state change in the lexicon. Each state change is predicted individually, so Y_c corresponds to the end state being predicted for an individual state change. W_c corresponds to a projection for each individual state change. Each state predictor is a separate multi-class classifier that predicts the end state of the entity from the output of the action applicator. These predictors are trained using the State Change loss in section 5. Actions are selected by the sigmoid in Equation 1.\n\np6: \"We encode these vectors using a bidirectional GRU\" – I think you are composing a fixed-dimensional vector from the entity vectors? What's eI?\n\neI is the concatenation of the final time-step hidden states from encoding the entity state vectors in both directions using a bidirectional GRU.\n\np7: For which statement is (Kim et al. 2016) the reference? Surely, they did not invent the Hadamard product.\n\nKim et al. used the Hadamard product to jointly project two input representations in multimodal learning. We used their citation as a motivation for our decision to jointly project signal from the entity state vectors and the word context representation. We've removed the citation to get rid of this ambiguity.\n", "\n--- Learning Causality-aware Action Embeddings ---\nWe apologize for the confusion about the action selector pretraining. 
What we meant in this case is that the MLP used to select action embeddings is pretrained. The action embeddings themselves are learned jointly with the entity selector, the simulation module, the state change predictors and the sentence encoder. \n\nWe included Table 5 to show that our action embeddings model semantic similarity between real-world actions. While word2vec embeddings would, no doubt, capture lexical semantics between these actions, the neural process network learns commonalities between actions that aren't as extractable with a word2vec model. Looking at the action embeddings for \"bake\", \"boil\", and \"pour\", for example, we list the cosine similarities between pairs below:\n\nSkipgram:\nboil - bake → 0.329\nboil - pour → 0.548\n\nNPN:\nboil - bake → 0.687\nboil - pour → -0.119\n\nThe NPN learns action embedding representations based on the state changes those actions induce, as opposed to the local context windows around the mentions of the action in text, thereby encoding different semantics in the learned representation. While we did not use pretrained skipgram embeddings to initialize the action embeddings in our work, it is possible that including them when training our model might even lead to better results on our task, as the action embeddings could encode elements of both lexical (word2vec) and frame (NPN) semantics. Conversely, we would argue that using only pretrained action embeddings from a word2vec model with no additional training would cause the bilinear matrix from the simulation module to have to learn the simulation mapping functions on its own, which would make the model less expressive. We will include both additional ablations in our final paper.\n\nFor the moment, the action selector learns from distant supervision (string matching in each sentence is used to extract verb mention(s) as labels), but the model is designed to generalize beyond this signal. For example, in the sentence \"Boil the water in the pot\", the model is designed to be able to select a composite action that includes an action such as f_put, because boiling water involves moving the location of water to the pot. For the moment, we initialize a single action embedding for each verb in the lexicon and let the model learn to map sentences to a mask over these action embeddings. We agree with the reviewer, however, that it would be an interesting investigation to make the action embeddings \"implicit\", letting the model learn to select combinations of elementary actions. This approach is one of our current avenues of future work and could have the effect of generalizing the model similarly to the un-tied version of the REN. \n\nThe reviewer makes a good point about including an NER baseline in the evaluation, and we will include it in the final paper. We don't anticipate the performance being much stronger than the GRU baseline, however, since current NER systems can only identify entities that are directly mentioned in the text, thereby missing elided, coreferent and composite mentions.\n", "Because our response was longer than 5000 characters, we separate our response into multiple parts to break the writing at natural breaks in the response.\n\n--- Motivation with Respect to Related Work ---\nWe thank the reviewer for the question that prompts us to better clarify the key differences between previous approaches based on datasets such as bAbI, and the task proposed in our study. 
The motivation of our work is to probe a research direction where we make use of naturally existing text with no gold labels, and investigate the role of the modular architecture and intermediate loss functions (with distant supervision) for learning latent action dynamics. In sum, the key contributions of our work are (1) to introduce a new task and dataset (including detailed annotations for evaluation) that bring up unique challenges that previous datasets did not cover, and (2) to propose a new model that is better suited for this new challenge of reasoning about action dynamics. \n\nAs such, our newly introduced task actually complements work on densely labeled datasets such as bAbI. The bAbI dataset is synthetically constructed such that training labels cover the full spectrum of the semantics the model is expected to learn (i.e., the # of training instances is extremely high relative to the # of words/concepts involved). When the training set provides sufficient inductive signals, it is possible to train an end-to-end model to extract the complex relations needed to do well on the task, and Recurrent Entity Networks are among the best-architected models for this. In our task, because the dataset alone does not provide sufficient inductive signals (only 875 recipes are densely labeled for evaluation), we investigate methods to provide better inductive biases using intermediate losses guided by distant supervision. We view both types of research directions --- integrating inductive biases into datasets (bAbI) vs. models (NPN) --- as important to pursue. They are complementary to each other, and our work focuses on the latter, which has been relatively less explored in the existing literature.\n\nCBT is a cloze task based on children's stories. While CBT is based on real natural language text like ours, the nature of the task differs greatly from ours in that answering the cloze task often requires remembering the surface patterns associated with each entity throughout the story excerpt (as has also been suggested by the Window Encoding used by prior approaches on this task). In contrast, our task focuses on the unspoken causal effects of actions, rather than explicitly mentioned descriptions about entities. \n\nGiven the key differences between our task and others, it seems beyond the scope of this work to require our model to outperform on all other tasks with different modeling requirements. That said, we are happy to include a detailed and insightful narrative about these differences in our revision, along with side-by-side performance comparisons. At this time, our conjecture is that updating entity states only through action application is likely to be too restrictive for the CBT and bAbI tasks, where remembering surface patterns without corresponding actions is crucial. However, our neural process networks can be easily extended to directly connect the sentence encoding to the simulation module (a minor change to one equation), in order to allow for updating entity representations even when no explicit actions are associated.\n\n--- Qualitative Examples ---\nThe examples in all tables were taken from the development set. We chose these examples to provide interesting case studies on some of the patterns that the model is able to learn by reading text and simulating the underlying world. We agree with the reviewer that an analysis of failure cases should have been included in the original submission and have updated the paper to include examples of similarly interesting cases the model misses. 
We intend to expand the model’s capabilities to capture these in future work.\n", "We thank the reviewer for their positive feedback. We share the same excitement about the potential for knowledge-guided architectures that simulate world dynamics.\n\nWe’ve edited the paper to show more analysis examples. We’d originally shown examples that presented interesting case studies on the model’s capabilities. We’ve now added other interesting cases that the model fails to handle, but that future simulators would need to capture to correctly model the domain.\n\n--- Intermediate Losses for learning with Distant Supervision ---\nThanks for the question about the impact of the intermediate modular loss as that was one of the key investigation points of our work: whether a neural network trained with a single loss (with distant supervision) could learn the internal dynamics of the task, or whether adding additional losses as guides (with additional distant supervision) would promote the architected inductive biases. This investigation point is a direct consequence of the fact that we do not assume a manually constructed dataset that provides sufficient annotated labels that support directly learning implied action dynamics. Instead, we make use of naturally existing data as is, and investigate the role of the modular architecture and distantly supervised intermediate losses for learning latent structure. \n\nTo provide more detailed comments about the intermediate loss: without the entity selection and the action selection loss, the model would not learn the necessary bias to use the correct actions and entities in predicting the final states. Pretraining the action selector was also especially useful as it allowed the model to use the correct action embeddings when predicting the state changes that were happening in each step. This allowed errors in predicting the final states to be backpropagated to the correct action embeddings from the start. \n\nWe also think it’s an interesting question to see how many examples the model must see during training to learn to select entities and simulate state changes. We thought about including experiments that randomly dropped a percentage of the training set labels and will add these ablations in the final paper. \n\n--- Generation Modeling Variations ---\nWe appreciate the reviewer’s suggestion for using deep sets to encode the state vectors and agree that it seems like a better modeling fit at an intuitive level. While we did not try deep sets as an encoding method, in our pilot study, we explored several attention mechanisms over both the context words and the entity state vectors, and we found that the simple sequential encoding leads to the best performance, a conclusion that had also been found in prior work (Kiddon et al. 2016). We will look into deep sets as an encoding mechanism and report it in the final paper if helpful.\n", "\n--- Architectures with modular prior knowledge representations --- \nIt is correct that our method assumes that predefined sets of entities, actions, and their causal effects are given before initializing the simulation environment. One motivation behind this design choice is to investigate more explicit and modular representations of the world (i.e., entities, actions, and their causal effects), abstracting away from specific words that appeared in the input text. 
We postulated that this modular architecture would better support integration of prior knowledge about actions and their causal effects, which can be viewed as part of the common sense knowledge people start with that biases how they read and interpret text. We agree with the reviewer that an interesting future research direction would be fully automatic acquisition of ontological knowledge, which we felt was beyond the scope of this paper. \n\n--- Mostly automatic acquisition of prior knowledge --- \nWe also wonder whether there might have been slight confusion about how we acquire the predefined sets of entities, actions, and their causal effects. Importantly, for the training set, we acquire entities and actions automatically from the training corpus. We manually annotated entities and actions only for the purpose of evaluation, but do not use them during training. It is correct that we manually curate the handful of dimensions of action causality, however, primarily because there does not seem to be an easy way to acquire them automatically.\n" ]
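The bilinear action applicator discussed in the responses above (the order-3 tensor W_4 of Eq. 6) can be sketched in a few lines. The dimensions, the ReLU, and all variable names below are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_act, d_ent = 32, 32  # assumed action / entity embedding sizes

# Order-3 tensor mixing an action embedding with an entity state vector,
# in the spirit of the W_4 discussed above; in the model it would be
# trained jointly with the selectors and state change predictors.
W4 = rng.normal(scale=0.1, size=(d_act, d_ent, d_ent))
b4 = np.zeros(d_ent)

def apply_action(action, entity):
    """Simulate executing `action` on `entity` and return the new state.

    The einsum contracts the action (index a) and the current entity
    state (index j) against the bilinear tensor, producing each new
    state dimension (index i).
    """
    new_state = np.einsum("a,aij,j->i", action, W4, entity) + b4
    return np.maximum(new_state, 0.0)  # assumed ReLU nonlinearity

# Stand-ins for a learned action embedding and an entity state vector.
a_boil = rng.normal(size=d_act)
e_water = rng.normal(size=d_ent)
e_water = apply_action(a_boil, e_water)  # entity state after "boil"
```

Under this reading, and per the authors' clarification above, pretraining the action selector only fixes how actions are chosen; the action embeddings and the bilinear tensor still receive gradients jointly from the entity selection, action selection, and state change losses.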
[ 6, 9, 8, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJYFzMZC-", "iclr_2018_rJYFzMZC-", "iclr_2018_rJYFzMZC-", "HyGietbmz", "ryq_xY-7M", "SJUEXlDxf", "r1Hu15Kxz", "rJQcSB5gG" ]
iclr_2018_BkeqO7x0-
Unsupervised Cipher Cracking Using Discrete GANs
This work details CipherGAN, an architecture inspired by CycleGAN used for inferring the underlying cipher mapping given banks of unpaired ciphertext and plaintext. We demonstrate that CipherGAN is capable of cracking language data enciphered using shift and Vigenere ciphers to a high degree of fidelity and for vocabularies much larger than previously achieved. We present how CycleGAN can be made compatible with discrete data and train in a stable way. We then prove that the technique used in CipherGAN avoids the common problem of uninformative discrimination associated with GANs applied to discrete data.
accepted-poster-papers
this work adapts cycle GAN to the problem of decipherment with some success. it's still an early result, but all the reviewers have found it to be interesting and worthwhile for publication.
test
[ "S1skfxRxM", "r1TBz6I4M", "SykysFulM", "ryn4mW9ef", "HkGl-GcZf", "By1ReMc-M", "HkWfxz9WM", "By7yez5Zz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "SUMMARY\n\nThe paper considers the problem of using cycle GANs to decipher text encrypted with historical ciphers. Also it presents some theory to address the problem that discriminating between the discrete data and continuous prediction is too simple. The model proposed is a variant of the cycle GAN in which in addition embeddings helping the Generator are learned for all the values of the discrete variables. \nThe log loss of the GAN is replaced by a quadratic loss and a regularization of the Jacobian of the discriminator. Experiments show that the method is very effective. \n\nREVIEW\n\nThe paper considers an interesting and fairly original problem and the overall discussion of ciphers is quite nice. Unfortunately, my understanding is that the theory proposed in section 2 does not correspond to the scheme used in the experiments (contrarily to what the conclusion suggest and contrarily to what the discussion of the end of section 3, which says that using embedding is assumed to have an equivalent effect to using the methodology considered in the theoretical part). Another important concern is with the proof: there seems to be an unmotivated additional assumption that appears in the middle of the proof of Proposition 1 + some steps need to be clarified (see comment 16 below).\nThe experiments do not have any simple baseline, which is somewhat unfortunate.\n\n\nDETAILED COMMENTS:\n\n1- The paper makes a few bold and debatable statements:\n\nline 9 of section 1\n\"Such hand-crafted features have fallen out of favor (Goodfellow et al., 2016) as a\nresult of their demonstrated inferiority to features learned directly from data in end-to-end learning\nframeworks such as neural networks\"\n\nThis is certainly an overstatement and although it might be true for specific types of inputs it is not universally true, most deep architectures rely on a human-in-the-loop and there are number of areas where human crafted feature are arguably still relevant, if only to specify what is the input of a deep network: there are many domains where the notion of raw data does not make sense, and, when it does, it is usually associated with a sensing device that has been designed by a human and which implicitly imposes what the data is based on human expertise. \n\n2- In the last paragraph of the introduction, the paper says that previous work has only worked on vocabularies of 26 characters while the current paper tackles word level ciphers with 200 words. But, isn't this just a matter of scalability and only possible with very large amounts of text? Is it really because of an intrinsic limitation or lack of scalability of previous approaches or just because the authors of the corresponding papers did not care to present larger scale experiments? \n\n\n3- The discussion at the top of page 5 is difficult to follow. What do you mean when you say \"this motivates the benefits of having strong curvature globally, as opposed to linearly between etc\"\nWhich curvature are we talking about? and what how does the \"as opposed to linearly\" mean? Should we understand \"as opposed to having curvature linearly interpolated between etc\" or \"as opposed to having a linear function\"? Please clarify.\n\n4- In the same paragraph: what does \"a region that has not seen the Jacobian norm applied to it\" mean? How is a norm applied to a region? I guess that what you mean is that the generator G might creates samples in a part of the space where the function F has not yet been learned and is essentially close to 0. 
Is this what you mean?\n\n5- I do not understand why the paper introduces WGAN since in the end it does not use it but uses a quadratic loss, introduced in the first display of section 4.3.\n\n6- The paper makes a theoretical contribution which supports replacing the sample y by a sample drawn from a region around y. But it seems that this is not used in the experiments and that the authors consider the introduction of the embedding a substitution for this. Indeed, in the last paragraph of section 3.1, the paper says \"we make the assumption that the training of the embedding vectors approximates random sampling similar to what is described in Proposition 1\". This does not make any sense to me because the embedding vectors map each y deterministically to a single point, and so the distribution on the corresponding vectors is still a fixed discrete distribution. This gives me the impression that the proposed theory does not match what is used in the experiments.\n(The last sentence of section 3.1, which is commenting on this and could perhaps clarify the situation, is ill-formed with two verbs.)\n\n7- In the definitions: \"A discriminator is said to perform uninformative discrimination\" etc. -> It seems that the choice of the word uninformative could be misleading: an uninformative discrimination would be a discrimination that completely fails, while what the condition is saying is that it cannot perform perfect discrimination. I would thus suggest calling this \"imperfect discrimination\". \n\n\n8- It seems that the same embedding is used in X space and in Y space (from equations 6 and 7). Is there any reason for that? It would seem more natural to me to introduce two different embeddings since the objects are a priori different...\nActually I don't understand how the embeddings can be the same in the Vigenere cipher case since time is taken into account on one side.\n\n9- On the 5th line after equation (7), the paper says \"the embeddings... are trained to minimize L_GAN and L_cyc, meaning... and are easy to discriminate\" -> This last part of the sentence seems wrong to me. The discriminator is trying to maximize L_GAN, and so minimizing w.r.t. the embedding is precisely trying to prevent the discriminator from telling apart too easily the true elements from the estimated ones.\nIn fact the regularization of the Jacobian, which will be preventing the discriminator from varying too quickly in space, is more likely to explain the fact that the discrimination is not too easy to do between the true and mapped embeddings. This might be connected to the discussion at the top of page 5. Since there are no experiments with alpha different from the default value = 10, this is difficult to assess.\n\n10- The Vigenere cipher is explained again at the end of section 4.2 when it has already been presented in section 1.1.\n\n11- Concerning results in Table 2: I do not see why it would not be possible to compare the performance of the method with classical frequency analysis, at least for the character case.\n\n12- At the beginning of section 4.3, the text says that the log loss was replaced with the quadratic loss, but without giving any reason. Could you explain why?\n\n13- The only comparison of results with and without embeddings is presented in the curves of figure 3, for Brown-W with a vocabulary of 200 words. In that case it helps. Could the authors report results systematically for all cases? 
(I guess this might however be the only hard case...)\n\n14- It would be useful to have a brief reminder of the architecture of the neural network (right now the reader is just referred to Zhu et al., 2017): how many layers, how many convolution layers, etc.\nThe same comment applies to the way the position of the letter/word in the text is encoded in a feature that is provided as input to the neural network: it would be nice if the paper could provide a few details here and be more self-contained. (The fact that the engineering of the time feature can \"dramatically\" improve the performance of the network should be an argument to convince the authors that hand-crafted features have not fallen out of favor completely yet...)\n\n15- I disagree with the statement made in the conclusion that the proposed work \"empirically confirms [...] that the use of continuous relaxation of discrete variable facilitates [...] and prevents [...]\" because for me the proposed implementation does not use at all the theoretical idea of continuous relaxation proposed in the paper, unless there is a major point that I am missing.\n\n\n16- I have two issues with the proof in the appendix.\n\na) After the first display of the last page the paper makes an additional assumption which is not announced in the statement of the theorem, namely that two specific inequalities hold...\nUnless I am mistaken this assumption is never proven (later or earlier). Given that this inequality is just \"the right inequality to get the proof to go through\" and given that there is no explanation for why this assumption is reasonable, to me this invalidates the proof. The step of going from G(S_y) to S_(G(y)) seems delicate...\n\nb) If we accept these inequalities, the determinant of the Jacobian (the notation is not defined) of F at (x_bar) disappears from the equations, as if it could be assumed to be greater than one. If this is indeed the case, please provide a justification of this step.\n\n17- A way to address the issue of trivial discrimination in GANs with discrete data has been proposed in\n \nLuc, P., Couprie, C., Chintala, S., & Verbeek, J. (2016). Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408.\nThe authors should probably reference this paper.\n\n\n18- Clarification of the Jacobian regularization: in equation (3), the Jacobian seems to be computed w.r.t. D composed with F while in equation (8) it is only the Jacobian of D. Which equation is the correct one?\n\nTYPOS:\n\nProposition 1: the if-then statement is broken into two sentences separated by a full point and a carriage return.\n\nsec. 4.3 line 10: we use a cycle loss *with a regularization coefficient* lambda=1 (a piece of the sentence is missing)\n\nsec. 4.3 lines 12-13: the learning rates given are the same at startup and after \"warming up\"...\n\nIn the appendix: \n3rd line of proof of prop 1: I don't understand \"countably infinite finite sequences of vectors lying in the vertices of the simplex\" -> what is countably infinite here? The vertices?\n", "I appreciate the responses to my points made in the rebuttal. They address my various concerns quite well and the updated version of the paper is quite compelling.\n\nHere are a few comments in response to the rebuttal:\n\nI found the response to my point 2 quite interesting and worth including in the paper.\n\nAbout point 6: Thank you for this explanation. 
If I understand correctly, you are trusting that the stochastic updates \"simulate\" some random sampling of the points. But in that case why not also consider the method corresponding exactly to the theory, in which the embeddings would actually be sampled just once in a neighborhood as preprocessing, as opposed to being learned (or resampled at each visit of a datapoint)? It would seem a reasonable baseline + a reasonable validation of the theory...\n\n7. In my opinion: \"Partially uninformative\" sounds good. \"Uninformative\" sounds a bit too strong. But I would not want to necessarily impose this.\n\n8. Yes, I would find it useful to have the information that you also tested the formulation with different embedding spaces.\n\nIn terms of the proof in appendix B, I would find it useful to have a discussion of why the assumption of equation (6) is reasonable...", "The paper shows an application of GANs to deciphering text. The goal is to arrive at a \"hands free\" approach to this problem; i.e., an approach that does not require any knowledge of the language being deciphered, such as letter frequencies. The authors start from a CycleGAN architecture, which may be used to learn mappings between two probability spaces. They point out that using GANs for discrete distributions is a challenging problem since it can lead to uninformative discriminators. They propose to resolve this issue by using a continuous embedding space to approximate (or convert) the discrete random variables into continuous random variables. The new proposed algorithm, called CipherGAN, is then shown to be stable and to achieve deciphering of substitution ciphers and Vigenere ciphers.\n\nI did not completely understand how the embedding was performed, so perhaps the authors could elaborate on that a bit more. Apart from that, the paper is well written and well motivated. It uses some recent ideas in deep learning, such as CycleGANs, and shows how to tweak them to make them work for discrete problems and also make them more stable. One comment would be that the paper is decidedly an applied paper (and not much theory) since certain steps in the algorithm (such as training the discriminator loss along with the Lipschitz conditioning term) are included because they were experimentally observed to lead to stability. ", "The paper proposes to replace the 2-dim convolutions in CycleGAN by a one-dimensional variant and reduce the filter sizes to 1, while keeping the generator's convex embedding and using an L2 loss function. \n\nThe proposed simple changes help with handling discrete GANs. The benefit of increased stability from adding a Jacobian norm regularization term to the discriminator's loss is nice. \n\nThe paper is well written. A few minor points to improve: \n* The original GAN was proposed/stated as min_max, while Equation 1 didn't define F and was not clear about min_{F}. Similar for Equations 2 and 3. \n* Define abbreviations when they first appear, e.g., WGAN (Wasserstein ...). \n* Clarify the x- and y-axis labels in Figure 3. ", "8. Indeed, we do use the same space for X and Y (it's only the distribution over the spaces that changes). This is possible since in shift and Vigenere we replace each token of the input space with a different token from the same space. I.e., \"ABC\" -> \"GHI\" for a shift or \"ABC\" -> \"GGI\" for Vigenere with key \"656\". This is why we use the same embeddings for both the plaintext data space and ciphertext space. 
Perhaps we should note that we did experiment with separating the embedding spaces and found little improvement?\n\n9. We have corrected to ‘maximizing’ L_GAN. We analyze the effects of Jacobian regularization in Section 2 as well as in Figure 3 (right). You are correct that Jacobian regularization certainly helps with the problem and we cite three papers which mention this; but in our experiments (see Figure 3 comparison between embeddings and softmax) we found that Jacobian regularization was complemented by our relaxed sampling technique.\n\n10. Thank you for pointing this out; we have removed Section 4.2.\n\n11. We completely agree that a comparison to standard frequency analysis should be shown and have added this to the table. As is made clear, CipherGAN outperforms frequency analysis by a large margin (>20%). CipherGAN was able to crack all ciphers to nearly flawless accuracy (save Vigenere Brown 200, which is an extremely difficult case we use to stress test the technique).\n\n12. Absolutely; we have added some of the details from Mao et al.\n\n13. You are correct, the only relevant experiment was on Vigenere with Brown 200 since it challenged the network’s ability the most and exposed the divergence in performance between the two techniques.\n\n14. We have added a full description of the architecture in the Appendix.\n\n15. Hopefully the previous clarification resolves this critique.\n\n16. Both your points are correct; the previous version of the paper had a proof that was ‘in between’ two directions, one being an analogue to Ian’s proof in the original GAN paper, and the other being an asymptotic argument that ended up being more elegant and easy to follow. In the updated paper we hope you find the new proof clearly articulated and thoroughly justified. Your point in a) about G(S_y) to S_(G(y)) appears in Lemma 1; the note about inequalities is clarified using Corollary 2; the note about the Jacobian is now stated in the premises of the proposition and we now only require the Jacobian to be near 1 and show that as it approaches 1 the upper and lower bounds squeeze to the same maximal value in the same place.\n\n17. Thank you, we have added a citation in the discrete GAN section.\n\n18. Very good catch, thank you. We have corrected Eq. 3.\n\nWe have also addressed the typos pointed out by the reviewer.\n\nThe reviewer’s principal concern seems to stem from the assumption of embeddings approximating sampling. We hope our clarification that our embeddings are non-fixed points, and that experiments with Concrete samples produce nearly indistinguishable results, gives the reviewer confidence in our methods. Additionally we hope the new proof convinces the reviewer and addresses the previous concerns (which arose from the proof being incomplete at the time of submission). We hope that the reviewer finds confidence in both the theoretical contributions and the success of the experiments in order to raise the rating to one of acceptance.\n\nAgain, we sincerely appreciate such a detailed and exemplary critique of our work. Please inform us of any other changes that would improve our work.
We completely agree that this was an overstatement and have replaced the line with the following: \"Across a number of domains, the use of hand-crafted features has often been replaced by automatic feature extraction directly from data using end-to-end learning frameworks (Goodfellow et al., 2016).\" We qualify the statement, restricting it to ‘a number of’ domains of application, while acknowledging that automated feature extraction is not ubiquitous and removing any notion of superiority/inferiority of techniques.\n\n2. The issue of scalability is indeed an important one, as past algorithms have scaled exponentially in the length of the key and the size of the vocabulary. Prior work has generally relied upon ngram frequencies whose space grows exponentially with the vocabulary size, rapidly sparsifying occurrences and leading to rapidly decreasing information in these statistics. The other facet of increasing vocabulary size is the applicability to more modern techniques such as block ciphers where the vocabulary is expanded to hundreds or thousands of elements. We intended to show that our method doesn’t completely collapse as the vocab space increases (while frequency analysis rapidly does, as we show in new baseline comparisons). We feel this is a valuable and important feature of the work.\n\n3. We clarified the discussion, making it a little less verbose and trying to improve the flow of ideas. The curvature we refer to is that of the discriminator output with respect to its inputs (we’ve tried to clarify this); the curvature of this region is important since it represents the strength of the training signal received by the generator. We were trying to make the point that WGAN’s method of regularizing between the generated data and the true data may miss regions of the simplex that our model regularly traverses. We also note that others have pointed this out and have found benefits of regularizing more ‘globally’ or more ‘broadly’ across the simplex (by this we mean other than exclusively between generated data and true data; we have tried to make this clearer in the paper). We hope the changes are an improvement and that it reads more intelligibly. Thank you for raising this concern.\n\n4. Thank you for catching this, that was indeed a typo. We’ve corrected it to clarify that it is the curvature regularization being applied (i.e., the WGAN regularization technique of forcing the norm of the Jacobian to 1).\n\n5. We introduce WGAN because it is the inspiration for the last term of our L_GAN loss. The key insight we draw from WGAN is their use of a Jacobian norm regularization term (we also refer to it as ‘curvature regularization’ since it is clearer). \n\n6. Thank you for pointing this out; we’ve done our best to make the precise use in our paper clear. The embeddings are not fixed during training; instead, they are parameters that are tuned throughout training. It is the stochasticity of these points that leads us to the suggestion that these points estimate random samples around fixed points. We came to this conclusion after observing that, as training progresses, the embeddings appear to ‘settle’ and remain bound within a tight region, yet are still moving. Perhaps an analogy to Hamiltonian MCMC or Metropolis-adjusted Langevin sampling as a comparison between noisy gradient updates and gradient-based sampling would improve the argument? We’ve updated the paper to include a clearer motivation of why we suggest jointly-trained embedding vectors might approximate sampling about fixed points. 
We’ve updated the last sentence of section 3.1 to clarify precisely how we arrived at our conclusion.\n\n7. We refer to this behaviour of the discriminator as uninformative since it says: if we can re-discretize an element to the correct token, but the discriminator evaluates it as incorrect, then the discriminator is not informing on the underlying task when acting on the continuous space. We don’t mean to say that it is ‘entirely’ uninformative of the task, only that it demonstrates uninformative behaviour. We are willing to update the name to ‘partially uninformative discrimination’ if the reviewer feels this is an abuse of language.\n\nContinued in next comment.
[ 7, -1, 7, 8, -1, -1, -1, -1 ]
[ 4, -1, 1, 4, -1, -1, -1, -1 ]
[ "iclr_2018_BkeqO7x0-", "HkGl-GcZf", "iclr_2018_BkeqO7x0-", "iclr_2018_BkeqO7x0-", "By1ReMc-M", "S1skfxRxM", "ryn4mW9ef", "SykysFulM" ]
iclr_2018_Sy-dQG-Rb
Neural Speed Reading via Skim-RNN
Inspired by the principles of speed reading, we introduce Skim-RNN, a recurrent neural network (RNN) that dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens. Skim-RNN gives a significant computational advantage over an RNN that always updates the entire hidden state. Skim-RNN uses the same input and output interfaces as a standard RNN and can be easily used instead of RNNs in existing models. In our experiments, we show that Skim-RNN can achieve significantly reduced computational cost without losing accuracy compared to standard RNNs across five different natural language tasks. In addition, we demonstrate that the trade-off between accuracy and speed of Skim-RNN can be dynamically controlled during inference time in a stable manner. Our analysis also shows that Skim-RNN running on a single CPU offers lower latency compared to standard RNNs on GPUs.
accepted-poster-papers
this submission proposes an efficient parametrization of a recurrent neural net by using two transition functions (one large and one small) to reduce the amount of computation (though without actual improvement on GPU). the reviewers viewed the submission very positively. please do not forget to include all the results and discussion on the proposed approach's relationship to VCRNN, which was presented at the same conference just a year ago.
train
[ "BkjQVC8Sz", "H1dhmCIrz", "rJiTSpSSf", "BkXOd1q4G", "r1izCPYlG", "HJpgrTKxf", "rkZtyy5gf", "HyDWxCXNf", "SyMwQNmEG", "SkC43pf4G", "ByGiVpuQM", "BynOEa_Qf", "SJNL4ad7M", "SJDcXGVJM", "SytImi7kG", "BycR50MyG" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "NT", "We note that we could not increase FLOP reduction of VCRNN by controlling the hyperparameters on SQuAD. Also, VCRNN performs worse than vanilla RNN (LSTM) without any gain in FLOP reduction, which we believe is due to the difficulty in training (biased gradient, etc.).\n\nWe believe that this supports our claim that Skim-RNN has some crucial advantages over VCRNN that we discussed in our previous comment (and in the related work of the current revision).\n", "In addition to the SST experiment that we reported previously, we also just finished experimenting VCRNN on SQuAD.\nUsing RNN+Att explained in the paper as a base model (where the RNN is replaced with either Skim-LSTM or VCRNN), VCRNN obtained F1=74.9% and EM=65.4% with very little FLOP reduction (less than 1.01x), which is worse than F1=75.7% and EM=66.7% with the FLOP reduction of 1.4x by Skim-LSTM.\n\nWe made a revision to the paper that includes these discussions and the experimental results of VCRNN.\n\nEDIT on Jan 25: We reported a wrong number for VCRNN's FLOP reduction; it should be <1.01x, not 1.4x.", "Thank you for the comment, and as you suggested, we report a comparison between Skim-RNN and VCRNN on Stanford Sentiment Treebank (SST). Since VCRNN’s accuracy had a high variance, we ran the experiment 5 times for both models with different random initialization of the weights. Skim-RNN (d=100, d’=5) obtained an average accuracy of 85.6% (std=0.47%, max=86.4%) with average FLOP reduction of 2.4x, while VCRNN obtained an average accuracy of 81.9% (std=4.91%, max=85.7%) with average FLOP reduction of 2.6x. So there is a clear advantage of Skim-RNN over VCRNN on average accuracy (3.7% diff), max accuracy (0.7% diff), and stability (std) with similar FLOP reduction. We will be working on the rest of the tasks and make sure to report the results in a future revision. ", "The paper proposes a way to speed up the inference time of RNN via Skim mechanism where only a small part of hidden variable is updated once the model has decided a corresponding word token seems irrelevant w.r.t. a given task. While the proposed idea might be too simple, the authors show the importance of it via thorough experiments. It also seems to be easily integrated into existing RNN systems without heavy tuning as shown in the experiments. \n\n* One advantage of proposed idea claimed against the skip-RNN is that the Skim-RNN can generate the same length of output sequence given input sequence. It is not clear to me whether the output prediction on those skimmed tokens is made of the full hidden state (updated + copied) or a first few dimensions of the hidden state. I assume that the full hidden states are used for prediction. It is somehow interesting because it may mean the prediction heavily depends on small (d') part of the hidden state. In the second and third figures of Figure 10, the model made wrong decisions when the adjacent tokens were both skimmed although the target token was not skimmed, and it might be related to the above assumption. In this sense, it would be more beneficial if the skimming happens over consecutive tokens (focus on a region, not on an individual token).\n\n* This paper would gain more attention from practitioners because of its practical purpose. In a similar vein, it would be also good to have some comments on training time as well. 
In a general situation where there is no need of re-training, training time would be meaningless, however, if one requires updating the model on the fly, it would be also meaningful to have some intuition on training time.\n\n* One obvious way to reduce the computational complexity of RNN is to reduce the size of the hidden state. In this sense, it makes this manuscript more comprehensive if there are some comparisons with RNNs with limited-sized hidden dimensions (say 10 or 20). So that readers can check benefits of the skim RNN against skip-RNN and small-sized RNN.\n", "Summary: The paper proposes a learnable skimming mechanism for RNN. The model decides whether to send the word to a larger heavy-weight RNN or a light-weight RNN. The heavy-weight and the light-weight RNN each controls a portion of the hidden state. The paper finds that with the proposed skimming method, they achieve a significant reduction in terms of FLOPS. Although it doesn’t contribute to much speedup on modern GPU hardware, there is a good speedup on CPU, and it is more power efficient.\n\nContribution:\n- The paper proposes to use a small RNN to read unimportant text. Unlike (Yu et al., 2017), which skips the text, here the model decides between small and large RNN.\n\nPros:\n- Models that dynamically decide the amount of computation make intuitive sense and are of general interests.\n- The paper presents solid experimentation on various text classification and question answering datasets.\n- The proposed method has shown reasonable reduction in FLOPS and CPU speedup with no significant accuracy degradation (increase in accuracy in some tasks).\n- The paper is well written, and the presentation is good.\n\nCons:\n- Each model component is not novel. The authors propose to use Gumbel softmax, but does compare other gradient estimators. It would be good to use REINFORCE to do a fair comparison with (Yu et al., 2017 ) to see the benefit of using small RNN.\n- The authors report that training from scratch results in unstable skim rate, while Half pretrain seems to always work better than fully pretrained ones. This makes the success of training a bit adhoc, as one need to actively tune the number of pretraining steps.\n- Although there is difference from (Yu et al., 2017), the contribution of this paper is still incremental.\n\nQuestions:\n- Although it is out of the scope for this paper to achieve GPU level speedup, I am curious to know some numbers on GPU speedup.\n- One recommended task would probably be text summarization, in which the attended text can contribute to the output of the summary.\n\nConclusion:\n- Based on the comments above, I recommend Accept", "This paper proposes a skim-RNN, which skims unimportant inputs with a small RNN while normally processes important inputs with a standard RNN for fast inference.\n\nPros.\n-\tThe idea of switching small and standard RNNs for skimming and full reading respectively is quite simple and intuitive.\n-\tThe paper is clearly written with enough explanations about the proposal method and the novelty.\n-\tOne of the most difficult problems of this approach (non-differentiable) is elegantly solved by employing gumbel-softmax\n-\tThe effectiveness (mainly inference speed improvement with CPU) is validated by various experiments. 
The examples (Table 3 and Figure 6) show that the skimming process is appropriately performed (unimportant words are skimmed while relevant words are fully read, etc.).\nCons.\n-\tThe idea is quite simple and the novelty is incremental considering the difference from skip-RNN.\n-\tNo comments about computational costs during training with GPU (it would not increase the computational cost so much, but Gumbel-softmax may require more iterations).\n\nComments:\n-\tSection 1, Introduction, 2nd paragraph: ‘peed’ -> ‘speed’(?)\n-\tEquation (5): It would be better to explain why it uses the Gumbel distribution. To make (5) behave like argmax, the temperature parameter alone seems to be enough.\n-\tSection 4.1: What is “global training step”?\n-\tSection 4.2, “We also observe that the F1 score of Skim-LSTM is more stable across different configurations and computational cost.”: This seems to be a very interesting phenomenon. Is there some discussion of why Skim-LSTM is more stable?\n-\tSection 4.2, the last paragraph: “Table 6 shows” -> “Figure 6 shows”\n", "thanks for the detailed description, but they still do look quite similar. the \"partial update\" model is also not exactly what VCRNN does, in the sense that it's a very much crippled version of VCRNN without, e.g., saving any computation. it'll be important to carefully compare the full implementation of VCRNN against the skim-RNN on at least one task. after all, the VCRNN was proposed in the *same* venue just one year ago.\n\nplease feel free to make another revision however as early as you could.", "Thank you for mentioning a relevant paper that we missed!\n\nWe agree that both Skim-RNN and VCRNN (as well as LSTM-Jump) are concerned with dynamically controlling the computational cost of an RNN, and we will make sure to discuss VCRNN in our next revision. However, we would like to emphasize that there is a fundamental difference between them: VCRNN partially updates the hidden state (controlling the number of units to update at each time step), while Skim-RNN contains multiple RNNs that “share” a common hidden state with different regions on which they operate (choosing which RNN to use at each time step). This has two important implications.\n\nFirst, the nested RNNs in Skim-RNN have their own weights and thus can be considered as independent agents that interact with each other through the shared state. That is, Skim-RNN updates the shared portion of the hidden state differently (by using different RNNs) depending on the importance of the token, whereas the affected (first few) dimensions in VCRNN are identically updated regardless of the importance of the input. We argue that this capability of Skim-RNN could be a crucial advantage, based on the following initial observation. Instead of having two independent nested RNNs, we experiment with a single RNN and a binary decision function for whether to update the hidden state fully (100 dimensions) or partially (first 5 dimensions). On SST, the “partial update” model (similar to VCRNN but with a binary decision instead) underperformed Skim-RNN by 2.0% with a similar skimming rate.\n\nSecond, at each time step, VCRNN needs to make a d-way decision (where d is the hidden state size, usually hundreds), whereas Skim-RNN only requires a binary decision. This means that computing the exact gradient of VCRNN is even more intractable (d^L vs 2^L) than that of Skim-RNN, and subsequently the gradient estimation would be harder as well.\n\nExperimentally, the two papers focus on different domains. 
VCRNN experimented on one music modeling task and two (bit- and char-level) language modeling tasks, while we experimented on four language classification tasks and two question answering tasks. We would also like to note that the reviewers have acknowledged our diverse experiments, analyses, and visualizations that are useful to understand, verify and interpret our model, which we believe is a meaningful contribution towards the community’s effort on reducing the computational cost of RNNs.\n\nAs it is past the rebuttal period, we would like to know if we can make a revision to the submission, and if so when the deadline is. In the meantime, we will do our best to update the paper and/or provide additional results as soon as possible with VCRNN considered.\n", "Note: this is not an official meta-review\n\nthe idea in this paper looks very similar to the idea from <VARIABLE COMPUTATION IN RECURRENT NEURAL NETWORKS> which was presented at ICLR'17: https://arxiv.org/abs/1611.06188. Especially, looking at Fig. 1's of both papers clearly indicates the similarities between these two approaches.\n\ni'd like the authors to clarify how they differ, and would like to ask the reviewers to read https://arxiv.org/abs/1611.06188 and see how this affects your judgement of the submission.", "Thank you for your insightful and supportive comments; we discuss additional experiments inspired by your suggestions and make a few clarifications.\n\n\nSuggestions:\n- Training cost with GPU: Thank you for the suggestion, and we report training cost in two dimensions: memory and time. Assuming d/d'=100/20 on SQuAD, memory consumption is only ~5% more than that of the vanilla RNN. Since Skim-RNN needs to compute outputs for both RNNs (big and small) during training, it requires more time for the same number of training steps. For instance, on SQuAD, Skim-LSTM takes 8 hours of training whereas LSTM takes 5 hours until convergence. However, in terms of the number of training steps, they both require approximately 18k steps. \n\n- Why Gumbel-softmax and not just temperature: we used Gumbel-softmax mainly due to its theoretical guarantee shown in Jang et al. (2017). We experimented with temperature annealing only, and found that the accuracy is ~0.5% lower on SQuAD and convergence is a little slower. While there is some advantage of Gumbel-softmax, it seems temperature annealing is also an effective technique. \n\n- Typos: thank you for correcting them; we have fixed them in the current revision. \n\n\nClarifications:\n- Sec 4.1 global training step: We meant just “training step”, and we fixed it in the most recent revision.\n\n- Sec 4.2 stability: We actually meant that LSTM accuracy dips (is unstable) when we use a smaller hidden state size to reduce FLOP, while Skim-LSTM accuracy does not dip even with a high reduction in FLOP.\n", "Thank you for your insightful and supportive comments; we discuss additional experiments following your suggestions and make a few clarifications.\n\nSuggestions:\n- Other gradient methods: Thank you for your suggestion, and we found that REINFORCE substantially underperforms (less than 20% accuracy on SST) Gumbel-softmax within 50k steps of training. We suspect that this is due to the high variance of REINFORCE, which becomes even worse in our case where the sample space exponentially increases with the sequence length. We found that temperature annealing is not as bad as REINFORCE, but the accuracy is still ~0.5% lower than Gumbel-softmax and the convergence is slower. 
We will include this in any final version of the paper.\n\n\n- Text summarization: We agree that it is an appropriate application for Skim-RNN. We will consider it for potential future work.\n\n\nClarifications:\n- Ad-hoc training due to required pretraining: while pretraining definitely helps in QA, we would like to emphasize that no-pretraining still performs well (classification results are without pretraining), and there is no added cost of pretraining; that is, pretraining + finetuning has a similar training time to training from scratch. \n\n- GPU speedup: Theoretically, Skim-RNN could achieve a speedup on GPU. However, because parallelization has log-time cost, this would be negligible compared to other costs. \n", "Thank you for your insightful and supportive comments; we make a few clarifications and discuss additional experiments inspired by your suggestions.\n\nClarification:\n- Output of skimmed tokens: The output of skimmed tokens is the full hidden state (concatenating the updated and copied parts).\n\nSuggestions:\n- Focusing on a region: Thank you for your suggestion, and we will consider this approach in future work.\n\n- Training time: Since Skim-RNN needs to compute outputs for both RNNs (big and small) during training, it requires more time for the same number of training steps. For instance, in SQuAD, Skim-LSTM takes 8 hours of training whereas LSTM takes 5 hours until convergence. However, in terms of the number of training steps, they both require approximately 18k steps. We will include this in any final version of the paper.\n\n- Comparison with small hidden size: When the hidden size becomes 10 (from 100) for the vanilla RNN, there is a 3.4% accuracy drop in SST (7.1x less FLOP) and a 6.1% accuracy drop in Rotten Tomatoes (7.1x less FLOP). There is a clear trade-off between accuracy and FLOP when a smaller hidden size is used. We are currently experimenting with other datasets and will include them in the next revision.\n", "Hi,\nThanks for your response.", "Hi,\nThank you for your interest in our paper and your suggestion! \nI think copying 1 to d-d' all the time is equivalent to copying d'+1 to d (without loss of generality).\nSo I believe what you are suggesting is something like having two small RNNs that operate on different parts of the hidden state.\nIn fact, we also think it is an interesting direction to explore (we also mention this as potential future work in the conclusion), though we did not report it mainly because we have not observed a clear advantage from doing so in our experiments yet.", "It is a very interesting paper. I really enjoyed reading it. Actually, I have a naive question about the skimming part. According to Figure 1, the hidden state (d' + 1 to d) of the word \"and\" is copied from the previous hidden state directly, while the first part (1 to d') is updated by a smaller RNN. I am curious about this setting. Why did you update this model in this way? Can I copy the (1 to d - d') part and update the remaining part of this model? " ]
[ -1, -1, -1, -1, 7, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 3, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1dhmCIrz", "rJiTSpSSf", "iclr_2018_Sy-dQG-Rb", "HyDWxCXNf", "iclr_2018_Sy-dQG-Rb", "iclr_2018_Sy-dQG-Rb", "iclr_2018_Sy-dQG-Rb", "SyMwQNmEG", "SkC43pf4G", "iclr_2018_Sy-dQG-Rb", "rkZtyy5gf", "HJpgrTKxf", "r1izCPYlG", "SytImi7kG", "BycR50MyG", "iclr_2018_Sy-dQG-Rb" ]
iclr_2018_SyJS-OgR-
Multi-level Residual Networks from Dynamical Systems View
Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.
accepted-poster-papers
this submission proposes a learning algorithm for resnets based on their interpretation of them as a discrete approximation to a continuous-time dynamical system. all the reviewers have found the submission to be clearly written and well motivated, and to propose an interesting and effective learning algorithm for resnets.
test
[ "rJiJWZtHz", "rk40-nDlz", "SyuwCCKlz", "HJzVc2sxf", "HkrMrL2mG", "rk51SIhmz", "ByShEI27z", "SkSwN8n7M" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "> We are currently working on the experiments of ImageNet\n\nAny update on this front ?\nImproved ImageNet training time would significantly increase the impact of this paper.", "This paper interprets deep residual network as a dynamic system, and proposes a novel training algorithm to train it in a constructive way. On three image classification datasets, the proposed algorithm speeds up the training process without sacrificing accuracy. The paper is interesting and easy to follow. \n\nI have several comments:\n1.\tIt would be interesting to see a comparison with Stochastic Depth, which is also able to speed up the training process, and gives better generalization performance. Moreover, is it possible to combine the proposed method with Stochastic Depth to obtain further improved efficiency?\n2.\tThe mollifying networks [1] is related to the proposed method as it also starts with shorter networks, and ends with deeper models. It would be interesting to see a comparison or discussion. \n[1] C Gulcehre, Mollifying Networks, 2016\n3.\tCould you show the curves (on Figure 6 or another plot) for training a short ResNet (same depth as your starting model) and a deep ResNet (same depth as your final model) without using your approach?", "I enjoyed reading the paper. This is a very well written paper, the authors propose a method for speeding up the training time of Residual Networks based on the dynamical system view interpretation of ResNets. In general I have a positive opinion about the paper, however, I’d like to ask for some clarifications.\n\nI’m not fully convinced by the interpretation of Eq. 5: “… d is inversely proportional to the norm of the residual modules G(Yj)”. Since F(Yj) is not a constant, I think that d is inversely proportional to ||G(Yj)||/||F(Yj)||, however, in the interpretation the dependence on ||F(Yj)|| is ignored. Could the authors comment on that?\n\nSection 4. 1 “ Each cycle itself can be regarded as a training process, thus we need to reset the learning rate value at the beginning of each training cycle and anneal the learning rate during that cycle.” Is there any empirical evidence for this? What would happen if the learning rate is not reset at the beginning of each cycle? \n\nQuestions with respect to dynamical systems point of view: Eq. 4 assumes small value of h. However, for ResNet there is no guarantee that the h would be small (e. g. in Appendix C the values between 0.25 and 1 are used). Would the authors be willing to comment on the importance of the value of h? In figure 1, pooling (strided convolutions) are not depicted between network stages. I have one question w.r.t. feature maps dimensionality changes inside a CNN: how does pooling (or strided convolution) fit into dynamical systems view?\n\nTable 3 and 4. I assume that the training time unit is a minute, I couldn’t find this information in the paper. Is the batch size the same for all models (100 for CIFAR and 32 for STL-10)? I understand that the models with different #Blocks have different capacity, for clarity, would it be possible to add # of parameters to each model? For multilevel method, would it be possible to show intermediate results in Table 3 and 4, e. g. at the end of cycle 1 and 2? I see these results in Figure 6, however, the plots are condensed and it is difficult to see the exact number at the end of each cycle. 
\n\nThe citation (E, 2017) seems to be wrong; could the authors check it?\n", "\n\nThis paper proposes a new method to train residual networks in which one starts by training shallow ResNets, doubling the depth and warm starting from the previous smaller model in a certain way, and iterating. The authors relate this idea to a recent dynamical systems view of ResNets in which residual blocks are viewed as taking steps in an Euler discretization of a certain differential equation. This interpretation plays a role in the proposed training method by informing how the “step sizes” in the Euler discretization should change when doubling the depth of the network. The punchline of the paper is that the authors are able to achieve similar performance as “full ResNet training” but with significantly reduced training time.\n\nOverall, the proposed method is novel — even though this idea of going from shallow to deep is natural for residual networks, tying the idea to the dynamical systems perspective is elegant. Moreover the paper is clearly written. Experimental results are decent — there are clear speedups to be had based on the authors' experiments. However it is unclear if these gains in training speed are significant enough for people to flock to using this (more complicated) method of training.\n\nI only have a few small questions/comments:\n* A more naive way to do multi-level training would be to again iteratively double the depth, but perhaps not halve the step size. This might be a good baseline to compare against to demonstrate the value of the dynamical systems viewpoint.\n* One thing I’m unclear on is how convergence was assessed… my understanding is that the training proceeds for a fixed number of epochs (?) - but shouldn’t this also depend on the depth in some way? \n* Would the speedups be more dramatic for a larger dataset like Imagenet?\n* Finally, not being very familiar with multigrid methods from the numerical methods literature — I would have liked to hear about whether there are deeper connections to these methods.\n", "We would like to thank the reviewer for the detailed comments and suggestions for the manuscript.\n\n(1) Lu et al. [1] introduced a stochastic dynamic system perspective and interpreted the Stochastic Depth method as an approximation to a stochastic dynamic system. Combining the multi-level method with the stochastic dynamic system view is one of our future research directions.\n\n(2) We thank the reviewer for pointing out the mollifying network paper. We have added it in the related work section. The mollifying network starts with a linearized network with a smoothed objective function, and evolves to a non-linear network and the original objective function. Both the mollifying network and our proposed method go from simple networks to complex ones. But the mollifying network solves a smoothed problem, while our method solves the same underlying differential equations, but at different levels of approximation. Also, the purpose of the mollifying network is to make the optimization easier, while ours is to speed up training.\n\n(3) Short and deep ResNet curves: Please see Fig. 11 in Appendix D in the updated manuscript.\n\n[1] Lu, Yiping, et al. 
\"Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations.\" arXiv preprint arXiv:1710.10121 (2017).\n", "We would like to thank the reviewer for the detailed comments and suggestions for the manuscript.\n\n(1) According to our theoretical interpretation, F represents the underlying ODE and does not depend on d. In Appendix C, we also empirically show that the depth of the network does not affect the underlying differential equation.\n\n(2) Resetting learning rate: We ran experiments comparing resetting and not resetting the learning rate at the beginning of each cycle. The results are shown in Appendix D, Figure 10. Resetting the learning rate at the beginning of each cycle gives better validation accuracy in the updated manuscript.\n\n(3) The value of h: In this paper, we formulate ResNet as a forward Euler discretization of an ODE. For forward Euler, h times the norm of convolution kernels should to be small to ensure stability. In practice, h is absorbed by the convolution kernels and stability is achieved by regularizing the convolution kernels.\n\n(4) Moving between different blocks in the network is equivalent to changing the resolution when solving a time dependent differential equation. Algorithms commonly use high resolution at early times and coarsen the image for later times, similar to different units of the ResNet. \n\n(5) Number of parameters: Please see Table 5 in appendix A in the updated manuscript.\n\n(6) E is the last name of Weinan E (https://web.math.princeton.edu/~weinan/).\n", "We would like to thank the reviewer for the detailed comments and suggestions for the manuscript.\n\n(1) According to the dynamical systems view, by halving the step size, the underlying differential system is the same before and after interpolation. Section 3.2 and Appendix C validate this interpretation empirically. In practice, the step size h will be absorbed by the convolution kernels over the course of normal backpropagation if it is not properly halved during multi-level training.\n\n(2) Yes, we used a fixed number of epochs for different models. You are right that technically the training steps should be dependent on the depth in some way. However, in the literature, researchers commonly use fixed number of epochs to compare models with varying depths or size. For example in [1], training terminates at 64k iterations on CIFAR-10 for all models with number of layers ranging from 20 to 1202.\n\n(3) We are currently working on the experiments of ImageNet, and we are trying our best to include the results in the final version.\n\n(4) Our methods are closely linked to grid continuation techniques. These techniques are commonly used in optimal control problems in the context of fluid flow and path planning. The basic idea is to use the continuous underlying structure (the pde or ode) in order to gradually discretize the problem on a increasingly finer mesh. See [2] for more details.\n\n[1] He, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\n\n[2] E. Allgower and K. Georg, Numerical continuation methods, Springer Verlag, 1990.\n", "Dear reviewers,\n\nThanks for your comments and suggestions. We have upload a revision and added model details in Appendix A and more experimental results in Appendix D." ]
[ -1, 7, 7, 7, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "ByShEI27z", "iclr_2018_SyJS-OgR-", "iclr_2018_SyJS-OgR-", "iclr_2018_SyJS-OgR-", "rk40-nDlz", "SyuwCCKlz", "HJzVc2sxf", "iclr_2018_SyJS-OgR-" ]
iclr_2018_HktJec1RZ
Towards Neural Phrase-based Machine Translation
In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time. Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines. We also observe that our method produces meaningful phrases in output languages.
accepted-poster-papers
this submission introduces soft local reordering to the recently proposed SWAN layer [Wang et al., 2017] to make it suitable for machine translation. although only in small-scale experiments, the results are convincing.
val
[ "Sy2fyR7gG", "r1IRR2Yez", "r1PrkGheM", "rkMojlVzG", "SyGiIgEff", "SJzkqxNfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper introduces a neural translation model that automatically discovers phrases. This idea is very interesting and tries to marry phrase-based statistical machine translation with neural methods in a principled way. However, the clarity of the paper could be improved.\n\nThe local reordering layer has the ability to swap inputs, however, how do you ensure that it actually does swap inputs rather than ignoring some inputs and duplicating others?\n\nAre all segments translated independently, or do you carry over the hidden state of the decoder RNN between segments? In Figure 1 both a BRNN and SWAN layer are shown, is there another RNN in the SWAN layer, or does the BRNN emit the final outputs after the segments have been determined?", "Authors proposed a new neural-network based machine translation method that generates the target sentence by generating multiple partial segments in the target sentence from different positions in the source information. The model is based on the SWAN architecture which is previously proposed, and an additional \"local reordering\" layer to reshuffle source information to adjust those positions to the target sentence.\n\nUsing the SWAN architecture looks more reasonable than the conventional attention mechanism when the ground-truth word alignment is monotone. Also, the concept of local reordering mechanism looks well to improve the basic SWAN model to reconfigure it to the situation of machine translation tasks.\n\nThe \"window size\" of the local reordering layer looks like the \"distortion limit\" used in traditional phrase-based statistical machine translation methods, and this hyperparameter may impose a similar issue with that of the distortion limit into the proposed model; small window sizes may drop information about long dependency. For example, verbs in German sentences sometimes move to the tail of the sentence and they introduce a dependency between some distant words in the sentence. Since reordering windows restrict the context of each position to a limited number of neighbors, it may not capture distant information enough. I expected that some observations about this point will be unveiled in the paper, but unfortunately, the paper described only a few BLEU scores with different window sizes which have not enough information about it. It is useful for all followers of this paper to provide some observations about this point.\nIn addition, it could be very meaningful to provide some experimental results on linguistically distant language pairs, such as Japanese and English, or simply reversing word orders in either source or target sentences (this might work to simulate the case of distant reordering).\n\nAuthors argued some differences between conventional attention mechanism and the local reordering mechanism, but it is somewhat unclear that which ones are the definite difference between those approaches.\n\nA super interesting and mysterious point of the proposed method is that it achieves better BLEU than conventional methods despite no any global language models (Table 1 row 8), and the language model options (Table 1 row 9 and footnote 4) may reduce the model accuracy as well as it works not so effectively. This phenomenon definitely goes against the intuitions about developing most of the conventional machine translation models. Specifically, it is unclear how the model correctly treats word connections between segments without any global language model. 
The authors should provide a more detailed analysis of this point in the paper.\n\nEq. (1) is incorrect. According to Fig. 2, the conditional probability in the product operator should be revised to p(a_t | x_{1:t}, a_{1:t-1}), and the independence approximation to remove a_{1:t-1} from the conditions should also be noted in the paper.\nNevertheless, the condition x_{1:t} could not be reduced because the source position is always conditioned on all previous positions through an RNN.\n\n", "This paper introduces a new architecture for end-to-end neural machine translation. Inspired by the phrase-based approach, the translation process is decomposed as follows: source words are embedded and then reordered; a biLSTM then encodes the reordered source; a sleep-wake network finally generates the target sequence as a phrase sequence built from left to right. \n\nThis kind of approach is more related to n-gram-based machine translation than the conventional phrase-based one. \n\nThe idea is nice. The proposed approach does not rely on an attention-based model. This opens nice perspectives for better and faster inference. \n\nMy first concern is about the architecture description. For instance, the SWAN part is not really standalone. For a reader who does not already know this network, I'm not sure this is really clear. Moreover, there is no link between the notations used for the SWAN part and the ones used in the reordering part. \n\nThen, one question arises. Why don't you consider the reordering of the whole source sentence? Maybe you could motivate your choice at this point. This is the main contribution of the paper, since SWAN already exists.\n\nFinally, the experimental part shows nice improvements, but: 1/ you must provide baseline results with a well-tuned phrase-based MT system; 2/ the datasets are small ones, as well as the vocabularies; you should try with larger datasets and BPE for the sake of comparison. ", "Thank you for your valuable comments. We address the comments and questions below:\n1. The local reordering layer has the ability to swap inputs; however, how do you ensure that it actually does swap inputs rather than ignoring some inputs and duplicating others?\n<Response>: We do not have a guarantee that the layer is forced to swap inputs, as it is data-driven. In Appendix A, we show an example translating from \"can you translate it ?\" to \"können es übersetzen?\" to show that some input information is swapped. Note that the example needs to be reordered from \"translate it\" to \"es übersetzen\". Each row of Figure 3 represents a window of size 7 that is centered at a source sentence word. We can observe that the gates mostly focus on the central word since the first part of the sentence only requires monotonic alignment. Interestingly, the model outputs \"$\" (empty) when the model has the word \"translate\" in the center of the window. Then, the model outputs \"es\" when the model encounters \"it\". Finally, in the last window (top row), the model not only has a large gate value for the center input \"?\", but also has a relatively large gate value for the word \"translate\" in order to output the translation \"übersetzen ?\". This shows an example of the reordering effect achieved by using the gating mechanism of the reordering layer.\n \n2. Are all segments translated independently, or do you carry over the hidden state of the decoder RNN between segments?\n<Response>: Yes, all the segments are translated independently. 
We do not carry over the hidden states between segments. Hence, the decoding can be parallelized. We highlight this part in the second-to-last paragraph of Section 2.2.\n \n3. In Figure 1 both a BRNN and a SWAN layer are shown; is there another RNN in the SWAN layer, or does the BRNN emit the final outputs after the segments have been determined?\n<Response>: In Figure 1, the reordering layer and BRNN can be considered as the encoder of an input sequence. The SWAN is the decoder, which contains another unidirectional RNN for p(a_t|x_t) in Eq. (1). The BRNN emits x_t to SWAN. We added some clarification in defining x_t to address it.", "Thank you for your valuable comments. We address the comments and questions below:\n1. For instance, the SWAN part is not really standalone. For a reader who does not already know this network, I'm not sure this is really clear.\n<Response>: We add some detailed explanations to address the confusion from Reviewer 2 (see responses to Reviewer 2) and make the SWAN description more self-contained. For example, we added an explanation of SWAN via a probabilistic generative model in Section 2.2.\n \n2. There is no link between the notations used for the SWAN part and the ones used in the reordering part.\n<Response>: We clarify the description of the symbols h and x, and indicate the connections between the two using the bi-directional RNN in Figure 1 (a).\n \n3. Why don't you consider the reordering of the whole source sentence?\n<Response>: Our proposed reordering layer is not limited to local reordering. Empirically, we found the results do not improve when we increase the window sizes in our experiments; see Appendix B for details. It might be due to the choice of language pairs, which are relatively monotonic.\n \n4. The experimental part shows nice improvements, but: 1/ you must provide baseline results with a well-tuned phrase-based MT system\n<Response>: We have looked into this direction for comparison. However, in the IWSLT 2014 competition, the test set tst2014 is not revealed. Hence, we cannot directly compare results with all the teams in [Cettolo 2014]. Note that in the IWSLT 2015 English-German task, NMT outperforms phrase-based MT by a large margin (up to 5.2 BLEU) [Luong 2015]. Our NPMT further outperforms state-of-the-art NMT-type systems by up to 2.7 BLEU in the English-German task.\n\nReference:\n[Cettolo 2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign, IWSLT 2014. In Proceedings of IWSLT, 2014.\n[Luong 2015] Minh-Thang Luong and Christopher D. Manning. Stanford Neural Machine Translation Systems for Spoken Language Domain. IWSLT’15. \n\n5. 2/ the datasets are small ones, as well as the vocabularies; you should try with larger datasets and BPE for the sake of comparison.\n<Response>: We are actively working on improving the speed of the system and exploring this approach on WMT datasets with BPE vocabularies. We plan to open-source our implementations to expedite this direction.", "Thank you for your valuable comments. We address the comments and questions below:\n1. The \"window size\" of the local reordering layer looks like the \"distortion limit\" used in traditional phrase-based statistical machine translation methods, and this hyperparameter may introduce an issue into the proposed model similar to that of the distortion limit.\n<Response>: Thanks a lot for your suggestion. We add the reference [Brown 1993] and a discussion to the end of Section 2.3. 
We believe the limit of local reordering is mitigated by using a bidirectional RNN after it. Thus it is not very clear how to analyze the exact behavior of the local reordering layer. We are currently actively investigating new ways of doing so.\n \n2. It could be very meaningful to provide some experimental results on linguistically distant language pairs, such as Japanese and English, or simply reversing word orders in either source or target sentences.\n<Response>: Thanks a lot for your suggestion. This is definitely one important direction we should investigate in future work.\n \n3. The authors argued some differences between the conventional attention mechanism and the local reordering mechanism, but it is somewhat unclear which ones are the definitive differences between those approaches.\n<Response>: We reiterate and reorganize the important differences here:\nFirst, we do not have a query to begin with as in standard attention mechanisms. Second, unlike standard attention, which is top-down from a decoder state to encoder states, the reordering operation is bottom-up. Third, the weights {w_i}_{i=0}^{2\tau} capture the relative positions of the input elements, whereas the weights are the same for different queries and encoder hidden states in the attention mechanism (no positional information). The reordering layer performs locally similar to a convolutional layer, and the positional information is encoded by a different parameter w_i for each relative position i in the window. Fourth, we do not normalize the weights for the input elements e_{t-\tau}, ..., e_t, ..., e_{t+\tau}. This provides the reordering capability and can potentially turn off everything if needed. Finally, the gate of any position i in the reordering window is determined by all input elements e_{t-\tau}, ..., e_t, ..., e_{t+\tau} in the window.\n \n4. Equation (1) is incorrect. According to Fig. 2, the conditional probability in the product operator should be revised to p(a_t | x_{1:t}, a_{1:t-1}), and the independence approximation to remove a_{1:t-1} from the conditions should also be noted in the paper. Nevertheless, the condition x_{1:t} could not be reduced because the source position is always conditioned on all previous positions through an RNN.\n<Response>: We respectfully disagree with this assessment. Eq. (1) is not an approximation; it is the way we model the output. This is motivated by Eqs. (2) and (3) of the CTC paper [Graves 2006]: p(y_{1:T}|x_{1:T'}) = sum_{a_{1:T'}} p(a_{1:T'}|x_{1:T'}) marginalizes over the set of all possible segmentations, where a_{1:T'} is a collection of the segments that, when concatenated, leads to y_{1:T}. We also have p(a_{1:T'}|x_{1:T'}) = \prod_{t=1}^{T'} p(a_t|x_t) given the assumption that the outputs at different times are conditionally independent given the input state x_t. Put another way, our approach can be described via a fully generative model:\n\nFor t=1, ..., T':\n\tUsing x_t as the initial state, sample target words from the RNN until we reach the end-of-segment symbol. This gives us segment a_t.\nFinally, concatenate {a_1, ..., a_T'} to obtain an output y_{1:T}. \n\nSince there is more than one way to obtain the same y_{1:T}, its probability becomes p(y_{1:T}|x_{1:T'}) = sum_{a_{1:T'} \in S_y} p(a_{1:T'}|x_{1:T'}), which is Eq. (1) in our paper. This explanation is also added in the updated paper.\n\n[Graves 2006] Graves, Alex, et al. 
\"Connectionist temporal classification: labeling unsegmented sequence data with recurrent neural networks.\" Proceedings of the 23rd international conference on Machine learning. ACM, 2006.\n \t\n5. NPMT achieves better BLEU than conventional methods despite no any global language models (Table 1 row 8), and the language model options (Table 1 row 9 and footnote 4) may reduce the model accuracy as well as it works not so effectively.\n<Response>: We also believe this is a super interesting and exciting observation. Our current understanding is that phrases are important building blocks of the whole target sentence and these phrases are relatively independent. With the help of being able to see the entire input sentence through the encoder, the performance can still be quite good without modeling the connection between the phrases. This is exciting because decoding can be done in linear time and can also be parallelized. We also show that adding an n-gram LM during beam search did help improve performance (Table 1 row 9). " ]
[ 6, 6, 8, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1 ]
[ "iclr_2018_HktJec1RZ", "iclr_2018_HktJec1RZ", "iclr_2018_HktJec1RZ", "Sy2fyR7gG", "r1PrkGheM", "r1IRR2Yez" ]
iclr_2018_ByJHuTgA-
On the State of the Art of Evaluation in Neural Language Models
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
accepted-poster-papers
this submission demonstrates an existing loop-hole (?) in rushing out new neural language models by carefully (and expensively) running hyperparameter tuning of baseline approaches. i feel this is an important contribution, but as pointed out by some reviewers, i would have liked to see whether the conclusion stands even with more realistic data (as pointed out by some in the field quite harshly, perplexity on PTB should not be taken too seriously, and i believe the same for the other two corpora used in this submission.) that said, it's an important paper in general which will serve as an alarm about the current practice in the field, and i recommend it to be accepted.
train
[ "S1Mw8jBef", "rJTcBCtxG", "HkGW8A2gG", "rJwjH7HzM", "SkNXbLNzf", "HJOf_f2Wz", "HJfg_G2bz", "BJM9Df2WM", "BJXUZl2Wf", "SJeVnLdbG", "H14DMe7WG", "HkGngSGZz", "BkqLXA1Zz", "r1naAI6eG", "r19qHo2gM", "HJV5juheG", "Sk5vVdhez", "HkATqf5xz", "HksEzG5gf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "public", "author", "public", "public", "author", "author", "public", "public", "author", "public" ]
[ "The submitted manuscript describes an exercise in performance comparison for neural language models under standardization of the hyperparameter tuning and model selection strategies and costs. This type of study is important to give perspective to non-standardized performance scores reported across separate publications, and indeed the results here are interesting as they favour relatively simpler structures.\n\nI have a favourable impression of this paper but would hope another reviewer is more familiar with the specific application domain than I am.", "The authors did extensive tuning of the parameters for several recurrent neural architectures. The results are interesting. However the corpus the authors choose are quite small, the variance of the estimate will be quite high, I suspect whether the same conclusions could be drawn.\n\nIt would be more convincing if there are experiments on the billion word corpus or other larger datasets, or at least on a corpus with 50 million tokens. This will use significant resources and is much more difficult, but it's also really valuable, because it's much more close to real world usage of language models. And less tuning is needed for these larger datasets. \n\nFinally it's better to do some experiments on machine translation or speech recognition and see how the improvement on BLEU or WER could get. ", "The authors perform a comprehensive validation of LSTM-based word and character language models, establishing that recent claims that other structures can consistently outperform the older stacked LSTM architecture result from failure to fully explore the hyperparameter space. Instead, with more thorough hyperparameter search, LSTMs are found to achieve state-of-the-art results on many of these language modeling tasks.\nThis is a significant result in language modeling and a milestone in deep learning reproducibility research. The paper is clearly motivated and authoritative in its conclusions but it's somewhat lacking in detailed model or experiment descriptions.\n\nSome further points:\n\n- There are several hyperparameters set to the \"standard\" or \"default\" value, like Adam's beta parameter and the batch size/BPTT length. Even if it would be prohibitive to include them in the overall hyperparameter search, the community is curious about their effect and it would be interesting to hear if the authors' experience suggests that these choices are indeed reasonably well-justified.\n\n- The description of the model is ambiguous on at least two points. First, it wasn't completely clear to me what the down-projection is (if it's simply projecting down from the LSTM hidden size to the embedding size, it wouldn't represent a hyperparameter the tuner can set, so I'm assuming it's separate and prior to the conventional output projection). Second, the phrase \"additive skip connections combining outputs of all layers\" has a couple possible interpretations (e.g., skip connections that jump from each layer to the last layer or (my assumption) skip connections between every pair of layers?).\n\n- Fully evaluating the \"claims of Collins et al. 
(2016), that capacities of various cells are very similar and their apparent\ndifferences result from trainability and regularisation\" would likely involve adding a fourth cell to the hyperparameter sweep, one whose design is more arbitrary and is neither the result of human nor machine optimization.\n\n- The reformulation of the problem of deciding embedding and hidden sizes into one of allocating a fixed parameter budget towards the embedding and recurrent layers represents a significant conceptual step forward in understanding the causes of variation in model performance.\n\n- The plot in Figure 2 is clear and persuasive, but for reproducibility purposes it would also be nice to see an example set of strong hyperparameters in a table. The history of hyperparameter proposals and their perplexities would also make for a fantastic dataset for exploring the structure of RNN hyperparameter spaces. For instance, it would be helpful for future work to know which hyperparameters' effects are most nearly independent of other hyperparameters.\n\n- The choice between tied and clipped (Sak et al., 2014) LSTM gates, and their comparison to standard untied LSTM gates, is discussed only minimally, although it represents a significant difference between this paper and the most \"standard\" or \"conventional\" LSTM implementation (e.g., as provided in optimized GPU libraries). In addition to further discussion on this point, this result also suggests evaluating other recently proposed \"minor changes\" to the LSTM architecture such as the multiplicative LSTM (Krause et al., 2016).\n\n- It would also have been nice to see a comparison between the variational/recurrent dropout parameterization \"in which there is further sharing of masks between gates\" and the one with \"independent noise for the gates,\" as described in the footnote. There has been some confusion in the literature as to which of these parameterizations is better or more standard; simply justifying the choice of parameterization a little more would also help.", "Changelist:\n\n- Better NAS results with more tuning on Wikitext-2 and Enwik8. The story is the same: NAS still lags the other models.\n- Tiny adjustments related to down-projections that hopefully clarify things.", "I wish and encourage the authors to post the hyperparams; it would make reproducing the results much easier, especially for the academic community, which may not have the resources to run full hyperparameter searches (even if the scripts are released).", "We thank AnonReviewer1 for their review.\n\nWe would like to point out that the state-of-the-art results and model comparisons are only part of the message. More importantly, we argue that the way model evaluation is performed is often unsatisfactory. Evaluation at a single hyperparameter setting and failure to control for dominant sources of variation make results unreliable and slow down progress.\n", "We feel that AnonReviewer3 might have missed that the main message of the paper was that evaluation - as it's generally performed - is unreliable. Our results suggest that state-of-the-art results are only superficially considered, and variance and parameter sensitivity are likewise given short shrift.\n\nThe main criticism seems to center on evaluating models on datasets that are too small, which increases evaluation variance and thus makes the results untrustworthy. That is a very good summary of the main message of the paper! 
We agree that small datasets are problematic, but one cannot refute previous results that were obtained on small datasets by using large datasets. Furthermore, we do hyperparameter tuning and a careful analysis of the variance. Moreover, the third dataset (enwik8) is a large character-based corpus, and we still improve previously reported LSTM results by a substantial margin.\n\nFinally, for this kind of study we chose language modelling because of its relevance to all kinds of recurrent neural models while being simpler than machine translation and speech recognition models. We have demonstrated evaluation problems in this simple and relevant setting. It is unclear why the reviewer requests results on MT and ASR.\n", "We thank AnonReviewer2 for the thoughtful and detailed review; let us address the points brought up one by one in the original order (we will likewise clarify these points in the paper):\n\n- Some hyperparameters were indeed left at \"default\" values because our tuner cannot efficiently tune a large set of hyperparameters. Still, we did tuning studies with lower and higher BPTT lengths and batch sizes, including the Adam parameters (beta1, beta2, epsilon), and with other optimizers, to make sure that our intuition about which hyperparameters are most important is correct. We did a tuning study with all hyperparameters (about 40 hyperparameters in total) to catch any unexpected parameter combinations, even if it was a long shot due to the aforementioned tuner inefficiency.\n\n- Yes, the down-projection is simply projecting down from the LSTM hidden size to the embedding size. The ratio of the embedding size and cell size is tuneable. The cell and embedding sizes are computed from the budget and this input_embedding_ratio hyperparameter. As the paper puts it: \"The tuner is given control over the presence and size of the down-projection, and thus over the tradeoff between the number of embedding vs. recurrent cell parameters. Consequently, the cells’ hidden size and the embedding size is determined by the actual parameter budget, depth and the input embedding ratio hyperparameter.\"\n\n- Yes, we didn't find a very different cell with promising results in the literature.\n\n- No comment.\n\n- We are working on factoring out the code from a larger system and providing training scripts with the tuned hyperparameters.\n\n- The Multiplicative LSTM is indeed interesting. We did some preliminary investigation and could not make it perform very well. In the end, it was excluded to avoid adding further multipliers to our already very high resource consumption.\n\n- We used shared masks for implementation convenience and for computational considerations.\n", "Thank you for taking the time to write the review.\n\nThe down-projection is indeed the former version: it projects the output of the top LSTM (plus skip connections) to output_embedding_size. We didn't try the suggested variant.\n\nYes, depth 1 and 2 LSTMs did not need skip connections, but depth 4 suffered without them according to preliminary experiments. Alas, we have no further insight on this.", "I would strongly recommend this paper be accepted for publication. It tackles and uncovers many important discussions regarding our models, our datasets, their sensitivity to hyperparameters, and the process of thoroughly comparing models and searching for hyperparameters. This helps inform a broader discussion about how we can ensure that the scientific process for our field is best followed. 
Thoroughly analyzing and forcing a reconsideration of the impact of proposed RNN model architectures (LSTM, RHN, NASCell) on these tasks is worth the price of admission by itself, let alone the many other learnings and investigations.\n\nInformal Rating: 8\nConfidence: 5\n(I work directly in this field and have recreated many aspects of the results since publication)\n\n= Hyper parameters =\n\nI'll reply to an earlier comment (search for \"we have been asked for the hyperparameter settings on numerous occasions\") as I'm interested in continuing the discussion regarding your hesitance to release hyperparameters. Overall I am glad that you decided that you will release training scripts with the tuned hyperparameters, as I genuinely think this will benefit the community going forward.\n\n= Down projection =\n\nTo clarify, is this a specific and separate layer (a dense layer that takes the output of the LSTM, h, and projects it from |h| to the embedding size |e|) or is the final LSTM accepting an input of size |h| and internally down-projecting it to an output of size |e|? I imagine it would be the former, given that your single-layer LSTMs appear to still use down-projection, hence having a larger |h| than |e|? Did you experiment with modifying the last LSTM layer's sizings (which might break your skip connections but would be more \"parameter efficient\")?\n\n= Skip connections =\n\nDid you investigate models that didn't use skip connections for the RNNs? We have found in our work that such skip connections did not appear to be required, especially for models that are under four or so layers (though we have trained 6 or 7 layer models too - it just gets finicky at that stage). You and your team might have insights on this that we do not?", "Indeed we have been asked for the hyperparameter settings on numerous occasions. Originally, we did not provide these details as the main message of the paper was not about the state of the results but about model evaluation, but there is another, more fundamental reason too: any single hyperparameter setting would make it easy to compare a derivative work to our well-tuned baseline, but at best that could prove that the new model is better (it could never prove that it's worse). Moreover, two new models each evaluated with those hyperparameters would still be incomparable.\n\nFor these reasons, we think that presently there is no way around tuning, and there is limited utility in publishing hyperparameter settings.\n\nThat said, we are working on factoring out the code from a larger system and providing training scripts with the tuned hyperparameters.", "The main contribution of this paper is to show that, with extensive hyperparameter tuning, LSTMs can achieve state-of-the-art results for language modeling. However, it seems that the authors didn't give the concrete hyperparameters for reproducing their results. This paper lacks technical novelty and is purely experimental work; thus the experimental setting is very important. Only showing that HP tuning can get state-of-the-art results is not enough, since it is common sense that hyperparameter tuning can improve performance for any machine learning model. 
I think the paper should at least describe its hyperparameter settings (for example, learning rate, hidden size, dropout ratio, weight decay, etc.) for reproducing its results, or release the code for the community if possible.", "PTB is very much a standard baseline in the space that people routinely publish perf-based papers on, so I think it's a little harsh to knock people for publishing on it, especially when they make the effort to get results on multiple larger datasets, such as Wikitext-2.\n\nThis is not a paper about the utility of hyperparameter optimization; the fact that that works has been well established. It's a paper about how hyperparameter optimization wasn't properly used on a bunch of standard benchmarks in this space, which has already proven very valuable. I really don't think it's necessary to request results on MT or ASR ", "The hyperparameters differ only in boring ways: Wikitext-2 needs a bit less intra-layer and state dropout. This is very likely to be due to corpus size. Down-projection sizes are also a bit different due to the vocabulary size mismatch (when there is a down-projection at all).\n\nI'm not sure there are hyperparameters that work well on both, but yes, we could tune for combined (in whatever way) performance on a number of datasets. By doing this, we could learn more about how hyperparameters are best specified so that they are reasonably independent of datasets and also of other hyperparameters.", "This is exactly what we did. The presence and size of the down-projection was a tuned hyperparameter. Section 7.1 discusses for which models it was useful and for which it wasn't. ", "The 4-layer LSTM with 24M parameters is not the universally best setting for both PTB and Wikitext-2... so I guess the lesson to be learned here is that we need to consider the down-projection as a hyperparameter, and this bottleneck structure may or may not be useful.", "Very nice paper!\n\nI was wondering whether you could provide more details on the transfer experiment where you tuned the hyperparameters of the LSTM on the PTB and then used those parameters to train a model on Wikitext-2. Do the tuned hyperparameters differ in interesting ways? Are different parameters needed because the vocabulary size is different or because the corpus size is different? Are there parameters that give decent performance across both tasks? Could you tune on both tasks simultaneously?", "Section 7.1 discusses the effect of the down-projection in general for various models (depth/budget). Section 7.3 uses a 4-layer LSTM with 24M weights as an example, for which the down-projection is universally suboptimal.\n\nWe agree that Section 7.3 is not very clear on this.", "I found two confusing statements in the paper:\n\nSection 7.1:\nDown-projection was found to be very beneficial by the tuner for some depth/budget combinations.\nOn Penn Treebank, it improved results by about 2–5 perplexity points at depths 1 and 2 at 10M, and\ndepth 1 at 24M\n\nSection 7.3:\nOmitting input embedding ratio because\nthe tuner found having a down-projection suboptimal almost non-conditionally for this model\n\nI think the first one is suggesting that the down-projection is beneficial, while the second one is suggesting that the tuner finds it suboptimal..." ]
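The budget-allocation scheme quoted in the replies above (\"the cells' hidden size and the embedding size is determined by the actual parameter budget, depth and the input embedding ratio hyperparameter\") can be sketched as follows. This is a hedged reconstruction: the parameter accounting in `lstm_params` is an illustrative approximation rather than the paper's exact bookkeeping, and all function and argument names are hypothetical.

def lstm_params(vocab, depth, h, e):
    # Rough count: input embedding, stacked LSTM cells, a down-projection
    # from hidden size h to embedding size e, and the output softmax.
    in_sizes = [e] + [h] * (depth - 1)
    cells = sum(4 * (h * (i + h) + h) for i in in_sizes)
    return vocab * e + cells + h * e + e * vocab

def sizes_from_budget(budget, vocab, depth, ratio):
    # Largest hidden size h (with embedding size e = ratio * h, as set by the
    # input_embedding_ratio hyperparameter) whose model fits the budget.
    lo, hi = 1, 1 << 15
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if lstm_params(vocab, depth, mid, max(1, int(ratio * mid))) <= budget:
            lo = mid
        else:
            hi = mid - 1
    return lo, max(1, int(ratio * lo))

Under these assumptions, e.g. sizes_from_budget(24_000_000, 10_000, 4, 0.5) returns the largest 4-layer configuration that fits a 24M-parameter budget, which is the kind of tradeoff the tuner explores when deciding whether a down-projection pays off.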
[ 7, 5, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByJHuTgA-", "iclr_2018_ByJHuTgA-", "iclr_2018_ByJHuTgA-", "iclr_2018_ByJHuTgA-", "iclr_2018_ByJHuTgA-", "S1Mw8jBef", "rJTcBCtxG", "HkGW8A2gG", "SJeVnLdbG", "iclr_2018_ByJHuTgA-", "HkGngSGZz", "iclr_2018_ByJHuTgA-", "rJTcBCtxG", "Sk5vVdhez", "HJV5juheG", "HkATqf5xz", "iclr_2018_ByJHuTgA-", "HksEzG5gf", "iclr_2018_ByJHuTgA-" ]
iclr_2018_rkfOvGbCW
Memory-based Parameter Adaptation
Deep neural networks have excelled on a wide range of problems, from vision to language and game playing. Neural networks very gradually incorporate information into weights as they process data, requiring very low learning rates. If the training distribution shifts, the network is slow to adapt, and when it does adapt, it typically performs badly on the training distribution before the shift. Our method, Memory-based Parameter Adaptation, stores examples in memory and then uses a context-based lookup to directly modify the weights of a neural network. Much higher learning rates can be used for this local adaptation, obviating the need for many iterations over similar data before good predictions can be made. As our method is memory-based, it alleviates several shortcomings of neural networks, such as catastrophic forgetting, and enables fast, stable acquisition of new knowledge, learning with imbalanced class labels, and fast learning during evaluation. We demonstrate this on a range of supervised tasks: large-scale image classification and language modelling.
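The episodic memory described in this abstract can be sketched as a key-value store over embeddings with a kernel-weighted nearest-neighbour lookup. The circular-buffer overwrite policy and the squared-exponential kernel below are illustrative assumptions (the authors' replies mention the circular buffer; the kernel choice is assumed here), not details fixed by the abstract itself.

import numpy as np

class EpisodicMemory:
    # Keys are embeddings h of inputs; values v are their targets.
    def __init__(self, capacity, key_dim):
        self.keys = np.zeros((capacity, key_dim))
        self.values = np.zeros(capacity, dtype=np.int64)
        self.size = self.ptr = 0

    def write(self, h, v):
        self.keys[self.ptr], self.values[self.ptr] = h, v
        self.ptr = (self.ptr + 1) % len(self.keys)   # overwrite the oldest entry
        self.size = min(self.size + 1, len(self.keys))

    def lookup(self, h, k=32):
        # k nearest neighbours of the query embedding, with normalized
        # kernel weights; assumes the memory is non-empty.
        d = np.sum((self.keys[:self.size] - h) ** 2, axis=1)
        idx = np.argsort(d)[:k]
        w = np.exp(-d[idx])
        return self.keys[idx], self.values[idx], w / w.sum()

The retrieved neighbours and weights are what the local adaptation step consumes at test time.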
accepted-poster-papers
the proposed approach nicely incorporates various ideas from recent work into a single meta-learning (or domain adaptation or incremental learning or ...) framework. although a better empirical comparison to existing (however recent) approaches would have made it stronger, the reviewers all found this submission to be worth publication, with which i agree.
val
[ "HylhQnUNG", "HkJsPxmxG", "rktPEKveG", "ByEeLZ5xz", "H1cEqB6mz", "SJAlIN6mf", "HJLvPs3mz", "H1ymHgbmz", "Skiov_-zz", "HkbqcOZfG", "BJQOOO-GG", "H1rrLdWfM", "rk86VuZGz", "ByYMHubGz", "ByINRE1zM", "rJyW0n4eM", "B1tJwiMef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "public", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Dear Authors and AC\n\nThank you for your detailed answers -- having to split in two comments due to length shows how seriously you take it :)\nBetween them and the fact that my mind kept wandering back to the ideas in this paper during the holidays, I am happy to maintain my score of 8 - Top 50% papers.", "Overall, the idea of this paper is simple but interesting. Via weighted mean NLL over retrieved neighbors, one can update parameters of output network for a given query input. The MAP interpretation provides a flexible Bayesian explanation about this MbPA.\n\nThe paper is written well, and the proposed method is evaluated on a number of relevant applications (e.g., continuing learning, incremental learning, unbalanced data, and domain shifts.)\n\nHere are some comments:\n1 MbPA is built upon memory. How large should it be? Is it efficient to retrieve neighbors for a given query?\n2 For each test, how many steps of MbPA do we need in general? Furthermore, it is a bit unfair for me to retrain deep model, based on test inputs. It seems that, you are implicitly using test data to fit model.\n", "This paper proposes a non-parametric episodic memory that can be used for the rapid acquisition of new knowledge while preserving the old ones. More specially, it locally adapts the parameters of a network using the episodic memory structure. \n\nStrength:\n+ The paper works on a relevant and interesting problem.\n+ The experiment sections are very thorough and I like the fact that the authors selected different tasks to compare their models with. \n+ The paper is well-written except for sections 2 and 3. \nWeakness and Questions:\n- Even though the paper addresses the interesting and challenging problem of slow adaption when distribution shifts, their episodic memory is quite similar (if not same as) to the Pritzel et al., 2017. \n- In addition, as the author mentioned in the text, their model is also similar to the Kirkpatrick et al., 2017, Finn et al., 2017, Krause et al., 2017. That would be great if the author can list \"explicitly\" the contribution of the paper with comparing with those. Right now, the text mentioned some of the similarity but it spreads across different sections and parts. \n- The proposed model does adaption during the test time, but other papers such as Li & Hoiem, 2016 handles the shift across domain in the train time. Can authors say sth about the motivation behind adaptation during test time vs. training time? \n- There are some inconsistencies in the text about the parameters and formulations:\n -- what is second subscript in {v_i}_i? (page 2, 3rd paragraph)\n -- in Equation 4, what is the difference between x_c and x?\n -- What happened to $x$ in Eq 5?\n -- The \"−\" in Eq. 7 doesn't make sense. \n- Section 2.2, after equation 7, the text is not that clear.\n- Paper is well beyond the 8-page limit and should be fitted to be 8 pages.\n- In order to make the experiments reproducible, the paper needs to contain full details (in the appendix) about the setup and hyperparameters of the experiments. \n\nOthers:\nDo the authors plan to release the codes?\n\n\n------------------------------------\n------------------------------------\nUpdate after rebuttal:\nThanks for the revised version and answering my concerns. \nIn the revised version, the writing has been improved and the contribution of the paper is more obvious. \nGiven the authors' responses and the changes, I have increased my review score.\n\nA couple of comments and questions:\n1. 
Can you explain how/why $x_c$ is replaced by $h_k$ in Eq. 7? \n2. In the same equation (7), how will $\log p(v_k| h_k,\theta_x, x)$ be calculated? I have some intuition but am not sure. Can you please explain?\n3. In equation (8), what happened to $x$ in log p(..)?\n4. How is Figure 2 plotted? Is it based on a real experiment? If yes, what was the setting? If not, how was it produced?\n5. It'd be very useful to the community if the authors decide to release their code. \n\n", "This article introduces a new method to improve neural network performance on tasks ranging from continual learning (non-stationary target distribution, appearance of new classes, adaptation to new tasks, etc) to better handling of class imbalance, via a hybrid architecture between nearest neighbours and a neural net.\nAfter an introduction summarizing their goal, the authors introduce their Memory-based Parameter Adaptation: this hybrid architecture enriches classical deep architectures with a non-parametric “episodic” memory, which is filled at training time with (possibly learned) encodings of training examples and then polled at inference time to refine the neural network parameters with a few gradient steps in a direction determined by the closest neighbours in memory to the input being processed. The authors justify this inference-time SGD update with three different interpretations: one linked to Maximum A Posteriori optimization, another to Elastic Weight Consolidation (the current state of the art in continual learning), and one generalising attention mechanisms (although to be honest that latter one was more elusive to this reviewer). The mandatory literature review on the abundant recent uses of memory in neural networks is then followed by experiments on continual learning tasks involving permuted MNIST, incremental inclusion of ImageNet classes, unbalanced ImageNet, and two language modeling tasks. \n\nThis is an overall very interesting idea, which has the merit of being rather simple in its execution and can be combined with many other methods: it is fully compatible with any optimiser (e.g. ADAM) and can be tacked on top of EWC (which the authors do). The justification is clear, the examples reasonably thorough. It is a very solid paper, which this reviewer believes to be of real interest to the ICLR community.\n\n\nThe following important clarifications from the authors could make it even better:\n* Algorithm 1 in its current form seems to imply an infinite memory, which the experiments make clear is not the case. Therefore: how does the algorithm decide what entries to discard when the memory fills up?\n* In most non-trivial settings, the parameter $gamma$ of the encoding is learned, and therefore older entries in the memory lose any ability to be compared to more recent encodings. How do the authors handle this obsolescence of the memory, other than the trivial scheme of relying on KNN to only match recent entries?\n* Because gamma needs to be “recent”, this means “theta” is also recent: could the authors give a good intuition on how the two sets of parameters can evolve at different enough timescales to really make the episodic memory relevant? 
Is it anything other than relying on the fact that the lower levels of a neural net converge before the upper levels?\n* Table 1: could the authors explain why the pre-trained Parametric (and then Mixture) models have the best AUC in the low-data regime, whereas MbPA was designed very much to be superior in such regimes?\n* Paragraph below equation (5), page 3: why not include the regularisation term, when the authors just went to great pains to explain it? Rationale? Not including it is also akin to using an improper non-informative prior on theta^x independent of theta, which is quite a strong choice to be made “by default”.\n* The extra complexity of choosing the learning rate alpha_M and the number of MbPA steps is worrying this reviewer somewhat. In practice, in Section 4.1 the authors explain using grid search to tune the parameters. Is this reviewer correct in understanding that this search is done across all tasks, as opposed to only the first task? And if so, doesn’t this grid search introduce an information leak by bringing in information from the whole pre-determined set of tasks, therefore undermining the very “continual learning” aim? How does the algorithm perform if the grid search is done only on the first task?\n* Figure 3: the text could clarify that the accuracy is measured across all tasks seen so far. It would be interesting to add a figure (in the Appendix) showing the evolution of the accuracy *per task*, not just the aggregated accuracy. \n* In the related works linking neural networks to encoded episodic memory, the authors might want to include the stream of research on HMAX of Anselmi et al 2014 (https://arxiv.org/pdf/1311.4158.pdf), Leibo et al 2015 (https://arxiv.org/abs/1512.08457), and Blundell et al 2016 (https://arxiv.org/pdf/1606.04460.pdf).\n\nMinor typos:\n* Figure 4: the title of the key says “New/Old” but then the lines read, in order, “Old” then “New” -- it would be nicer to have them in the same order.\n* Section 5: missing period between \"ephemeral gradient modifications\" and \"Further\".\n* Section 4.2, parenthesis should be \"perform well across all 1000 classes\", not \"all 100 classes\".\n \nWith the above clarifications, this article could become a widely noted contribution.\n", "I think I understand the difference; however, I still don't see exactly why one cannot match the other.\nIf a word occurs twice in the memory buffer, then the gradient computed from memory will be more strongly influenced by those terms. The relative weight is bounded by 1 in MbPA compared to the cache; on the other hand, it is directly changing the network weights, which has non-linear effects on prediction. It is likely a question of empirics.", "We have rephrased this as the wording was ambiguous. What we found is that the cache model and MbPA benefit slightly different types of words. Namely, MbPA is better at predicting less frequent words. Although they both operate on the same recent past, we think this is because the cache model sums over its attention on a per-word basis, which means that frequently occurring not-so-relevant words in the cache are boosted, whereas MbPA optimizes over a neighbourhood of K nearest neighbours. We have added this set of results to the appendix. 
In the context of language modelling, boosting recent words is quite a strong structural prior in itself [1], and so we don’t expect MbPA to necessarily improve performance over a cache, but the combination is certainly more powerful.\n\n[1] Speech recognition and the frequency of recently used words: a modified Markov model for natural language. Kuhn, Roland. 1988\n", "Thank you all for the very helpful comments on our submission and links to related works. \n\nTo this end, we have uploaded a revised version incorporating changes mentioned in the comments, reviews and replies below. This includes more detailed comparisons with other works, a concrete outline of our contributions, details of hyperparameter selection, and an expanded reference section. Various typos have been fixed and clarifications added. \n\nWe look forward to further comments and discussions. \n\n", "If MbPA has a circular buffer of the same size as the Neural Cache, I don't clearly understand how the 'focus purely on most recent terms' works. Focus on recent terms makes sense, as in their results the benefit of increasing memory fell very quickly. But then a small memory in MbPA should be able to match that. So perhaps some tweaking of parameters might improve results?\n\nIt will be interesting to apply this technique to some applications I am looking at; I hope the code becomes available soon", "\n3) The proposed model does adaptation at test time, but other papers such as Li & Hoiem, 2016 handle the domain shift at training time. Can the authors say something about the motivation behind adaptation at test time vs. training time? \n\n* The work “Learning without forgetting” by Li & Hoiem is a simple and effective method for avoiding catastrophic forgetting. However, in our view, it doesn’t guarantee that the internal representations would be preserved and doesn’t show any evidence in this direction.\n* Our motivation is two-fold. First, we want our model to be able to consolidate the knowledge and to perform well without relying on the memory content. The memory then serves to boost performance by focusing the weights on memory relevant to the prediction at hand. Second, adapting the model during training is computationally very demanding (e.g. language modeling, ImageNet).\n* Further, adaptation at test time for language modelling has strong established baselines such as Krause et al 2017. We thus wanted a comparable setting to the reported baselines. \n\n4) The paper is well beyond the 8-page limit and should be fitted into 8 pages.\n\n* ICLR has a soft page limit. We are aware that the text is long, but we didn’t want to leave out details on the experimental settings. We will take this comment into account and edit the text as needed after the clarifications mentioned here are added in, in an attempt to reduce the length of the paper by moving a few things into an appendix. \n\n5) In order to make the experiments reproducible, the paper needs to contain full details (in the appendix) about the setup and hyperparameters of the experiments.\n\n* We currently include details on the hyperparameter selection procedure, and provide the best-performing options. We will further clarify if anything is missing and add details to the appendix.\n\nFurther, thank you for pointing out typos and inconsistencies. \n\n* We will correct this in the paper and clarify the subscripts and the text in section 2.2. \n* The negative sign in eq 7 is a typo. 
\n* x is the input being regressed or classified, whereas x_c (\"c\" is for context; we will clarify this) is the input that was used to create the embedding h_c stored in memory (with value v_c).\n\nWe will update the text to take into account all clarifications above. ", "Thank you for the comment and links to relevant work. Please find our response below (in order of the papers linked). \n\n(1) \"One Sentence One Model for Neural Machine Translation\":\n* We were not aware of this work. Thank you for the reference. The method is indeed very related in spirit to our approach; however, it has several important differences.\n\n* The objective of their work is, given a test sentence, to bias the model by fine-tuning it on a relevant context available in the training set. The crucial difference is that our method concentrates on enhancing powerful parametric models to include a fast adaptation mechanism to cope with changes in the task at hand (i.e. when the task itself changes). For this reason, our work tackles a larger range of problems than (1). Further, in incremental and continual learning we assume we do not have access to the original training data again, and thus (1) would not be applicable. The use of memory in MbPA alleviates the need for access to the (potentially large) training data. Below, we address the comparison in the language domain. \n\n* Locally fitting a context retrieved from a training set is only useful when the train and test distributions are the same. As the authors of (1) state, their method becomes very good when highly similar sentences are available in the training set. In the case of language modeling, given a partial sentence, the distribution over next words is naturally multi-modal. The important point of our approach is to quickly leverage the information available in the observed part of the test set in an online fashion, to capture distributional shifts (e.g. specifics of the writing style, frequent use of some specific set of words) and to disambiguate similar contexts present in the training data.\n\n* Another setting in which (1) could play an important role is when the capacity of the model is not large enough to properly fit the training set. For instance, replicating the approach in (1) on the ImageNet task: if we fully train a ResNet on the whole training set and then apply the local fitting at test time (as in (1)), very little gain would be observed, as it is rare to find very similar images in the training set and the plain parametric model achieves very high performance on the training set (about 95% top-1). \n\n* Other differences are: the weighting of the neighbors and the regularization term (obtained from our Bayesian interpretation) to prevent overfitting to the local neighbourhood. \n\n(2) \"Search Engine Guided Non-Parametric Neural Machine Translation\"\n\n* Thank you for raising this paper; we will certainly discuss similarities and differences to it in our related work. The core contribution of our paper, we feel, is how to incorporate information from memory into the final model predictions. In Gu et al. there are many interesting contributions, namely that of combining a non-differentiable search engine, differentiable attention, and incorporation of retrieved words into final predictions. The authors find that shallow mixing, i.e. interpolating probability distributions from memory and model, works best. 
We show in this paper that memory-based parameter adaptation is another competitive strategy to shallow mixing, and often works better (PTB for language modeling, ImageNet for image classification). As such, MbPA could be slotted into the search-engine guided translation model --- but we think this is best left for subsequent research projects. \n\nWe will update the paper to take into account these references and expand the literature review. ", "Thank you for your review. Please find below our response and clarifications.\n\n1) MbPA is built upon memory. How large should it be?\n\n* The optimal memory size is task dependent, but in general the larger the memory the better. However, performance saturates at a given point.\n* A nice property of the model (as shown in the continual and incremental learning setups) is that performance degrades gracefully as memory size decreases. For continual learning, even storing 1% of the data seen on a task boosts performance significantly. \n* One important aspect to note is that the smaller the memory, the more important it becomes to add regularization to prevent overfitting to the local context, as explained in Section 2.1. This is the case in the language modeling experiments.\n* For the ImageNet experiments we show how performance varies with memory size in Fig 6 (Appendix). We will include a similar evaluation for the continual learning and language modeling tasks.\n\n2) Is it efficient to retrieve neighbors for a given query?\n\n* In this work it is the cost of an exact nearest neighbour search, which is linear in the memory size. We see that the cost of retrieving neighbours is negligible compared to the rest of the model (e.g. the inner optimisation). For example, on PTB language modeling with a cache size of 5000, the content-based lookup takes about 20us, and each step of optimization takes about 1ms on one GPU. \n* Fast approximate kNN search can be used, but performance could degrade depending on the recall of the approximate search. This would be a nice direction for future work.\n* One of the advantages of not querying the memory at training time is that we avoid this cost.\n\n3) For each test, how many steps of MbPA do we need in general? \n\n* This is a hyper-parameter of the model. Across all tasks, we observed that a small number of iterations is sufficient, between 5 and 20. However, we see noticeable gains with even 1 step. \n\n4) Furthermore, it seems a bit unfair to me to retrain the deep model based on test inputs. It seems that you are implicitly using test data to fit the model.\n\n* Many algorithms have a clean split between train and test. They are unable to adapt to shifts in distribution. We are interested specifically in studying algorithms that are capable of adapting to domain shift, or of leveraging the temporal correlation during an evaluation episode.\n* We only do this in the language model example, which deals with quickly adapting to a change in the data distribution at test time. The effect of online adaptation during test time has long been studied in this task, with solutions dating back to Dynamic Evaluation (A. Graves’ thesis). Naturally, all these approaches use the test data in a causal way (as in online learning), meaning only the examples that have already been processed are available for training.\n* Note that we’re comparing with many models that also use the observed test samples to adapt their predictions. The data seen at each test example is thus consistent across all baselines. 
\n\nWe will update the text to take into account all clarifications above. \n", "Thank you for your review. Please find below our response and clarifications. The comment has been split into two to ensure we are under the comment character limit. \n\n1) Even though the paper addresses the interesting and challenging problem of slow adaptation when the distribution shifts, their episodic memory is quite similar (if not identical) to that of Pritzel et al., 2017. \n\n* Our memory module is indeed essentially the same as that of Pritzel et al, 2017, differing only in how the keys are obtained. The keys are embeddings computed from our parametric model (embedding + output networks) trained directly on the target task, instead of relying on gradients through the memory. Note that several other works (cited in the manuscript) use very similar memory architectures. We do not claim the memory as one of our contributions; instead, the novelty lies in the use of the memory as a way of enhancing powerful parametric models. We will further clarify this in the text.\n\n2) In addition, as the authors mentioned in the text, their model is also similar to Kirkpatrick et al., 2017, Finn et al., 2017, and Krause et al., 2017. It would be great if the authors could list \"explicitly\" the contributions of the paper in comparison with those. Right now, the text mentions some of the similarities, but they are spread across different sections and parts. \n\n* We will include a detailed description of our contributions, and consolidate (and expand) the discussion of the relation to previous work in Section 3.\n* The contributions of our work are: (i) we propose an architecture for enhancing powerful parametric models with a fast adaptation mechanism to cope with changes in the task at hand; (ii) we establish connections of our method with attention mechanisms frequently used for querying memories; (iii) we present a Bayesian interpretation of the method allowing a principled form of regularization; (iv) we evaluate the method on a range of different tasks: continual learning (pMNIST), incremental learning (ImageNet) and data distribution shifts (language), obtaining promising results.\n* The only similarity with Krause et al. is that we too use a memory buffer in the context of language modelling. Their method uses the memory via a mixture-of-experts system to deal with recent words for language models. We do compare to this baseline for our LM experiments; however, their method does not deal with the problem of distributional shifts and cannot be applied to continual or incremental learning setups. \n* Finn et al. devise MAML - a way of doing meta-learning over a distribution of tasks. Both methods extend the classic fine-tuning technique used in domain adaptation (e.g. fitting a given neural network to a small set of new data). Their algorithm aims at learning an easily adaptable set of weights, such that given a small amount of training data for a given task following the training distribution, the fine-tuning procedure would effectively adapt the weights to this particular task. Their work does not use any memory or per-example adaptation and is not based on a continual (life-long) learning setting. In contrast, our work aims at augmenting a powerful neural network with a fine-tuning procedure that is used at inference only. The idea is to enhance the performance of the parametric model while maintaining its full training.\n* EWC, developed in Kirkpatrick et al. 
2017, is a powerful method for continual learning across tasks. The algorithm works by learning a new task with an additional loss forcing the model to stay close to the solution found on the previous task. This method makes no use of memory or local adaptation, requiring instead the storing of weights and Fisher matrices for each task seen. We compare to this method for our continual learning tasks as a very competitive baseline. MbPA does not rely on storing past weights or Fisher matrices. We show comparable performance with even 100 examples stored per task and show how these methods are orthogonal and can be combined. One similarity we do note is that adding a regularization term to the local loss of MbPA can be seen as a local version or approximation of the EWC loss term - i.e. forcing the model to stay close to the solution found at training time. \n", "Thank you for your review. Please find below our response and clarifications. The responses have been split to ensure we are under the ICLR comment character limit. \n\n1) Algorithm 1 in its current form seems to imply an infinite memory, which the experiments make clear is not the case. Therefore: how does the algorithm decide what entries to discard when the memory fills up?\n\n* In the current implementation we simply treat the memory as a circular buffer, in which we overwrite the oldest data as the memory gets full. We will clarify this in the text.\n* Deciding what to store (or overwrite) is indeed a very interesting question that we did not explore and will address in future work. We evaluated a few heuristics (e.g. storing only examples with high training loss) that did not perform better than the circular buffer described above.\n\n2) In most non-trivial settings, the parameter $gamma$ of the encoding is learned, and therefore older entries in the memory lose any ability to be compared to more recent encodings. How do the authors handle this obsolescence of the memory, other than the trivial scheme of relying on KNN to only match recent entries?\n\n* Having a stable (or slow-changing) network is important for being able to have long-term recall. This could be justified (as the reviewer mentions) by the fact that lower-level parameters converge faster than those in the higher part of the network. Hence, some memory obsolescence at the beginning of training is inevitable. This is also the case in humans, as infantile amnesia could be explained by memories stored with an old (not yet consolidated) network that cannot be recovered later in life. We will include a short comment further clarifying this important point.\n* An alternative approach would be to rely on replay of raw data (e.g. store the input images as pixels). A downside is that, unlike internal activations (embeddings), replaying raw data requires a large amount of storage. However, many artificial systems do it (e.g. DQN for RL). If we store raw data, we could still base our look-ups on a distance in the embedding space in order to obtain a semantic (more relevant) metric. We would replay the memories to prevent catastrophic forgetting and periodically recompute the embeddings to keep them up to date. We did not implement this variant. \n\n3) Table 1: could the authors explain why the pre-trained Parametric (and then Mixture) models have the best AUC in the low-data regime, whereas MbPA was designed very much to be superior in such regimes?\n\n* Note that this happens only for the classes that were used during pre-training. 
The result makes sense: the initial parametric model performs very well on the classes it was pre-trained on. The memory is initially empty, so adapting the predictions of the parametric model (via MbPA or the mixture model) using a few examples slightly degrades its performance in the beginning. This quickly changes as more examples are collected.\n* On the other hand, for the new classes, relying on the memories massively improves performance even when few examples have been stored.\n\n4) Paragraph below equation (5), page 3: why not include the regularisation term, when the authors just went to great pains to explain it? Rationale? Not including it is also akin to using an improper non-informative prior on theta^x independent of theta, which is quite a strong choice to be made “by default”.\n\n* We wrote it this way for ease of explanation and develop it later in Section 2.1, where we discuss the Bayesian interpretation. We will change the text accordingly.", "\n\n5) The extra complexity of choosing the learning rate alpha_M and the number of MbPA steps is worrying this reviewer somewhat. In practice, in Section 4.1 the authors explain using grid search to tune the parameters. Is this reviewer correct in understanding that this search is done across all tasks, as opposed to only the first task? And if so, doesn’t this grid search introduce an information leak by bringing in information from the whole pre-determined set of tasks, therefore undermining the very “continual learning” aim? How does the algorithm perform if the grid search is done only on the first task?\n\n* We agree with the reviewer: setting the hyper-parameters this way would leak information from the future tasks. We do not do this in our experiments.\n* The hyper-parameters were obtained using different variants of permuted MNIST, following the standard practice for continual learning.\n* It is worth noting that we empirically found that MbPA is not very sensitive to the choice of other parameters such as the inner learning rate or number of steps (especially when combined with the regularization term or EWC). The tuning was required more for the EWC baseline, where there is a tradeoff between learning new tasks and remembering old ones based on the weighting of the loss. For MbPA for CL we found any number of steps between 5-10 worked well with high learning rates between 0.1 and 1.0. \n* For MbPA, we reported several hyper-parameters (e.g. memory size) to give a feel for the sensitivity of the algorithm. \n\n6) Figure 3: the text could clarify that the accuracy is measured across all tasks seen so far. It would be interesting to add a figure (in the Appendix) showing the evolution of the accuracy *per task*, not just the aggregated accuracy. \n\n* We will include this figure and clarify this in the text. \n* For the EWC baseline we find that (as mentioned above) the per-task curves are very different based on which tasks you care about more (e.g. trivially setting the EWC penalty to a high value would give near-perfect accuracy on the first task and no learning on the others). The only way to tune is, in fact, to look at the final average accuracy on another (validation) set of permuted pixels and then apply that to the final test version. For MbPA, we found it shows gradual forgetting across tasks based on how many examples are stored per task. 
\n\n7) In the related works linking neural networks to encoded episodic memory, the authors might want to include the stream of research on HMAX of Anselmi et al 2014 (https://arxiv.org/pdf/1311.4158.pdf), Leibo et al 2015 (https://arxiv.org/abs/1512.08457), and Blundell et al 2016 (https://arxiv.org/pdf/1606.04460.pdf).\n\n* Thank you for the links to relevant work - we will include all these references.\n\nWe will update the text to take into account all clarifications above and the typos mentioned. ", "there have been a few works on neural machine translation that are highly relevant to this work, such as \n\nhttps://arxiv.org/abs/1609.06490: per-example adaptation based on nearest neighbours without weighting\nhttps://arxiv.org/abs/1705.07267: memory-based nearest neighbour use without online adaptation of the parameters\n\nit would be good to see both (1) a discussion of how the proposed approach is more advantageous than these previous works (one of them over a year old), and (2) how the proposed approach compares against them.\n\nthere's a more recent related work (so perhaps no need to compare against it) in \n\nhttps://arxiv.org/abs/1709.08878\n\nit would be nice to discuss how it's related, especially since the paper above conducted experiments on language modelling (similar to this submission).\n\nNote that this is not a meta review but just a comment.", "Thanks for your comment. Please see below our response.\n\n(1) A valid point is made about memory constraints, which can be clarified in the text. When comparing MbPA against the Neural Cache we do use a bounded memory in both cases. We swept over memory sizes from 500 to 20,000. E.g. for WikiText2 the optimal memory size for the neural cache was 6,000, whereas for MbPA it was 5,000. We will update the paper with further details of hyper-parameter sweeps and optimal parameters. For the language modeling experiments it is apparent that the unbounded cache, which was posted on arXiv after the ICLR submission date, performs strictly worse than the bounded cache, and so we prefer to compare our method against the best-performing variant. As for the point about performance for a given LSTM: on PTB, MbPA produced the best results in our experiments. On WikiText2, we find that MbPA does not focus purely on the most recent terms, and thus the combination of all three models (LSTM baseline, MbPA and neural cache) produced the largest drop, of 15 perplexity points. \n\n(2) MbPA and MAML are indeed similar in that they aim to fine-tune or adapt a network with relevant data. The general idea of adapting a network to a relevant context is also not unique to these methods but an old idea: speaker adaptation, dynamic evaluation, Dyna2, etc.\nThe contribution of MAML is to train a base model through the task-specific optimization, to obtain a set of ‘easily tunable’ parameters that can be adapted to the task at hand, while MbPA adapts its parameters in an online fashion using its episodic memory.\n\nIn the continual learning setup (e.g. permuted MNIST), one does not have the ability to re-train on a previous task, and thus there is no way of applying MAML in this setting without access to a task oracle (which would give MAML privileged information).\n\nThe same is true for language modeling, where there are in fact no clear “tasks”, but a changing data distribution at test time which MbPA’s context-based lookup can cope with. Instead, for LM we compare MbPA to dynamic evaluation, which is a more relevant alternative method of local fitting. 
Conversely, applying MbPA to the MAML tasks would not work well on the first visit, as its memory would have no experience from the unseen task, but it would be able to cope with the task at test time. \n\n(3) This is a great list of references that we will be sure to discuss in an expanded literature review. Most, however, seem to have been published around or after the ICLR deadline and thus have not been addressed. We discuss the unbounded cache in comment (1) above.\n", "1) You compare against the cache model of Grave et al; however, their results depend on the size of the cache.\nThe comparison doesn't seem fair, as the neural cache has bounded memory, but I didn't catch the amount of memory deployed in your model. Furthermore, your results suggest that for a given LSTM the neural cache seemed to be better. Can you provide some details about the size of the cache used versus the amount of memory your model used?\n\n2) Since the method seems very related to MAML (Finn, Abbeel, and Levine 2017), a comparison against it would be good to see, where both methods are applicable.\n\n3) There has been other recent work on online modeling and adaptation using history/memory. A discussion about the relevance/similarity/differences between your model and these models would be great.\n\nUnbounded cache model for online language modeling with open vocabulary\nhttps://arxiv.org/pdf/1711.02604.pdf by Grave, Cisse and Joulin\n\nImproving One-Shot Learning through Fusing Side Information\nhttps://arxiv.org/pdf/1710.08347.pdf by Tsai and Salakhutdinov \n(This one uses attention on history, and doesn't seem fundamentally different from memory)\n\nMeta-Learning via Feature-Label Memory Network\nhttps://arxiv.org/pdf/1710.07110.pdf by Mureja, Park and Yoo\n\nLabel Organized Memory Augmented Neural Network \nhttps://arxiv.org/pdf/1707.01461.pdf by Shankar and Sarawagi\n\nOnline Adaptation of Convolutional Neural Networks for Video Object Segmentation\nhttps://arxiv.org/pdf/1706.09364.pdf by Voigtlaender and Leibe\n\n" ]
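Pulling together the discussion above (a weighted NLL over retrieved neighbours, a handful of gradient steps with a high learning rate, and a regularizer keeping the adapted weights near the trained ones), here is a minimal sketch of the inference-time adaptation. A linear-softmax output layer is assumed purely so the gradient is available in closed form; in the paper the output network is a neural network, and the hyperparameter values below are placeholders.

import numpy as np

def mbpa_adapt(theta, H, v, w, x_emb, steps=5, lr=0.5, lam=0.1):
    # theta: (d, C) trained output weights; H: (k, d) neighbour embeddings;
    # v: (k,) neighbour labels; w: (k,) kernel weights; x_emb: (d,) query.
    theta_x = theta.copy()
    for _ in range(steps):
        logits = H @ theta_x
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(v)), v] -= 1.0        # gradient of per-row softmax NLL
        grad = H.T @ (w[:, None] * p) + 2.0 * lam * (theta_x - theta)
        theta_x -= lr * grad
    return x_emb @ theta_x                    # adapted logits for the query

The adapted weights theta_x are discarded after the prediction, matching the per-example, inference-only nature of the method and leaving the trained parameters untouched.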
[ -1, 6, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ByEeLZ5xz", "iclr_2018_rkfOvGbCW", "iclr_2018_rkfOvGbCW", "iclr_2018_rkfOvGbCW", "SJAlIN6mf", "H1ymHgbmz", "iclr_2018_rkfOvGbCW", "rJyW0n4eM", "rktPEKveG", "ByINRE1zM", "HkJsPxmxG", "rktPEKveG", "ByEeLZ5xz", "ByEeLZ5xz", "iclr_2018_rkfOvGbCW", "B1tJwiMef", "iclr_2018_rkfOvGbCW" ]
iclr_2018_HJJ23bW0b
Initialization matters: Orthogonal Predictive State Recurrent Neural Networks
Learning to predict complex time-series data is a fundamental challenge in a range of disciplines including Machine Learning, Robotics, and Natural Language Processing. Predictive State Recurrent Neural Networks (PSRNNs) (Downey et al.) are a state-of-the-art approach for modeling time-series data which combines the benefits of probabilistic filters and Recurrent Neural Networks in a single model. PSRNNs leverage the concept of Hilbert Space Embeddings of distributions (Smola et al.) to embed predictive states into a Reproducing Kernel Hilbert Space, then estimate, predict, and update these embedded states using Kernel Bayes Rule. Practical implementations of PSRNNs are made possible by the machinery of Random Features, which map input features into a new space where dot products approximate the kernel well. Unfortunately, PSRNNs often require a large number of RFs to obtain good results, resulting in large models which are slow to execute and slow to train. Orthogonal Random Features (ORFs) (Choromanski et al.) are an improvement on RFs which has been shown to decrease the number of RFs required for pointwise kernel approximation. Unfortunately, it is not clear that ORFs can be applied to PSRNNs, as PSRNNs rely on Kernel Ridge Regression as a core component of their learning algorithm, and the theoretical guarantees of ORFs do not apply in this setting. In this paper, we extend the theory of ORFs to Kernel Ridge Regression and show that ORFs can be used to obtain Orthogonal PSRNNs (OPSRNNs), which are smaller and faster than PSRNNs. In particular, we show that OPSRNN models clearly outperform LSTMs and, furthermore, can achieve accuracy similar to PSRNNs with an order of magnitude fewer features.
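A sketch of the orthogonal random feature construction the abstract refers to, written for a Gaussian (RBF) kernel; the kernel choice is an assumption for illustration. Following Choromanski et al., the i.i.d. Gaussian rows of a standard random Fourier feature matrix are replaced by a random orthogonal set rescaled by chi-distributed norms, so that each row keeps the right marginal distribution.

import numpy as np

def orf_features(X, n_features, rng=None):
    # Random Fourier features sqrt(2/D) * cos(W x + b) with orthogonal rows of W.
    rng = rng if rng is not None else np.random.default_rng(0)
    d = X.shape[1]
    blocks = []
    for _ in range(-(-n_features // d)):           # ceil(n_features / d) blocks
        Q, R = np.linalg.qr(rng.standard_normal((d, d)))
        Q *= np.sign(np.diag(R))                   # sign fix: Q ~ Haar orthogonal
        norms = np.sqrt(rng.chisquare(d, size=d))  # match i.i.d. Gaussian row norms
        blocks.append(norms[:, None] * Q)
    W = np.vstack(blocks)[:n_features]
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)

Kernel ridge regression on either feature map then reduces to solving (Z^T Z + lambda I) beta = Z^T y in feature space; the paper's claim is that the orthogonal version reaches similar accuracy with roughly an order of magnitude fewer features.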
accepted-poster-papers
This submission presents the positive impact of using orthogonal random features instead of unstructured random features for predictive state recurrent neural nets. There has been some sentiment among the reviewers that the contribution is rather limited, but after further discussion with another AC and the PCs, we have concluded that it may be limited but is a solid follow-up to the previous work on predictive state RNNs.
train
[ "ByofVOOgG", "HJzahgcgf", "rJujSJjgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I was very confused by some parts of the paper that are simple copy-past from the paper of Downey et al. which has been accepted for publication in NIPS. In particular, in section 3, several sentences are taken as they are from the Downey et al.’s paper. Some examples :\n\n« provide a compact representation of a dynamical system\nby representing state as a set of predictions of features of future observations. » \n\n« a predictive state is defined as… , where… is a vector of features of future observations and ... is a vector of\nfeatures of historical observations. The features are selected such that ... determines the distribution\nof future observations … Filtering is the process of mapping a predictive state… »\nEven the footnote has been copied & pasted: « For convenience we assume that the system is k-observable: that is, the distribution of all future observations\nis determined by the distribution of the next k observations. (Note: not by the next k observations\nthemselves.) At the cost of additional notation, this restriction could easily be lifted. »\n«  This approach is fast, statistically consistent, and reduces to simple\nlinear algebra operations. » \n\nNormally, I should have stopped reviewing, but I decided to continue since those parts only concerned the preliminaries part.\n\nA key element in PSRNN is to used as an initialization a kernel ridge regression. The main result here, is to show that using orthogonal random features approximates well the original kernel comparing to random fourrier features as considered in PSRNN. This result is formally stated and proved in the paper.\n\nThe paper comes with some experiments in order to empirically demonstrate the superiority orthogonal random features over RFF. Three data sets are considered (Swimmer, Mocap and Handwriting). \n\nI found it that the contribution of the paper is very limited. The connexion to PSRNN is very tenuous since the main results are about the regression part. in Theorems 2 and 3 there are no mention to PSRNN.\n\nAlso the experiment is not very convincing. The datasets are too small with observations in low dimensions, and I found it not very fair to consider LSTM in such settings.\n\nSome minor remarks:\n\n- p3: We use RFs-> RFFs\n- p5: ||X||, you mean |X| the size of the dataset\n- p12: Eq (9). You need to add « with probability $1-\\rho$ as in Avron’s paper.\n- p12: the derivation of Eq (10) from Eq (9) needs to be detailed. \n\n\nI thank the author for their detailed answers. Some points have been clarified but other still raise issues. In particular, I continue thinking that the contribution is limited. Accordingly, I did not change my scores.", "The paper tackles the problem of training predictive state recurrent neural networks (PSRNN), which \nuses large kernel ridge regression (KRR) problems as a subprimitive, and makes two main contributions:\n- the suggestion to use orthogonal random features (ORFs) in lieu of standard random fourier features (RFFs) to reduce the size of the KRR problems\n- a novel analysis of the risk of KRR using ORFs which shows that the risk of ORFs is no larger than that of using RFFs\n\nThe contribution to the practice of PSRNNs seems significant (to my non-expert eyes): when back-propagation through time is used, using ORFs to do the two-stage KRR training needed visibly outperforms using standard RFMs to do the KRR. 
I would like the authors to have provided results on more than the current three datasets, as well as an explanation of how meaningful the MSEs are in each dataset (is an MSE of 0.2 meaningful for the Swimmer Dataset, for instance? the reader does not know a priori). \n\nThe contribution in terms of the theory of using random features to perform kernel ridge regression is novel, and interesting. Specifically, the authors argue that the moment-generating function for the pointwise kernel approximation error of ORF features grows slower than the moment-generating function for the pointwise kernel approximation error of RFM features, which implies that error bounds derived using the MGF of the RFM features will also hold for ORF features. This is a weaker result than their claim that ORFs achieve better error bounds, but close enough to be of interest, and it certainly indicates that their method is principled. Unfortunately, the proof of this result is poorly written:\n- equation (20) takes a long time to parse --- more effort should be put into making this clear\n- give a reference for the expressions given for A(k,n) in 24 and 25\n- (27) and (28) should be explained in more detail.\nMy staying power was exhausted around equation 31. The proof should be broken up into several manageable lemmas instead of its current monolithic and taxing form. \n", "This paper investigates the Predictive State Recurrent Neural Networks (PSRNN) model, which embeds the predictive states in a Reproducing Kernel Hilbert Space and then updates the predictive states given a new observation in this space.\nWhile PSRNNs usually use random features to map the states into a new space where the dot product approximates the kernel well, the authors propose to leverage orthogonal random features.\n\nIn particular, the authors provide a theoretical guarantee and show that the model using orthogonal features has a smaller upper bound on the failure probability regarding the empirical risk than the model using unstructured randomness. \n\nThe authors then empirically validate their model on several small-scale datasets where they compare their model with PSRNN and LSTM. They observe that PSRNN with orthogonal random features leads to lower MSE on the test set than both PSRNN and LSTM and seems to reach lower values earlier in training.\n\nQuestions:\n-\tWhat is the cost of constructing orthogonal random features compared to RFs?\n-\tWhat is the definition of H, the Hadamard matrix, in the discrete orthogonal joint definition?\n-\tWhat hyperparameter values are used for the LSTM?\n-\tEmpirical evaluations seem to use relatively small datasets composed of a few dozen temporal trajectories. Did you consider larger datasets for evaluation? \n-\tHow did you select the maximum number of epochs in Figure 5? It seems that the validation error is still decreasing after 25 epochs.\n\nPros:\n-\tProvides a theoretical guarantee for the use of orthogonal random features in the context of PSRNN\nCons:\n-\tEmpirical evaluation only on small-scale datasets.\n" ]
[ 4, 8, 7 ]
[ 5, 4, 2 ]
[ "iclr_2018_HJJ23bW0b", "iclr_2018_HJJ23bW0b", "iclr_2018_HJJ23bW0b" ]
iclr_2018_rJUYGxbCW
PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10.
accepted-poster-papers
The paper studies the use of PixelCNN density models for the detection of adversarial images, which tend to lie in low-probability parts of image space. The work is novel, relevant to the ICLR community, and appears to be technically sound. A downside of the paper is its limited empirical evaluation: there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to much higher-dimensional datasets, for instance, ImageNet. The paper would, therefore, benefit from empirical evaluations of the defense on a dataset like ImageNet.
train
[ "HJ9WQx6JG", "rJ4_WfuxM", "rJbiu3lbM", "BkMlqBTfG", "rklJKHaMf", "Sk-nvSTMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\nI read the rebuttal and thank the authors for the thoughtful responses and revisions. The updated Figure 2 and Section 4.4. addresses my primary concerns. Upwardly revising my review.\n\n====================\n\nThe authors describe a method for detecting adversarial examples by measuring the likelihood in terms of a generative model of an image. Furthermore, the authors prescribe a method for cleaning or 'santizing' an adversarial image through employing a generative model. The authors demonstrate some success in restoring images that have been adversarially perturbed with this technique.\n\nThe idea of using a generative model (PixelCNN) to assess whether a given image has been adversarially perturbed is a very interesting and understandable finding that may contribute quite nicely to the adversarial literature. One limitation of this method, however, is our ability to build successful generative models for high resolution images. However, I would be curious to know if the authors tried their method on high resolution images, regardless?\n\nMajor comments:\n1) Cross validation. Figure 2a is quite interesting and compelling. It is not clear from the figure if the 'clean' (nor the other data for that matter) is from the *training* or *testing* data for the PixelCNN model. I would *hope* that this is from the *testing* data indicating that these are the likelihood on unseen images?\n\nThat said, it would be interesting to see the *training* data on this plot as well to see if there are any systematic shifts that might make the distribution of adversarial examples less discernible.\n\n2) Adversary to PixelCNN. It is not clear why a PixelCNN may not be adversarially attacked, nor if such a model would be able to guard against an adversarial attack. I am not sure how well viable of strategy this may be but it is worth understanding or addressing to determine how viable this method for guarding actually is.\n\n3) Restorative effects of PixelDefend. I would like to see individual examples of (a) adversarial perturbation for a given image and (b) PixelDefend perturbation for that adversarial image. In particular, I would like to see how close (a) is the negative of (b). This would give me more confidence that this techniques is successfully guarding against the original attack.\n\nI am willing to adjust my rating upward if the authors are able to address some of the points above in a substantive manner. \n\n", "The paper describes the creative application of a density estimation model to clean up adversarial examples before applying and image model (for classification, in this setup). The basic idea is that the image is first moved back to the probable region of images before applying the classifier. For images, the successful PiexlCNN model is used as a density estimator and is applied to clean up the image before the classification is attempted.\n\nThe proposed method is very intuitive, but might be expensive if a naive implementation of PixelCNN is used for the cleaning. The approach is novel. It is useful that the density estimator model does not have to rely on the labels. Also, it might even be trained on a different dataset potentially.\n\nThe con is that the proposed methodology still does not solve the problem of adversarial examples completely.\n\nMinor nitpick: In section 2.1, it is suggested that DeepFool was the first optimization based attack to minimize the perturbation wrt the original image. In fact the much earler (2013) \"Intriguing Propoerties ... 
\" paper relied on the same formulation (minimizing perturbation under several constraints: changed detection and pixel intensities are being in the given range).", "The authors propose to use a generative model of images to detect and defend against adverarial examples. White-box attacks against standard models for image recognition (Resnet and VGG) are considered, and a generative model (a PixelCNN) is trained on the same data as the classifiers. The authors first show that adversarial examples created by the white-box attacks correspond to low likelihood region (according to the pixelCNN), which first gives a classification rule for detecting adversarial examples.\n\nThen, to turn the genrative model into a defensive algorithm, the authors propose to preprocess test images by approximately maximizing the likelihood under similar constraints as the attacker of images, to \"project\" adversarial examples back to high-density regions (as estimated by the generative model). As a heuristic method, the authors propose to greedily maximize the likelihood of the incoming images pixel-by-pixel, which is possible because of the specific form of the PixelCNN likelihood in the context of l-infty attacks. An \"adaptive\" version of the algorithm, in which the preprocessing is used only when the likelihood of an example is below a certain threshold, is also proposed.\n\nExperiments are carried out on Fashion MNIST and CIFAR-10. At a high level, the message is that projecting the image into a high density region is sufficient to correct for a significant portions of the mistakes made on adversarial examples. The main result is that this approach based on generative models seems to work even on against the strongest attacks.\n\nOverall, the idea proposed in the paper, using a generative model to detect and filter out spurious patterns that can appear in adversarial examples, is rather intuitive. The experimental result that adversarial examples can somehow be corrected by a generative model is also interesting. The design choice of PixelCNN, which allows for a greedy optimization seems reasonable in that setting.\n\nWhereas the paper is an interesting step forward, the paper still doesn't provide definitive arguments in favor of using such approaches in practice. There is a significant loss in accuracy on clean examples (2% on CIFAR-10 for a resnet), and more generally against weaker opponents such as the fast gradient sign. Thus, in reality, the experiments show that the pipeline generative model + classifier is robust against the strongest white box methods for this classifier, but on the other hand these methods do not transfer well to new models. This somewhat weakens the result, since robustness against these methods that do not transfer well is achieved by changing the model. \n", "Thanks for pointing out our mistake of quoting DeepFool as the first optimization based attack to minimize the perturbation w.r.t. the original image. We have corrected it in the revised version.", "Thank you for your review! Here is our answer to the following concern:\n\nQ: In reality, the experiments show that the pipeline generative model + classifier is robust against the strongest white box methods for this classifier, but on the other hand these (stronger attacking) methods do not transfer well to new models. This somewhat weakens the result, since robustness against these methods that do not transfer well is achieved by changing the model. 
\nA: Our assumption is that in real-world circumstances we seldom have the option to change our classification model in response to the adversarial attack being used. We tend to think that the underlying classification model is generally going to be fixed and reasonably hard to change (e.g., in deployed autonomous cars), where the adversary can easily test out the system to decide which attack to use against it. Therefore, it is important to defend against the strongest available attack.\n", "Thank you for your review! We would like to address each of your concerns point-by-point:\n\nQ: Generative models of high-resolution images.\nA: We agree that testing PixelDefend on high-resolution images is an important direction for future work. Although we haven't tried PixelDefend on higher-resolution images than CIFAR-10, we are hopeful that the PixelCNN can capture the approximate distributional properties needed to distinguish adversarial images even at resolutions where generating convincing samples becomes difficult. Experiments in the paper already provide one such piece of evidence: the samples given by the PixelCNN on CIFAR-10 are already bad as judged by humans (see Figure 9), while the samples for Fashion-MNIST (see Figure 8) are almost indistinguishable from the training dataset. However, PixelDefend on CIFAR-10 is just as effective as PixelDefend on Fashion-MNIST.\n\nQ: Training or testing data in Figure 2a.\nA: These are indeed likelihoods of *testing* data on unseen images. We have revised the figure to add likelihoods of *training* data as well.\n\nQ: Adversary to PixelDefend\nA: We agree that the above arguments are not definitive and consider theoretical justifications to be an important direction for future research. In Section 4.4, we gave a discussion and provided some empirical results for an attack on PixelDefend. The arguments can be briefly summarized as:\n * A naive attack of the PixelDefend purification process requires back-propagating thousands of repeated PixelCNN computations. This can lead to gradient vanishing problems, as validated by our experiments.\n * Maximizing the PixelCNN density with gradient-based methods is very difficult (as shown in Figure 5). Therefore such methods are not very amenable to generating adversarial images to fool a PixelCNN via gradient-based techniques.\n * The PixelCNN is trained independent of labels. Therefore, the perturbation direction that leads to higher-probability images has a smaller correlation with the perturbation direction that results in misclassification. This arguably makes attacking more difficult.\n\nQ: Restorative effects of PixelDefend.\nA: The goal of PixelDefend is not to undo the adversarial perturbations, but simply to avoid the problems they cause on the underlying classifier by pushing the image towards the nearest high-probability mode of the distribution. These changes may not in general undo the adversarial changes, but (as our results show) will push the images towards the classification region for the original underlying class.\n" ]
[ 7, 7, 7, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_rJUYGxbCW", "iclr_2018_rJUYGxbCW", "iclr_2018_rJUYGxbCW", "rJ4_WfuxM", "rJbiu3lbM", "HJ9WQx6JG" ]
iclr_2018_Bys4ob-Rb
Certified Defenses against Adversarial Examples
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but are often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that, for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most ϵ=0.1 can cause more than 35% test error.
accepted-poster-papers
The paper presents a differentiable upper bound on the performance of a classifier on an adversarially perturbed example (with a small perturbation in the L-infinity sense). The paper presents novel ideas, is well-written, and appears technically sound. It will likely be of interest to the ICLR community. The only downside of the paper is its limited empirical evaluation: there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to much higher-dimensional datasets, for instance, ImageNet. The paper would, therefore, benefit from empirical evaluations of the defenses on a dataset like ImageNet.
train
[ "SJlhZp8gf", "BJVgLg9xf", "SkwJQwogM", "rJ3juc1MM", "rkaLdqJMM", "rkjMdqkGM", "r1kqwc1Gz", "By245E-Wf", "r15EJeKyG", "SyBzxMU1z", "SJT_HG81G", "Byvbr5WJM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "This paper develops a new differentiable upper bound on the performance of classifier when the adversarial input in l_infinity is assumed to be applied.\nWhile the attack model is quite general, the current bound is only valid for linear and NN with one hidden layer model, so the result is quite restrictive.\n\nHowever the new bound is an \"upper\" bound of the worst-case performance which is very different from the conventional sampling based \"lower\" bounds. Therefore minimizing this upper bound together with a classification loss makes perfect sense and provides a theoretically sound approach to train a robust classifier.\nThis paper provides a gradient of this new upper bound with respect to model parameters so we can apply the usual first order optimization scheme to this joint optimization (loss + upper bound).\nIn conclusion, I recommend this paper to be accepted, since it presents a new and feasible direction of a principled approach to train a robust classifier, and the paper is clearly written and easy to follow.\n \nThere are possible future directions to be developed.\n\n1. Apply the sum-of-squares (SOS) method.\nThe paper's SDP relaxation is the straightforward relaxation of Quadratic Program (QP), and in terms of SOS relaxation hierarchy, it is the first hierarchy. One can increase the complexity going beyond the first hierarchy, and this should provides a computationally more challenging but tighter upper bound.\nThe paper already mentions about this direction and it would be interesting to see the experimental results.\n\n2. Develop a similar relaxation for deep neural networks.\nThe author already mentioned that they are pursuing this direction. While developing the result to the general deep neural networks might be hard, residual networks maybe fine thanks to its structure.", "The authors propose a new defense against security attacks on neural networks. The attack model involves a standard l_inf norm constraint. Remarkably, the approach outputs a security certificate (security guarantee) on the algorithm, which makes it appealing for security use in practice. Furthermore, the authors include an approximation of the certificate into their objective function, thus training networks that are more robust against attacks. The approach is evaluated for several attacks on MNIST data.\n\nFirst of all, the paper is very well written and structured. As standard in the security community, the attack model is precisely formalized (I find this missing in several other ML papers on the topic). The certificate is derived with rigorous and sound math. An innovative approximation based on insight into a relation to the MAXCUT algorithm is shown. An innovative training criterion based on that certificate is proposed. Both the performance of the new training objective and the tightness of the cerificate are analyzed empirically showing that good agreement with the theory and good results in terms of robustness against several attacks.\n\nIn summary, this is an innovative paper that treats the subject with rigorous mathematical formalism and is successful in the empirical evaluation. For me, it is a clear accept. The only drawback I see is the missing theoretical and empirical comparison to the recent NIPS 2017 paper by Hein et al.\n", "This paper derived an upper bound on adversarial perturbation for neural networks with one hidden layer. 
The upper bound is derived via (1) the mean value theorem; (2) replacing the mean value by the maximum (eq 4); (3) replacing the (local) maximum of the gradient by its global maximum (eq 5); (4) this leads to a non-convex quadratic program, for which the authors apply a convex relaxation similar to MAXCUT to upper bound the objective by an SDP, which can then be solved in polynomial time.\n\nThe main idea of using an upper bound (as opposed to a lower bound) is reasonable. However, I find there are some limitations/weaknesses of the proposed method:\n1. The method is likely not extendable to more complicated and more practical networks, beyond the ones discussed in the paper (i.e., with one hidden layer).\n2. The SDP, while tractable, would still require very expensive computation to solve exactly.\n3. The relaxation seems a bit loose - in particular, in steps 2 and 3 above, the authors replace the gradient value by a global upper bound on it, which, it seems to me, can be pretty loose.", "Thanks for the comments and thoughtful suggestions. Adding to the recommendations about the future work:\n\n1. Sum-of-squares (SOS) method -- It is indeed interesting to check whether a higher degree of SOS gives us sufficient tightness to significantly improve the results. An obvious bottleneck in trying this out is the expensive computation. Given that our objective is similar to MAXCUT, for which it is currently unknown whether higher-degree SOS relaxations give better approximation ratios, it is a priori unclear how much we could gain. \n\n2. Develop a similar relaxation for deeper networks -- We agree with the reviewer that this is an interesting direction to pursue. In fact, we have already begun implementing an algorithm that works for arbitrary depth networks. As described in the response to reviewer 1, the basic idea is that the adversarial loss for arbitrary ReLU networks can be written as a non-convex quadratic program, which can then be relaxed to an SDP and trained with similar ideas to the present paper. As the reviewer mentions, it’s possible that resnets have additional structure that can be exploited efficiently, but our current proposal handles resnets as well. \n", "Thanks for your interest in our work and the pointer to the relevant recent work by Hein and Andriushchenko. We have fixed this omission and include a discussion of the paper in the newest uploaded version of our work. To summarize the comparison: Firstly, their work focuses on perturbations in the l-2 norm while ours considers the l-infty norm. Hence, there is no direct way to compare the experimental results. Theoretically, the general bound proposed for any l-p norm perturbation is similar to what we have in our work. However, the main challenge is to efficiently evaluate this bound. Hein and Andriushchenko show how to do this for p=2. In our work, we consider the attack model where p = \\infty. This makes a significant difference in the computations involved.\n", "Thank you for the comments! From our understanding of your review, it seems that there are three concerns, which we highlight and address below. \n\n1. “The method is likely not extendable to more complicated and more practical networks, beyond the ones discussed in the paper (i.e., with one hidden layer)”\n\nOur general approach for obtaining networks with certified robustness can in fact extend to deeper networks. We have already begun implementing an algorithm that works for arbitrary depth networks. 
The basic idea is that the adversarial loss for arbitrary ReLU networks can be written as a non-convex quadratic program, which can then be relaxed to an SDP and trained with similar ideas to the present paper. We would be happy to give details if it would be helpful.\n\n2. “The SDP, while tractable, would still require very expensive computation to solve exactly.”\n\nWe would like to stress that we do not need to solve the SDP exactly. As discussed in Section 4 of our paper, our network can be trained via gradient descent on the dual. Even with inexact minimization of these dual variables, we get valid certificates. We acknowledge that training our model is slower than training regular networks, but it is not nearly as bad as if one had to exactly solve the SDP.\n\nWe also note that at test time, the trained dual variables directly provide a certificate (with no extra computation) and hence checking robustness at test time is no slower than generating predictions for a regular network. \n\n3. \"The relaxation seems a bit loose - in particular, in steps 2 and 3 above, the authors replace the gradient value by a global upper bound on it, which, it seems to me, can be pretty loose.\"\n\nAn important insight from our experiments is the following: while our SDP bound can be quite loose on arbitrary networks, optimizing against this SDP certificate leads to networks where this certificate is substantially tighter (as seen in Figure 3). Minimizing the SDP upper bound forces the optimizer to avoid regions where the bound is loose, as such points have higher objective values. Hence, the general looseness of the relaxation does not impede the utility of the relaxation as a way of obtaining provably robust networks.\n", "Thank you for your interest in our work! A crucial point of difference between the NIPS 2016 paper by Bastani et al. and our work is the following: Bastani et al. provide certificates only for values of \\epsilon that are small enough to ensure that the entire L-infty ball lies within the same linear region, i.e., the ReLUs do not switch signs across the L-infty ball. For the networks and values of \\epsilon we consider in our work, we found that most of the ReLUs cross signs and the L-infty balls do *not* lie within the same linear region. In contrast, our work provides certificates for all values of \\epsilon. \n\nAnother difference is that our training procedure minimizes a true upper bound on the adversarial loss, which is not the case in Bastani et al.’s work (like most other prior work). \n\nWe have updated the discussion of related work to include this paper. \n", "I am trying to understand the relation of this paper and \n\nMeasuring neural net robustness with constraints\nby Bastani et al. NIPS 2016\n\nIn that paper, as far as I understand, the authors do the following: They are given some input image x1 and some neural network (multiple layers, only ReLU activations) and let's say the output is also of the form sgn(w^T x), so a linear function of the previous layer. \nThe paper obtains a polytope around x1 that provably will produce the same output as x1. Further, they require that all the ReLUs keep the same sign (which is a limitation but it critically allows them to get linear inequalities in x). \n\nSo they can now solve an LP to find the best possible l_inf epsilon that guarantees the same output label around x1. \nIn that sense this is a certifiable region around a given input, for multi-layer neural nets. 
\n\nHow does the authors' work compare?\n\n", "Thanks for the comments and questions. \n\n1. When run on the CPU, our training takes about 1.5 times the training time of Madry et al. \nSciPy's max eigenvector computation runs on the CPU by default. We estimate that using a GPU implementation of Lanczos should also result in our training taking about 1.5 times the time taken by Madry et al. on the GPU. In general, our training is much slower than normal training. However, it's possible to speed things up using simple tricks. For example, using warm starts for the max eigenvector computations, by initializing with the solution of the previous iteration. \n\n2. For multi-layered networks, optimizing for the worst-case adversarial example subject to ReLU constraints can be written as a different Quadratic Program (where the ReLU constraints are encoded by quadratic constraints). This Quadratic Program can then be relaxed to a semidefinite program, like in our paper. We are currently exploring this idea empirically. \n\n3. The Madry et al. paper considers experiments with 10^5 restarts to exhaustively understand the optimization landscape. The actual attacks use between 1 and 20 restarts [Table 1 on page 12]. On the networks that we considered in our paper, we didn't find any decrease in accuracy on increasing the number of random restarts beyond 5. \n\nA really small value of the reg parameter results in a network that has high clean test accuracy but low robustness, and similarly a very large value led to a network that had really low clean accuracy. However, for intermediate values of regularization, we observed that the classification loss (multiclass hinge) and the regularization loss balance each other such that the worst-case adversarial accuracy @ \\eps = 0.1 remains nearly the same. [The adversarial accuracy depends on the ratio of the multiclass hinge loss to the regularization loss, which remains constant when the reg parameter is on the order of 0.05; we report results for this value in the paper.]", "Great work!\n\nSome questions:\n1. I'm curious -- how does the training time compare to that of Madry et al.? Also, how much longer does this take than just normal training -- given that you have to compute the maximum eigenvector at each update? \n\n2. Also, thoughts on generalizing this to multi-layered networks?\n\n3. The Madry et al. paper seems to consider 10^5 adv. examples or so, for training and attack. Are 5 random restarts sufficient to arrive at strong conclusions?\n \nGiven that the number of linear regions around a single point is fairly large, it looks like 5 would be small. But, since the comparisons are fair (5 restarts for each defense), this seems very promising. \n\nAlso, how does regularization affect the accuracy?", "Hi Seong Joon Oh, \n\nThanks for the pointer to the work by Matthias Hein and Maksym Andriushchenko. We'll cite this work in the next version of our paper. They propose a general bound on the p-norm of perturbations that are necessary to cause misclassification, but only show how to compute this bound (for two-layer neural networks) when p = 2.\nIn our work, we consider the attack model where p = \\infty. This makes a significant difference in the computations involved: spectral-type bounds hold for p = 2, but optimizing over the L_\\infty ball is typically more complex; we show how to efficiently do this in our work. 
\n\nAnother key point of difference is that Hein and Andriushchenko use a \"proxy\" (Equation 6) for the actual lower bound in the proposed training algorithm.\nIn general, there is no discussion of how this proxy relates to the actual derived bound (Equation 5). In our work, we propose a training algorithm that efficiently maximizes the proposed lower bound on the perturbation (or equivalently, in the language of our paper, minimizes an upper bound on the adversarial loss). ", "Thanks for the interesting paper! \nI wanted to leave a pointer to another NIPS'17 accepted paper:\n\nhttps://arxiv.org/abs/1705.08475\nFormal Guarantees on the Robustness of a Classifier against Adversarial Manipulation\nMatthias Hein, Maksym Andriushchenko\n\nLike the submission, Hein et al. also derive a Lipschitz bound for neural networks with 1 hidden layer (see Sec. 2.3), which looks similar to Eqs. 4 & 5 in the submission (modulo the Lp norm used, etc.). Indeed, this submission goes one step further to derive more bounds, but it would still be nice to discuss the difference.\n\nAlso a small note: the main paper is 10 pages. In my view, many intermediate inequalities (or proofs) could be deferred to the appendix -- to highlight the main argument." ]
[ 8, 8, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Bys4ob-Rb", "iclr_2018_Bys4ob-Rb", "iclr_2018_Bys4ob-Rb", "SJlhZp8gf", "BJVgLg9xf", "SkwJQwogM", "By245E-Wf", "iclr_2018_Bys4ob-Rb", "iclr_2018_Bys4ob-Rb", "iclr_2018_Bys4ob-Rb", "Byvbr5WJM", "iclr_2018_Bys4ob-Rb" ]
iclr_2018_BkJ3ibb0-
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies.
accepted-poster-papers
The paper studied defenses against adversarial examples by training a GAN and, at inference time, finding the GAN-generated sample that is nearest to the (adversarial) input example. Next, it classifies the generated example rather than the input example. This defense is interesting and novel. The CelebA experiments the authors added in their revision suggest that the defense can be effective on high-resolution RGB images.
train
[ "H17TwR4rM", "By-CxBKgz", "BympCwwgf", "rJOVWxjez", "SkbvmBamf", "Hy120kU7f", "Bkw8Ck8QG", "r1MMCyImM", "SyS0aJ8Xz", "r1D5pJ87f", "Bkbgpk87z", "S1bEhkU7G", "B1wgPVOzG", "S1c64RJzz", "ryW5rcl-f", "SkdMUQaAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public" ]
[ "B) C) Thanks for the additional experiments, I think they make the paper stronger. In particular they validate that scaling is proportional to L but not (linear in) to image size, and that the method works in RGB.\nD) OK.\nA) E) I still think that these additional experiments would help, but I am now marginally convinced that the authors expectations are correct.", "This paper presents Defense-GAN: a GAN that used at test time to map the input generate an image (G(z)) close (in MSE(G(z), x)) to the input image (x), by applying several steps of gradient descent of this MSE. The GAN is a WGAN trained on the train set (only to keep the generator). The goal of the whole approach is to be robust to adversarial examples, without having to change the (downstream task) classifier, only swapping in the G(z) for the x.\n\n+ The paper is easy to follow.\n+ It seems (but I am not an expert in adversarial examples) to cite the relevant litterature (that I know of) and compare to reasonably established attacks and defenses.\n+ Simple/directly applicable approach that seems to work experimentally, but\n- A missing baseline is to take the nearest neighbour of the (perturbed) x from the training set.\n- Only MNIST-sized images, and MNIST-like (60k train set, 10 labels) datasets: MNIST and F-MNIST.\n- Between 0.043sec and 0.825 sec to reconstruct an MNIST-sized image.\n? MagNet results were very often worse than no defense in Table 4, could you comment on that?\n- In white-box attacks, it seems to me like L steps of gradient descent on MSE(G(z), x) should be directly extended to L steps of (at least) FGSM-based attacks, at least as a control.", "This paper presents a method to cope with adversarial examples in classification tasks, leveraging a generative model of the inputs. Given an accurate generative model of the input, this approach first projects the input onto the manifold learned by the generative model (the idea being that inputs on this manifold reflect the non-adversarial input distribution). This projected input is then used to produce the classification probabilities. The authors test their method on various adversarially constructed inputs (with varying degrees of noise). \n\nQuestions/Comments:\n\n- I am interested in unpacking the improvement of Defense-GAN over the MagNet auto-encoder based method. Is the MagNet auto-encoder suffering lower accuracy because the projection of an adversarial image is based on an encoding function that is learned only on true data? If the decoder from the MagNet approach were treated purely as a generative model, and the same optimization-based projection approach (proposed in this work) was followed, would the results be comparable? \n\n- Is there anything special about the GAN approach, versus other generative approaches? \n\n- In the black-box vs. white-box scenarios, can the attacker know the GAN parameters? Is that what is meant by the \"defense network\" (in experiments bullet 2)?\n\n- How computationally expensive is this approach take compared to MagNet or other adversarial approaches? \n\nQuality: The method appears to be technically correct.\n\nClarity: This paper clearly written; both method and experiments are presented well. \n\nOriginality: I am not familiar enough with adversarial learning to assess the novelty of this approach. \n\nSignificance: I believe the main contribution of this method is the optimization-based approach to project onto a generative model's manifold. I think this kernel has the potential to be explored further (e.g. 
computational speed-up, projection metrics).", "The authors describe a new defense mechanism against adversarial attacks on classifiers (e.g., FGSM). They propose utilizing Generative Adversarial Networks (GANs), which are usually used for training generative models for an unknown distribution, but have a natural adversarial interpretation. In particular, a GAN consists of a generator NN G which maps a random vector z to an example x, and a discriminator NN D which seeks to discriminate between examples produced by G and examples drawn from the true distribution. The GAN is trained to minimize the max min loss of D on this discrimination task, thereby producing a G (in the limit) whose outputs are indistinguishable from the true distribution by the best discriminator. \n\nUtilizing a trained GAN, the authors propose the following defense at inference time. Given a sample x (which has been adversarially perturbed), first project x onto the range of G by solving the minimization problem z* = argmin_z ||G(z) - x||_2. This is done by SGD. Then apply any classifier trained on the true distribution on the resulting x* = G(z*). \n\nIn the case of existing black-box attacks, the authors argue (convincingly) that the method is both flexible and empirically effective. In particular, the defense can be applied in conjunction with any classifier (including already hardened classifiers), and does not assume any specific attack model. Nevertheless, it appears to be effective against FGSM attacks, and competitive with adversarial training specifically to defend against FGSM. \n\nThe authors provide less-convincing evidence that the defense is effective against white-box attacks. In particular, the method is shown to be robust against FGSM, RAND+FGSM, and CW white-box attacks. However, it is not clear to me that the method is invulnerable to novel white-box attacks. In particular, it seems that the attacker can design an x which projects onto some desired x* (using some other method entirely), which then fools the classifier downstream.\n\nNevertheless, the method is shown to be an effective tool for hardening any classifier against existing black-box attacks \n(which is arguably of great practical value). It is novel and should generate further research with respect to understanding its vulnerabilities more completely. \n\nMinor Comments:\nThe sentence starting “Unless otherwise specified…” at the top of page 7 is confusing given the actual contents of Tables 1 and 2, which are clarified only by looking at Table 5 in the appendix. This should be fixed. \n", "We have posted a revision with an additional Appendix (F) for new white-box experiments on the CelebA dataset, as well as minor changes to the text.", "We thank the anonymous commenter.\nWe have added some additional results on the CelebA dataset in Appendix F.\nRegarding the suggested new attack methods, we note that:\n1- We believe that this same exact point was raised by AnonReviewer3, and we kindly refer the commenter to part A of our reply to AnonReviewer3. \n2- It is not clear to us how to “output a wrong set of Z_L” and how to find an input x that will meet this criterion. \n(If by “output a wrong set of Z_L” the reviewer means to inject adversarial noise directly on the set of Z_L, then the attacker has gained access and infiltrated an intermediate step of the system and might as well directly modify the classifier output. 
This type of attack was never considered in this literature).\n3- We believe that the commenter mistakenly assumes the seed to be an external input accessible to and modifiable by the attacker. Even though, in Figure 1, the seed is depicted as an input to the system, it is never assumed that the attacker can modify the random seed. ", "We thank the anonymous commenter.\nWe have modified the title of Appendix B to reflect our claim that attacks based on gradient descent are difficult to perform. \nRegarding the modified CW optimization attack, our understanding is that the commenter is suggesting the following:\n\nMinimize (over x*, z*) CW loss(x, x*, G(z*)) + 0.1 ||G(z*) - x*||\n\nFirst of all, this problem is significantly more difficult to solve than the original CW formulation due to the dependence on x, x*, and G(z*). \nSecond, this formulation does not guarantee that when x* is input to the system, z* will be the output of the GD block, and an example “close” to an adversarial example is not necessarily adversarial itself. \nLastly, the random initialization of z in the GD block serves to add robustness and change the output every time.\n\nAll in all, we are extremely interested in further investigating new attack strategies, as Defense-GAN was shown to be robust to existing attack models. ", "We thank the anonymous commenter. Due to the recentness of the paper referred to by the commenter, we have not had the time to analyze it in detail. However, as noted in the paper (page 3), the attacks are actually generated using gradient descent, as is the case for all attacks used in our paper. \nThe mechanisms considered in APE-GAN and in our paper are very different. While MagNet and APE-GAN use a feedforward architecture for their “reconstruction” step, Defense-GAN employs an optimization-based projection onto the range of the generator, which holds a good representation of the true data. ", "We thank the anonymous commenter. The paper referred to by the commenter deals with a synthetic spheres dataset, which we believe is not applicable to the use of GANs. Our focus is on real-life datasets collected from real examples. Furthermore, due to the recentness of the paper, we have not had the time to analyze it in detail.", "We thank the reviewer for the insightful comments and discussions.\n\nA) Defense-GAN vs. MagNet vs. other generative approaches:\nWe believe that the MagNet auto-encoder suffers lower accuracy compared to Defense-GAN due to the fact that the “reconstruction” step in MagNet is a feed-forward network as opposed to an optimization-based projection as in Defense-GAN. Overall, the combination of MagNet and the classifier can be seen as one deeper classification network, and has a wider attack surface compared to Defense-GAN.\nAs suggested by the reviewer, if the MagNet decoder (or another generative approach) were treated as a generative model, and the same optimization-based projection approach was followed, the model with more representative power would perform better. From our experience, GANs tend to have more representative power, but this is still an active area of research and discussion. We believe that, since GANs are specifically designed to optimize for generative tasks, using a GAN in conjunction with our proposed optimization-based projection would outperform an encoder with the same projection method. However, this would be an interesting future research direction. 
In addition, we were able to show some theoretical guarantees regarding the use and representative power of GANs in equation (7).\n\nB) Black- and white-box attacks:\nIn our work and previous literature, it is assumed that in black-box scenarios the attacker does not know the classifier network or the defense mechanism (or any parameters thereof). The only information the attacker can use is the classifier output. \nIn white-box scenarios, the attacker knows the entire system including the classifier network, defense mechanisms, and all parameters (which, in our case, include the GAN parameters). By “defense network” in Experiments bullet 2, we mean the generator network. \n\nC) Computational complexity:\nDefense-GAN adds inference-time complexity to the classifier. As discussed in Appendix G (Appendix F in the original version of the paper), this complexity depends on L, the number of GD steps used to reconstruct images, and (to a lesser extent) R, the number of random restarts. At training time, Defense-GAN requires training a GAN, but no retraining of the classifier is necessary.\nIn comparison, MagNet also adds inference-time complexity. However, the time overhead is much smaller than Defense-GAN's, as MagNet is simply a feedforward network. At training time, the overhead is similar to Defense-GAN (training the encoder, no retraining of the classifier).\nAdversarial training adds no inference-time complexity. However, training time can be significantly larger than for other methods since re-training the classifier is required (preceded by generating the adversarial examples to augment the training dataset).\n", "We appreciate the constructive criticism and detailed analysis of our paper.\n\nA) Nearest-neighbor baseline:\nTaking the nearest neighbor of the potentially perturbed x from the training set can be seen as a simple way of removing adversarial noise, and is tantamount to a 1-nearest-neighbor (1-NN) classifier. On MNIST, a 1-NN classifier achieves an 88.6% accuracy on FGSM adversarial examples with epsilon = 0.3, found using the B substitute network. Defense-GAN-Rec and Defense-GAN-Orig average about 92.5% across the four different classifier networks when the substitute model is fixed to B. Similar trends are found for other substitute models. There is an improvement of about 4% by using Defense-GAN. It is also worth noting that in the case of MNIST, a 1-NN classifier works reasonably well (achieving around 95% on clean images). This is not the case for more complex datasets: for example, if the problem at hand is face attributes classification, nearest neighbors may not necessarily belong to the same class, and therefore NN classifiers will perform poorly.\n\nB) Only MNIST-sized images:\nBased on the reviewer’s suggestion, we have added additional white-box results on the Large-scale CelebFaces Attributes (CelebA) dataset in the appendix of the paper. The results show that Defense-GAN can still be used with more complex datasets including larger and RGB images. For further details, please refer to Appendix F in the revised version.\n\nC) Time to reconstruct images:\nWe agree with the reviewer that Defense-GAN introduces additional inference time by reconstructing images using GD on the MSE loss. However, we show its effectiveness against various attacks, especially in comparison to other simpler defenses. Furthermore, we have not optimized the running time of our algorithm, as it was not the focus of this work. 
This is a worthwhile effort to pursue in the future by trying to better utilize computational resources. \nPer the reviewer’s comment, we have timed some reconstruction steps for CelebA images (which are 15.6 times larger than MNIST/F-MNIST). For R = 2, we have:\nL = 10, 0.132 sec\nL = 25, 0.106 sec\nL = 50, 0.210 sec\nL = 100, 0.413 sec\nL = 200, 0.824 sec\nThe reconstruction time for CelebA did not scale with the size of the image.\n\nD) MagNet results are sometimes worse than no defense in Table 4:\nEven though it seems counter-intuitive that a defense mechanism can sometimes cause a decrease in performance, this stems from the fact that white-box attackers also know the exact defense mechanism used. In the case of MagNet, the defense mechanism is another feedforward network which, in conjunction with the original classifier, can be viewed as a new deeper feedforward network. Attacks on this bigger network can sometimes be more successful than attacks on the original network. Furthermore, MagNet was not designed to be robust against white-box attacks.\n\nE) Using L steps of white-box FGSM:\nPer our understanding, the reviewer is suggesting using iterative FGSM. We do agree that for a fair comparison, L steps of iterative FGSM could be used. However, we note that CW is an iterative optimization-based attack, and is more powerful than iterative FGSM. Since we have shown robustness against CW attacks in Table 4, we believe iterative FGSM results will be similar.\n", "We thank the reviewer for the constructive review and comments.\n\nA) Regarding the effectiveness against white-box attacks:\nAs the reviewer has pointed out, we have shown the robustness of our method to existing white-box attacks such as FGSM, RAND+FGSM, and CW. Indeed, a good attack strategy could be to design an x which projects onto a desired x* = G(z*). However, this requires solving for:\n\nFind x s.t. the output of the gradient-descent block is z*. \n\nPer our understanding, the reviewer’s suggestion is the following:\nFind a desired x* in the range of the generator which fools the classifier.\nFind an x which projects onto x*, i.e., such that the output of the GD block is z*, where G(z*) = x*. \nStep 1 is a more challenging version of existing attacks, due to the constraint that the adversarial example should lie in the range of the generator. While step 1 could potentially be solvable, the real difficulty lies in step 2. In fact, it is not obvious how to find such an x given x*. What comes to mind is attempting to solve step 2 using an optimization framework, e.g.:\nMinimize (over x, z*) 1\nSubject to G(z*) = x*\n z* is the output of the GD block after L steps.\n\nWe have shown in Appendix B that solving this problem using GD gets more and more prohibitive as L increases.\nFurthermore, since we use random initializations of z, if the random seed is not accessible by the attacker, there is no guarantee that a fixed x will result in the same fixed z every time after L steps of GD on the MSE. \nDue to these factors, we believe that our method is robust to a wide range of gradient-based white-box attacks. However, we are very much interested in further research of novel attack methods.\n\nB) We have fixed the minor comments by specifically mentioning the classifier and substitute models for every Table and Figure throughout the paper.\n", "This paper shows that models trained on a synthetic dataset are vulnerable to small adversarial perturbations which lie on the data manifold. 
Thus, at least for this dataset, it seems like a perfect generator would not perturb the adversarial example at all. Can the authors comment on what their proposed defense would do to fix these adversarial examples?\n\nhttps://openreview.net/forum?id=SyUkxxZ0b", "https://arxiv.org/pdf/1711.08478.pdf\nInstead of doing gradient descent, it might just help to attack directly.\nSee how easily APE-GAN cracks!!!!", "In your appendix you claim the combined model is hard to attack, but I suspect that might not be the case. \n\n1. CW is an optimization-based attack. \n\n2. If you just set up the CW optimization attack, and find some local minimum for z* that corresponds to an adversarial attack -- I suspect it might be pretty close to the z* you converge on after a few steps of GD. Perhaps worth a shot trying to just combine the two models and add ||G(z)-x|| as another term in the optimization objective. I suspect CW would work pretty well then. \n\nminimize CW loss function + 0.1*||z*-x|| \n\nsubject to y=f(x)\n z*=G(z) or something like this. ", "Have you tested your method on other datasets? I wonder if it works with datasets such as CIFAR. \n\nMoreover, it's not clear whether this method can defend against existing attacks without introducing new vulnerabilities. Here are some possible new attack methods:\n\n1- The generator can certainly output examples that are adversarial for the classifier. Hence, the attacker only needs to find such examples and perturb the input image to make it similar to them.\n\n2- The attacker can target the minimization block, which uses \"L steps of Gradient Descent.\" By forcing it to output a wrong set of Z_L, the rest of the algorithm (combination of generator/classifier) becomes ineffective, i.e., the minimization block can be the bottleneck. \n\n3- The algorithm takes as input a seed, along with the image. Since, for a given seed, the random number generator is deterministic, the attacker can test different seeds and use the one for which the algorithm fails. This attack may work even without perturbing the image. \n\n" ]
[ -1, 6, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Bkbgpk87z", "iclr_2018_BkJ3ibb0-", "iclr_2018_BkJ3ibb0-", "iclr_2018_BkJ3ibb0-", "iclr_2018_BkJ3ibb0-", "SkdMUQaAZ", "ryW5rcl-f", "S1c64RJzz", "B1wgPVOzG", "BympCwwgf", "By-CxBKgz", "rJOVWxjez", "iclr_2018_BkJ3ibb0-", "iclr_2018_BkJ3ibb0-", "iclr_2018_BkJ3ibb0-", "iclr_2018_BkJ3ibb0-" ]
iclr_2018_rkZvSe-RZ
Ensemble Adversarial Training: Attacks and Defenses
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.
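The "powerful novel single-step attack" mentioned in this abstract prepends a small random step to the usual gradient step (often written R+FGSM; the paper's ImageNet experiments use a least-likely-class variant, R+Step-LL). A rough sketch is below; the alpha = eps/2 split and the untargeted cross-entropy loss are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=16 / 255, alpha=8 / 255):
    # Step 1: small random step to escape the degenerate loss surface
    # that gradient masking creates right at the data point.
    x_prime = x + alpha * torch.sign(torch.randn_like(x))
    x_prime = x_prime.clamp(0, 1).detach().requires_grad_(True)
    # Step 2: ordinary FGSM step from the randomly perturbed point,
    # using the remaining perturbation budget eps - alpha.
    loss = F.cross_entropy(model(x_prime), y)
    grad, = torch.autograd.grad(loss, x_prime)
    x_adv = x_prime + (eps - alpha) * grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Because the gradient is taken away from the data point, this single-step attack sidesteps the small curvature artifacts that degenerate adversarial training exploits.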
accepted-poster-papers
The paper studies a defense against adversarial examples that re-trains convolutional networks on adversarial examples constructed to attack pre-trained networks. Whilst the proposed approach is not very original, the paper does present a solid empirical baseline for these kinds of defenses. In particular, it goes beyond the "toy" experiments that most other studies in this space perform by experimenting on ImageNet. This is important as there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to ImageNet. The importance of the baseline method studied in this paper is underlined by its frequent application in the recent NIPS competition on adversarial examples.
train
[ "BkM3vGDlf", "SJxF3VsxG", "S1suPTx-G", "rJgZKlFGf", "rySuOxYGG", "rynrIxYGz", "r1LmIgFGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes ensemble adversarial training, in which adversarial examples crafted on other static pre-trained models are used in the training phase. Their method makes deep networks robust to black-box attacks, which was empirically demonstrated.\n\nThis is an empirical paper. The ideas are simple and not surprising but seem reasonable and practically useful.\nEmpirical results look natural.\n\n[Strong points]\n* Proposed randomized white-box attacks are empirically shown to be stronger than original ones.\n* Proposed ensemble adversarial training empirically achieves smaller error rate for black-box attacks.\n\n[Weak points]\n* no theoretical guarantee for proposed methods.\n* Robustness of their ensemble adversarial training depends on what pre-trained models and attacks are used in the training phase.\n", "This paper describes computationally efficient methods for training adversarially robust deep neural networks for image classification. (These methods may extend to other machine learning models and domains as well, but that's beyond the scope of this paper.) \n\nThe former standard method for generating adversarially images quickly and using them in training was to do a single gradient step to increase the loss of the true label or decrease the loss of an alternate label. This paper shows that such training methods only lead to robustness against these \"weak\" adversarial examples, leaving the adversarially-trained models vulnerable to multi-step white-box attacks and black-box attacks (adversarial examples generated to attack alternate models).\n\nThere are two proposed solutions. The first is to generate additional adversarial examples from other models and use them in training. This seems to yield robustness against black-box attacks from held-out models as well. Of course, it requires that you have a somewhat diverse group of models to choose from. If that's the case, why not directly build an ensemble of all the models? An ensemble of neural networks can still be represented as a neural network, although a more computationally costly one. Thus, while this heuristic appears to be useful with current models against current attacks, I don't know how well it will hold up in the future.\n\nThe second solution is to add random noise before taking the gradient step. This yields more effective adversarial examples, both for attacking models and for training, because it relies less on the local gradient. This is another simple idea that appears to be effective. However, I would be interested to see a comparison to a 2-step gradient-based attack. R+Step-LL can be viewed as a 2-step attack: a random step followed by a gradient step. What if both steps were gradient steps instead? This interpolates between Step-LL and I-Step-LL, with an intermediate computational cost. It would be very interesting to know if R+Step-LL is more or less effective than 2+Step-LL, and how large the difference is.\n\nI like that this paper demonstrates the weakness of previous methods, including extensive experiments and a very nice visualization of the loss landscape in two adversarial dimensions. The proposed heuristics seem effective in practice, but they're somewhat ad hoc and there is no analysis of how these heuristics might or might not be vulnerable to future attacks.", "The paper proposes a modification to adversarial training. 
Instead of alternating between clean examples and examples generated on-the-fly by the fast gradient sign method during training, the model training is performed by alternating clean examples and adversarial examples generated from pre-trained models. The motivation behind this change is that one-step methods to generate adversarial examples fail at generating good adversarial examples when applied to models trained in the adversarial setting. In contrast, one-step methods applied to models trained only on natural data generate adversarial examples that transfer reasonably well, even to models trained with usual adversarial training. The authors also propose a slight modification to the fast gradient sign method, in which an adversarial example is created using a random perturbation and the current model's gradient, which seems to work better than the fast gradient sign method. Experiments with inception models on ImageNet show increased robustness against \"black-box\" attacks using held-out models not used in ensemble adversarial training.\n\nOne advantage of the method is that it is extremely simple. It uses pre-trained models that are readily available, and gains robustness against several well-known adversaries widely considered in the state of the art. The experiments are carried out on ImageNet and are seriously conducted.\n\nOn the negative side, there is a significant loss in accuracy, and the models are more vulnerable to white-box attacks than with standard adversarial training. As the authors discuss in the conclusion, this leaves open the question as to whether the models are indeed more robust, or whether it is an artifact of the static black-box attack schemes considered in the paper, which measure how much a single model is robust to adversarial examples for other models that were trained independently. For instance, there are no experiments against what is called adaptive black-box adversaries; one could also imagine finding adversarial examples that are trained to fool all models in a predefined collection of models. In the end, while the work presented in the paper found its use in the recent NIPS competition on defending against adversarial examples, it is still unclear whether this kind of defence would make a difference in critical applications.\n\n", "Our work, which was available on arXiv this year, has already inspired follow-up work from independent authors. In particular, the ensemble adversarially trained models that we publicly released during the NIPS competition on adversarial defenses (https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack) are already being used in multiple papers as baselines for evaluating attacks and building defenses. We will provide a link to our released models in the final version of our paper.\n\nIn addition to the models that we released, we believe that our observations on gradient masking in adversarial training are also very useful to the community. Indeed, prior techniques that exhibited this phenomenon (e.g., distillation) were somewhat more “obvious”, in that the defense technique explicitly promotes flat gradients. Our advice to systematically evaluate defenses on both white-box and black-box attacks (in Section 4.1) is being followed in many recent papers (e.g., https://openreview.net/forum?id=SyJ7ClWCb, https://openreview.net/forum?id=rJzIBfZAb, https://openreview.net/forum?id=S18Su--CW).\n\nBelow, we describe some of the papers that build upon our publicly released ensemble adversarially trained models. 
Note that these papers contain references to an earlier (non-anonymized) arXiv version of our work.\n\nAn independent submission to ICLR (https://openreview.net/forum?id=HknbyQbC-, avg. score of 5.2) considers black-box attacks based on GANs. The authors evaluate their attack against ensemble adversarial training and find that our defense outperforms both standard adversarial training and approaches with strong white-box robustness guarantees, on both MNIST and CIFAR10. This provides further evidence that our defense generalizes to attacks unseen during training (and also that it works well on CIFAR10).\n\nAnother independent submission based on our work is https://openreview.net/forum?id=Sk9yuql0Z (avg. score of 6.4). This paper describes the defense that ranked 2nd in the final round of the recent NIPS competition on adversarial examples. It prepends our publicly released ensemble adversarially trained model with randomized input transformations (image resizing and padding). The authors show that these transformations boost the robustness of the base model they are applied to, and are thus particularly effective when combined with ensemble adversarial training. \n\nFinally, a majority of the top-placed teams in the NIPS competition used similar strategies: they extended our ensemble adversarially trained models using techniques such as ensembling, randomized transforms, image compression, etc. We have added more information on the competition results in Section 4.2. The principal take-away is that defenses built upon Ensemble Adversarial Training attained high robustness even against the strongest black-box attacks submitted to the competition.", "Thank you for the constructive review.\n\n> On the negative side, there is a significant loss in accuracy\n\nWhile the drop in accuracy (on ImageNet) is not zero, it is small for the Inception ResNet v2 model (0.6% top1 and 0.3% top5) and somewhat larger for Inception v3 (1.6-2.2% top1/top5). However, there are few other defenses proposed on ImageNet to compare these numbers against. One concurrent submission (https://openreview.net/forum?id=SyJ7ClWCb) also aims at increasing white-box robustness at the cost of a much larger decrease in clean accuracy (10-15% top1).\n\n> the models are more vulnerable to white-box attacks than using standard adversarial training\n\nWe would like to clarify that our models are not more vulnerable to white-box attacks than those learned with standard adversarial training. While this appears to be the case on single-step attacks, it is due to the absence of a gradient masking effect, which is an intended consequence of ensemble adversarial training.\nFor iterative white-box attacks on ImageNet, we find that the robustness increase with adversarial training (whether the standard version or our ensemble variant) is only marginal compared to standard training. This was already observed in the \"Adversarial Training at Scale\" paper of Kurakin et al., ICLR'17. \nWhile there have been some recent successes in hardening models against white-box attacks, the techniques required are expensive and not currently applicable to large-scale problems such as ImageNet. 
Incidentally, an independent submission (https://openreview.net/forum?id=HyydRMZC-) shows that ensemble adversarial training is more robust than other adversarial training variants against white-box “spatially transformed” adversarial examples.\n\n> one could also imagine finding adversarial examples that are trained to fool all models in a predefined collection of models\n\nThank you for bringing this to our attention; we have updated our manuscript to clarify that we evaluated our models against adversarial examples that evade a collection of models. Specifically, we applied various attacks (including multi-step attacks like Step-LL, Iter-LL, PGD, etc.) to an ensemble of all of our holdout models on ImageNet (Inception V4, ResNet v1 and ResNet v2) and then transferred these examples to the adversarially trained models. We did not find this to produce a stronger attack and have clarified this point in our paper.\n\n> there are no experiments against what is called adaptive black-box adversaries\n\nFor adaptive attacks, there are few baselines to evaluate defenses against. The “substitute model” attack of Papernot et al. (https://arxiv.org/abs/1602.02697) is hard to scale to ImageNet (this was attempted in https://arxiv.org/abs/1708.03999). For MNIST, Papernot et al. report that their attack is mostly ineffective against an adversarially trained model. \nThere is also a concurrent submission that proposes an adaptive attack that attempts to “crawl” the model’s decision boundary (https://openreview.net/forum?id=SyZI0GWCZ). The attack requires a large number of calls to the black-box model (~10,000 per image) and is optimized for the l2 metric. We have attempted to transpose this attack to the l-infinity metric considered in our paper, but have not yet been able to find a set of hyperparameters that produces adversarial examples with small perturbations (even for undefended models). Further work in this direction will be very valuable for the community, and we believe our ensemble adversarially trained models can serve as a good baseline for evaluating new attacks.\n\nFor instance, ensemble adversarial training is used as a baseline in an independent submission (https://openreview.net/forum?id=HknbyQbC-) which considers black-box attacks based on GANs. The authors find that our defense outperforms both standard adversarial training and approaches with strong white-box robustness guarantees, on both MNIST and CIFAR10. This provides further evidence that our defense generalizes to unseen attacks (and also that it works well on CIFAR10).\n\nFinally, from a formal perspective, we discovered a natural connection between Ensemble Adversarial Training and Domain Adaptation, wherein a model is trained on multiple source distributions and evaluated on a different target distribution. Generalization bounds obtained in that literature transfer to our setting, and allow us to express some formal guarantees for future adversaries that are not significantly more powerful than the ones considered during training (Section 3.4 and Appendix B in our revised manuscript).\nAlthough these bounds are not as strong as some of the formal guarantees obtained for simpler tasks, we believe these results and this connection will be interesting to the community, as they are independent of the noise model (e.g., l-infinity perturbations) being considered.", "Thank you for the constructive review.\n\n> Of course, it requires that you have a somewhat diverse group of models to choose from. 
If that's the case, why not directly build an ensemble of all the models\n\nA large diversity of pre-trained models is not necessary for ensemble adversarial training. The main goal of our approach is to decouple the attack (the method used to produce adversarial examples) from the defense (the model being trained) so as to avoid the gradient masking issue. In this sense, even using a single pre-trained model is valuable and we indeed found this to be very effective on MNIST (Appendix C.2 in our revised manuscript).\nOf course, using multiple models will only increase the diversity of adversarial examples encountered during training. As shown by Liu et al. (ICLR’17), applying the FGSM to different ImageNet models generates very diverse perturbations (the gradients of different models are often close to orthogonal) but these perturbations still transfer between the models. Thus, although different models can produce very diverse attacks, simply ensembling these models is not necessarily a good defense strategy, as the same adversarial examples will fool most of the models in the ensemble. For instance, if we ensemble all the pre-trained ImageNet models we used, except for Inception v4, and then use a black-box FGSM attack computed on Inception v4, the ensemble's robustness is only marginally better than that of a single undefended model.\nWhen using Ensemble Adversarial Training with the Inception v3 architecture, we found that the marginal benefit of adding more pre-trained models is relatively low, thus also corroborating the fact that the main benefit of Ensemble Adversarial Training is in decoupling the attack procedure from the model being trained. We have clarified this point in our paper.\n\n> It would be very interesting to know if R+Step-LL is more or less effective than 2+Step-LL, and how large the difference is.\n\nWe thank the reviewer for the question about the 2-step Iter-LL attack, as it yields another nice illustration of the gradient masking effect. It turns out that for non-defended models and ensemble-adversarially trained models, a 2-step iterative attack is stronger than R+Step-LL, as is to be expected (the difference is roughly 10% top1/top5 accuracy). However, for standard adversarial training on Inception v3, R+Step-LL is stronger than the 2-step Iter-LL attack (by about 7% top1/top5 accuracy). Thus, this shows that the local gradient of the adversarially trained model is worse than a random direction from an optimization perspective. We added these results to Table 2 in our paper.\n\n> The proposed heuristics seem effective in practice, but they're somewhat ad hoc and there is no analysis of how these heuristics might or might not be vulnerable to future attacks\n\nWe thank you for raising the question of formal guarantees for future attacks (indeed our models remain vulnerable to white-box l-infinity attacks). Following your suggestion, we draw a connection between Ensemble Adversarial Training and the formal generalization guarantees obtained for Domain Adaptation, wherein a model is trained on multiple source distributions and evaluated on a different target distribution (Section 3.4 and Appendix B in our revised manuscript). While the resulting bounds may not necessarily be meaningful in practice, they do show that Ensemble Adversarial Training can provide formal guarantees for future adversaries of “similar power” to the ones considered during training. 
Some works manage to provide stronger guarantees than ours for small datasets (e.g., against all bounded l-infinity attacks), using techniques that appear out of reach for ImageNet-scale tasks. Yet, even extending these guarantees to arbitrary adversaries is a daunting task, given that we do not know how to define or enumerate the right sets of adversarial metrics. We believe that this connection to Domain Adaptation will be interesting to the community, as the resulting bounds are independent of the noise model (e.g., l-infinity perturbations) being considered.\n\nThere is also an independent submission (https://openreview.net/forum?id=HknbyQbC-) that proposes a different type of black-box attack based on GANs, which we did not consider in our paper. The authors evaluate their attack against ensemble adversarially trained models and find that our defense outperforms both standard adversarial training and approaches with strong white-box robustness guarantees, on both MNIST and CIFAR10. This provides further evidence that our defense generalizes to attacks unseen during training (and also that it works well on CIFAR10).", "Thank you for the constructive review.\n\n> Robustness of their ensemble adversarial training depends on what pre-trained models and attacks are used in the training phase\n\nWhile we agree that the robustness of ensemble adversarial training may depend on the choices of model architectures and attacks used, this is not fundamentally different from the meta-parameter choices faced with \"regular\" adversarial training, or even non-adversarial training. For instance, it has been shown that the choice of model architecture has a strong influence on how well regular adversarial training performs (e.g., see our MNIST experiments in Appendix C.2 of the revised manuscript). For the Inception v3 architecture, we find that ensemble adversarial training with two different sets of pre-trained models yields very similar results.\n\nRegarding the diversity of models used, we note that the main goal of ensemble adversarial training is to decouple the attack from the model being trained, in order to prevent gradient masking. Our MNIST experiments (Appendix C.2 of the revised manuscript) show that using a single pre-trained model with the same architecture as the model being trained is often a very effective form of ensemble adversarial training. We have emphasized the importance of decoupling gradients in our paper.\n\n> no theoretical guarantee for proposed methods.\n\nWe thank you for raising the question of formal guarantees for future attacks (indeed our models remain vulnerable to white-box l-infinity attacks). Following your suggestion, we draw a connection between Ensemble Adversarial Training and the formal generalization guarantees obtained for Domain Adaptation, wherein a model is trained on multiple source distributions and evaluated on a different target distribution (Section 3.4 and Appendix B in our revised manuscript). While the resulting bounds may not necessarily be meaningful in practice, they do show that Ensemble Adversarial Training can provide formal guarantees for future adversaries of “similar power” to the ones considered during training. Some works manage to provide stronger guarantees than ours for small datasets (e.g., against all bounded l-infinity attacks), using techniques that appear out of reach for ImageNet-scale tasks. 
Yet, even extending these guarantees to arbitrary adversaries is a daunting task, given that we do not know how to define or enumerate the right sets of adversarial metrics. We believe that this connection to Domain Adaptation will be interesting to the community, as the resulting bounds are independent of the noise model (e.g., l-infinity perturbations) being considered.\n" ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 2, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rkZvSe-RZ", "iclr_2018_rkZvSe-RZ", "iclr_2018_rkZvSe-RZ", "iclr_2018_rkZvSe-RZ", "S1suPTx-G", "SJxF3VsxG", "BkM3vGDlf" ]
iclr_2018_SJyVzQ-C-
Fraternal Dropout
Recurrent neural networks (RNNs) are an important class of architectures among neural networks, useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in the literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to the dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective, which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.
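Reading this abstract operationally: two weight-shared forward passes with independent dropout masks, a cross-entropy term for each, and an L2 penalty on the difference of pre-softmax outputs. The classification-style sketch below is an illustrative reading of that description, not the authors' released code; the kappa weight is a placeholder.

```python
import torch.nn.functional as F

def fraternal_dropout_loss(model, x, y, kappa=0.1):
    # Two forward passes in training mode sample two independent
    # dropout masks for the same (shared) parameters.
    logits1 = model(x)
    logits2 = model(x)
    # Average the usual losses of the two siblings...
    ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))
    # ...and penalize disagreement of the pre-softmax predictions,
    # which (up to a constant) estimates the prediction variance
    # under the dropout mask distribution.
    reg = kappa * F.mse_loss(logits1, logits2)
    return ce + reg
```

At inference time a single deterministic pass is used, as with ordinary dropout; the regularizer is meant to shrink the train/inference gap that the reviews below discuss.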
accepted-poster-papers
The paper studies a dropout variant, called fraternal dropout. The paper is somewhat incremental in that the proposed approach is closely related to expectation linear dropout. Having said that, fraternal dropout does improve a state-of-the-art language model on PTB and WikiText2 by ~0.5-1.7 perplexity points. The paper is well-written and appears technically sound. Some reviewers complain that the authors could have performed a more careful hyperparameter search on the fraternal dropout model. The authors appear to have partly addressed those concerns, which, frankly, I don't really agree with either. By doing only a limited hyperparameter optimization, the authors are putting their "own" method at a disadvantage. If anything, the fact that their method gets strong performance despite this disadvantage (compared to very strong baseline models) is an argument in favor of fraternal dropout.
train
[ "SJGZIlkSz", "rkblPhrgf", "SkmNLstxG", "rJJ2RIigz", "SkuiLrp7M", "rJ6YkqBfz", "ryNO0J2Wz", "S1Yt6udbz", "HJ4xA__bz", "HkN3T__Wf" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "The proposed method, fraternal dropout, is the version of self-ensembles (Pi model) for RNNs. The authors proved the fact that the regularization term for self-ensemble is worth for learning RNN models. The results of the paper show incredible performances from the previous state-of-the-art performances on language modeling. I tried to reproduce the performances, and it was easy to get performance with the released codes. Under the same optimizer (Averaged SGD), the performances of the proposed method are converged fast with the better values. I think that the experiments proving that self-ensemble methods also work on RNNs are worth.", "The authors present Fraternal dropout as an improvement over Expectation-linear dropout (ELD) in terms of convergence and demonstrate the utility of Fraternal dropout on a number of tasks and datasets.\n\nAt test time, more often than not, people apply dropout in deterministic mode while at training time masks are sampled randomly. The paper addresses this issue by trying to reduce the gap.\n\nI have 1.5 high level comments:\n\n- Dropout can be applied by averaging results corresponding to randomly sampled masks ('MC eval'). This should not be ignored, and preferrably included in the evaluation.\n\n- It could be made clearer why the proposed regularization would make the aforementioned gap smaller. Intuitively, the bias of the deterministic approximation (compared to the MC eval) should also play a role. It may be worth asking whether the bias changes? A possibility is that MC and deterministic evaluations meet halfway and with fraternal dropout MC eval is worse than without.\n\nDetails:\n\n- The notation is confusing: p() looks like a probability distribution, z looks like a latent variable, p^t and l^t have superscripts instead of Y having a subscript, z^t is a function of X. Wouldn't f(X_t) be preferrable to p^t(z_t)?\n\n- The experiments are set up and executed with care, but section 4 could be improved by providings details (as much as in section 5). The results on PTB and Wikitext-2 are really good. However, why not compare to ELD here? Section 5 leads the reader to believe that ELD would be equally good.\n\n- Section 5 could be the most interesting part of the paper. This is where different regularization methods are compared (by the way, this is not \"ablation\"). It is somewhat unfortunate that due to lack of computational resources the comparisons are made at a single hyperparameter setting.\n\nAll in all, the results of section 4 are clearly good, but are they better than those of ELD? Evaluation and interpretation of results in section 5 is made difficult by the omission of the most informative quantity which Fraternal dropout is supposed to be approximating.\n", "The paper proposes “fraternal dropout”, which passes the same input twice through a model with different dropout masks. The L2 norm of the differences is then used as an additional regulariser. As the authors note, this implicitly minimises the variance of the model under the dropout mask.\n\nThe method is well presented and adequately placed within the related work. The text is well written and easy to follow.\n\nI have only two concerns. The first is that the method is rather incremental and I am uncertain how it will stand the test of time and will be adopted.\n\nThe second is that of the experimental evaluation. 
The authors write that a full hyper-parameter search was not conducted for fear of having a more thorough evaluation than the baselines and erroneously reporting superior results.\n\nTo me, this is not an acceptable answer. IMHO, the evaluation should be thorough for both the baselines and the proposed method. If authors can get away with a substandard evaluation because the competing method did, the field might converge to substandard evaluations overall. This is clearly not in anyone's interest. I am open to the author's comments on this, as I understand that spending weeks on tuning a competing method is also not unbiased, and is work that could be avoided if all software was published.\n", "The proposed method fraternal dropout is a stochastic alternative to the expectation-linear dropout method, where part of the objective is for the dropout mask to have low variance. The first-order way to achieve lower variance is to have smaller weights. The second-order way is to have more evenly spread weights, so there is more concentration around the mean. As a result, it seems that at least part of the effect of explicitly reducing the variance is just a stronger weight penalty. The effect of dropout in the first place is the opposite, where variance is introduced deliberately. So I would like to see some comparisons between this method and various combinations of dropout rates and regular weight penalties.\n\nThis work is very closely related to expectation-linear dropout, except that you are now actually minimizing the variance: 1/2E[ ||f(s) - f(s')|| ] is used instead of E [ ||f(s) - f_bar|| ]. Eq 5 is very close to this, except the f_bar is not quite the mean, but the value with the mean dropout mask. So all the results should be compared with ELD.\n\nI do not think the method is theoretically well-motivated as presented, but the empirical results seem solid.\nIt is somewhat alarming how little the analysis has to do with neural networks and how dropout works, let alone RNNs, while the strongest empirical results are all on RNNs.\n\nI find the ideas interesting and valuable, especially in light of the strong empirical results, but the authors should do more to clarify what is actually happening.\n\nMinor: why use s_i and s_j, when there is never any reference to i and j? As far as I can tell, i and j serve as constants, more like s_1 and s_2.\n", "In line with our rebuttal, we have made the following changes to our paper to address the points raised by the reviewers:\n\nAs suggested by all the reviewers, we performed extensive grid search for an AWD-LSTM 3-layer architecture trained with either fraternal dropout (FD) or expectation linear dropout (ELD) regularizations, to further contrast the performance of these two methods. We have added Subsection 5.5 that summarizes these experiments. In these experiments we confirm that more extensive grid search leads to better results for our approach, and that FD converges faster than ELD.\n\nAs suggested by AnonReviewer2, Monte Carlo evaluation for RNNs with dropout in training mode was studied. We added a subsection in the appendix where our experiment is presented and our discussion with AnonReviewer2 from the rebuttal is summarized. 
We are thankful for the comments.\n\nAs suggested by AnonReviewer3, we described the reasons why we focus on RNNs for applying fraternal dropout as a regularizer (we added an additional subsection in the appendix).\n\nWe also have made minor changes such as adding missing citations and reordering some parts of our paper.\n\nFinally, we would like to note that the code for language modeling using fraternal dropout is released ( github.com/double-blind-submission/fraternal-dropout ) and the link can also be found in our paper. Hence, our results may be easily replicated (including SOTA results for the PTB and WT2 datasets) and/or tested in new configurations.\n", "Thank you for your quick response! As per your suggestion, we are currently running ELD experiments with grid search on the PTB dataset, as well as our model with grid search. So far ELD is on par with the baseline (with AR and TAR) and better than the baseline model without AR and TAR regularizations, as expected. Due to grid search, we have a slightly better score for fraternal dropout than the one reported in the paper (59.5 ppl on the validation set, 30 runs), and ELD (60.7 ppl on the validation set, 34 runs) is worse compared with fraternal dropout. Also, the convergence is better for fraternal dropout on the three-layer model, as was also reported in the paper for the single-layer model. We will update results (including scores for the test set) as more grid runs finish. \n\nWe would like to thank you for your explanation about MC evaluation. We have run experiments on the PTB dataset. We started with a simple comparison between fraternal dropout with the averaged mask and the AWD-LSTM baseline with a single fixed mask, which we call MC1. The MC1 model achieved on average 92.2 ppl on the validation set and 89.2 ppl on the test set. Hence, it would be hard to use the MC1 model in practice because a single sample is inaccurate. We also checked MC eval for a larger number of models (MC50). We used 50 models since we were not able to fit more on the single GPU we used. The final results for MC50 were on average 64.4 on the validation set and 62.1 on the test set. Hence, worse than the baseline which uses the averaged mask (60.7 on validation, 58.8 on test). For comparison, MC10 was also tested; the final results on the validation and test sets are 66.2 and 63.7, respectively. All experiments were performed using models without fine-tuning.\n", "In point 1 of the rebuttal, it is argued that generating a sequence from the model cannot be done by first sampling dropout masks, then generating sequences with each mask and finally averaging the predictions. We agree on this. However, to estimate the probability assigned by the model to a given sequence, averaging works fine. In fact, it is the most direct estimate for that probability, directly corresponding to the loss being optimized. 
Also, sampling from the model can simply be done by sampling a mask and generating a single sequence.\n\nYes, the alternative in point 2 would be broken.\n\nAs to the question 'why does fraternal dropout make the gap smaller?', the question is not whether it does so. The question was how MC eval changes. Does it get worse?\n\nPlease do add the ELD results for the 'SOTA experiments' to the paper regardless of how they turn out.\n\nAs to the hyperparameters, comparing the models in section 5 at a single hyperparameter setting has very high variance.\n", "We thank you for your constructive comments on our work. We would like to make it clear that our goal was not to design a tractable version of the original ELD objective, but to actually minimize the variance in predictions. We show the relationship with ELD only because it is one of the methods similar to ours. \n\nWe will now attempt to clarify that despite the superficial similarity, our method is different from ELD. The original version of ELD proposes to minimize the difference between the expected prediction using different masks and the prediction from the expected mask. Since this term is intractable, the authors of ELD propose a feasible version which has been shown to work well, but the relation between these two variants is unclear. To further illustrate this difference, consider predictions made by a linear model: y = m*w*x + b, where m is a Bernoulli mask (with probability of 1 being 0.5), w is a scalar weight, x is a scalar input and b is a scalar bias. Then clearly the original ELD objective is 0 and so the model is not penalized. But notice that the practical version of the ELD objective is non-zero, and so this version still penalizes the model. On the other hand, we show that our regularization, which is feasible, directly minimizes the variance in prediction and is upper bounded by the practical version of the ELD objective. \n\nRegarding the comment on enforcing small weights reducing variance, first, this weight decay effect from our regularization would directly only happen for the output embedding weight matrix. Secondly, it is unlikely that the weight decay term alone can lead to SOTA results. Thus while this weight decay may in part be helping our model, it cannot be the only factor.\n\nThe reviewer seems to be concerned about the proposed method and its analysis being general while we apply it mainly to RNNs. The fraternal dropout method is indeed general and may be applied in feed-forward architectures (as shown in the paper for the CIFAR-10 semi-supervised example). However, we believe that it is more powerful in the case of RNNs because:\n1. Variance in prediction accumulates across time steps in RNNs, and since we share parameters for all time steps, one may use the same kappa value at each step. In feed-forward networks the layers usually do not share parameters and hence one may want to use different kappa values for different layers (which may be hard to tune). The simple way to alleviate this problem is to apply the regularization term on the pre-softmax predictions only (as shown in the paper) or use the same kappa value for all layers. However, we believe that it may limit possible gains.\n2. The best-performing RNN architectures (state-of-the-art) usually use some kind of dropout (embedding dropout, word dropout, weight dropout etc.), very often with high dropout rates (even larger than 50% for the input word embedding in NLP tasks). However, this is not true for feed-forward networks. 
For instance, ResNet architectures very often do not use dropout at all (probably because batch normalization is often better to use). It can be seen in the fraternal dropout paper (semi-supervised CIFAR-10 task) that when unlabeled data is not used, regular dropout hurts performance and using fraternal dropout seems to improve just a little.\n\nAdditionally, regarding the comment that our method does not take into consideration how dropout works, notice that dropout can be seen as a form of data augmentation. From this perspective, our goal in general is to minimize the variance between predictions when different data augmentations are used. So indeed our method is general and does not depend on how dropout works (apart from the fact that the dropout masks are i.i.d.). For instance, an analogous regularization technique may be proposed where the difference in predictions for two different data augmentations is minimized (for example, the commonly used random flips or crops). But this solution would be applicable to image data only, and because it is not general enough we only briefly mention it in the updated version of the paper.", "Thank you for the detailed comments. Yes, our paper addresses the gap between the train and evaluation modes of dropout. As you mentioned, a well-known way to address this gap is to perform MC sampling of masks and average the predictions during evaluation, and this has been used for feed-forward networks. We would like to clarify that it is not straightforward or feasible to apply this trick to RNNs. In feed-forward networks, we average the output prediction scores from different masks. However, in the case of RNNs (for next-step prediction), there is more than one way to perform such evaluation, but each one is problematic. These are as follows:\n1. Let’s consider that we use a different mask each time we want to generate a sequence, and then we average the prediction scores, and compute the argmax (at each time step) to get the actual generated sequence. In this case, notice it is not guaranteed that the predicted word at time step t due to averaging the predictions would lead to the next word (generated by the same process) if we were to feed the time step t output as input to time step t+1. So this approach is not justified. For example, with different dropout masks, if the probabilities of the 1st time step outputs are: I (40%), he (30%), she (30%), and the probabilities of the 2nd time step outputs are: am (30%), is (60%), was (10%), then the averaged prediction score followed by argmax will result in the prediction “I is”, but this would be incorrect. A similar concern applies for output predictions varying in temporal length.\n2. Consider that we first make the prediction at time step 1 using different masks by averaging the prediction scores. Then we use this output to feed as input to time step 2, then use different masks at time step 2 to generate the output at time step 2, and so on. But in order to do so, because of the way RNNs work, we also need to feed the previous hidden state to time step 2. One way would be to average the hidden states over different masks at time step 1. But the hidden space can in general be highly nonlinear, and it is not clear if averaging in this space is a good strategy. 
Besides, this strategy as a whole is more time-consuming because we would need to sequentially make predictions with multiple masks at each time step.\n\nWe are not sure if we correctly understand the concern about why our method would not make the aforementioned gap smaller. From our perspective, it makes this gap smaller because we show our objective is directly equivalent to the variance in predictions using different dropout masks. Regarding the bias introduced in the objective due to our regularization, we are not sure how to evaluate it. Any suggestions on how to do this would be highly appreciated.\n\nThank you for your positive comment on our experiments. Regarding evaluating ELD for SOTA PTB and Wikitext-2, we show the comparison of our method with ELD in section 5 and point out that our convergence is faster. We do agree that a comparison for SOTA would also be interesting, and it may be possible that ELD is competitive in terms of final numbers with our method, but since it is slower in terms of convergence and it is computationally intensive to check all methods for SOTA, we only made comparisons with the related works in section 5. But we are currently running SOTA experiments for ELD and if we find the results to be different from what we claim in section 5, we will add them to our paper.\n\nWe provided all the hyper-parameter details for section 5 in the footnote because it was a one-layer architecture that we used for our ablation studies. For experiments in section 4, we used the exact same architectures and hyper-parameters as provided by Merity et al. 2017, so we do not mention them explicitly. Additionally, our code is now publicly available but it was not included in the submitted paper due to the double-blind submission ( github.com/double-blind-submission/fraternal-dropout ).\n", "Thank you for your comments. We would like to bring to the reviewer’s attention that fraternal dropout is not intended to be a tractable version of the original ELD objective; rather, the goal is to actually minimize the variance in predictions. These two are different objectives; hence we believe our method is not an incremental improvement over ELD. Specifically, the original ELD objective aims at enforcing the expected prediction over different masks to be roughly equal to the prediction using the expected mask. This is different from our objective of minimizing the variance across predictions using different masks, because it would make each prediction from a different mask be similar to the expected prediction over different masks. So our method does not make use of the expected mask anywhere in the computations. We hope this makes the subtle difference between the two methods clear. We establish the relationship between our method and the practical version of ELD (proposed by its authors) because of their similarity.\n\nWe completely agree that for a new model/architecture, a thorough grid search should be performed to ensure the evaluations are not substandard. However, we propose a regularization method that can be applied on top of existing models. Hence, we used a strong SOTA baseline, and added our single hyperparameter search on top of it. This means that we only tuned a very small subset of all the hyperparameters for our method; the rest were used “as is” from the baseline model, which was heavily tuned by Merity et al. The baseline we used in our paper was the previous SOTA. 
It is usually the case that hyper-parameter tuning alone does not substantially improve SOTA results on widely-studied benchmarks. Hence, the improvement in SOTA we get is a result of our proposed regularization, and additional grid search should only widen the performance gap. Additionally, our code is now publicly available so everyone can reproduce all our results and check other hyper-parameters ( github.com/double-blind-submission/fraternal-dropout )." ]
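For reference, the MC evaluation debated in the exchange above (MC1/MC10/MC50) averages predicted distributions over sampled dropout masks instead of using the single averaged mask. A minimal feed-forward sketch is below (num_samples = 1/10/50 corresponds to the MC1/MC10/MC50 runs reported above); as the thread notes, extending this to per-step averaging in RNNs is the problematic part, and this sketch does not attempt it.

```python
import torch

@torch.no_grad()
def mc_dropout_eval(model, x, num_samples=50):
    # Keep dropout sampling active at test time and average the
    # predicted distributions over masks. Caution: .train() also
    # switches batch-norm layers to training mode, which may be
    # undesirable for some architectures.
    model.train()
    probs = torch.stack([
        torch.softmax(model(x), dim=-1) for _ in range(num_samples)
    ]).mean(dim=0)
    model.eval()
    return probs
```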
[ -1, 5, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJyVzQ-C-", "iclr_2018_SJyVzQ-C-", "iclr_2018_SJyVzQ-C-", "iclr_2018_SJyVzQ-C-", "iclr_2018_SJyVzQ-C-", "ryNO0J2Wz", "HJ4xA__bz", "rJJ2RIigz", "rkblPhrgf", "SkmNLstxG" ]
iclr_2018_SJcKhk-Ab
Can recurrent neural networks warp time?
Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use \emph{ad hoc} gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues. We prove that learnable gates in a recurrent model formally provide \emph{quasi-invariance to general time transformations} in the input data. We recover part of the LSTM architecture from a simple axiomatic approach. This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new \emph{chrono initialization} is shown to greatly improve learning of long term dependencies, with minimal implementation effort.
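The "chrono initialization" named in this abstract sets the forget-gate bias to log(u) with u ~ U([1, T_max - 1]) and the input-gate bias to its negation, where T_max estimates the longest dependency range. The sketch below is one possible rendering for a single-layer PyTorch LSTM; the (input, forget, cell, output) chunk layout of the bias vectors is PyTorch-specific, so treat the indexing as an assumption to check against your framework.

```python
import torch
import torch.nn as nn

def chrono_init(lstm: nn.LSTM, t_max: float):
    """Chrono initialization of gate biases, assuming a single-layer
    nn.LSTM with (i|f|g|o) bias chunk layout and t_max > 2."""
    h = lstm.hidden_size
    with torch.no_grad():
        for name, p in lstm.named_parameters():
            if "bias" in name:
                p.zero_()  # bias_ih and bias_hh are summed, so zero both
        b = lstm.bias_ih_l0
        u = torch.empty(h).uniform_(1.0, t_max - 1.0)  # u ~ U([1, T_max - 1])
        b[h:2 * h] = torch.log(u)   # forget gate: spread forgetting times
        b[0:h] = -b[h:2 * h]        # input gate: the opposite value
    return lstm
```

The intent, per the abstract, is that the effective memory time scales at initialization roughly cover [1, T_max], so long dependencies do not have to be discovered by gradient descent from scratch.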
accepted-poster-papers
All the reviewers like the theoretical result presented in the paper, which relates the gating mechanism of LSTMs (and GRUs) to time invariance / warping. The theoretical result is great and is used to propose a heuristic for setting biases when time-invariance scales are known. The experiments are not mind-boggling, but none of the reviewers seem to think that's a showstopper.
test
[ "Sk2_qmcxf", "rk10EE5Vf", "HyqtEE54z", "SkrzwsKeG", "Hyb2BDI4G", "HkIzPXqxM", "ry_xKKQEM", "S12wnhz4f", "HyGJjR9QG", "ryPwnxFXf", "HJUl3etXf", "HyuqsxtXG" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary:\nThis paper shows that incorporating invariance to time transformations in recurrent networks naturally results in a gating mechanism used by LSTMs and their variants. This is then used to develop a simple bias initialization scheme for the gates when the range of temporal dependencies relevant for a problem can be estimated or are known. Experiments demonstrate that the proposed initialization speeds up learning on synthetic tasks, although benefits for next-step prediction tasks are limited.\n\nQuality and significance:\nThe core insight of the paper is the link between recurrent network design and its effect on how the network reacts to time transformations. This insight is simple, elegant and valuable in my opinion. \n\nIt is becoming increasingly apparent recently that the benefits of the gating and cell mechanisms introduced by the LSTM, now also used in feedforward networks, go beyond avoiding vanishing gradients. The particular structural elements also induce certain inductive biases which make learning or generalization easier in many cases. Understanding the link between model architecture and behavior is very useful for the field in general, and this paper contributes to this knowledge. In light of this, I think it is reasonable to ignore the fact that the proposed initialization does not provide benefits on Penn Treebank and text8. The real value of the paper is in providing an alternative way of thinking about LSTMs that is theoretically sound and intuitive. \n\nClarity:\nThe paper is well-written in general and easy to understand. A minor complaint is that there are an unnecessarily large number of paragraph breaks, especially on pages 3 and 4, which make reading slightly jarring.", "A generalization analysis, along with a curve showing precision vs warp range, with warps longer than those seen during training was added to the appendix.", "The generalization analysis, along with a curve showing precision vs warp range, with warps longer than those seen during training was added. We hope this is what you had in mind.", "tl;dr: \n - The paper has a really cool theoretical contribution. \n - The experiments do not directly test whether the theoretical insight holds in practice, but instead a derivate method is tested on various benchmarks. \n\nI must say that this paper has cleared up quite a few things for me. I have always been a skeptic wrt LSTM, since I myself did not fully understand when to prefer them over vanilla RNNs for reasons other than “they empirically work much better in many domains.” and “they are less prone to vanishing gradients”. \n\nSection 1 is a bliss: it provides a very useful candidate explanation under which conditions vanilla RNNs fail (or at least, do not efficiently generalise) in contrast to gated cells. I am sincerely happy about the write up and will point many people to it.\n\nThe major problem with the paper, in my eyes, is the lack of experiments specific to test the hypothesis. Obviously, quite a bit of effort has gone into the experimental section. The focus however is comparison to the state of the art in terms of raw performance. \n\nThat leaves me asking: are gated RNNs superior to vanilla RNNs if the data is warped?\nWell, I don’t know now. I only can say that there is reason to believe so. \n\nI *really* do encourage the authors to go back to the experiments and see if they can come up with an experiment to test the main hypothesis of the paper. E.g. 
one could make synthetic warpings, apply them to any data set and test if things work out as expected. Such a result would in my opinion be of much more use than the tiny increment in performance that is the main output of the paper as of now, and which will be stomped by some other trick in the months to come. It would be a shame if such a nice theoretical insight got swept under the carpet because of that. E.g. today we hold [Pascanu 2013] dear not because of the proposed method, but because of the theoretical analysis.\n\nSome minor points.\n- The authors could make use of fewer footnotes, and try to incorporate them into the text or appendix.\n- A table of results would be nice.\n- Some choices of the experimental section seem arbitrary, e.g. the choice of optimiser and the decision not to clip gradients. In general, the evaluation of the hyper-parameters is not rigorous.\n- “abruplty” -> “abruptly” on page 5, 2nd paragraph\n\n### References\n[Pascanu 2013] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. \"On the difficulty of training recurrent neural networks.\" International Conference on Machine Learning. 2013.", "The paper provides an interesting theoretical explanation of why gated RNN architectures such as the LSTM and GRU work well in practice. The paper shows how \"gate values appear as time contraction or time dilation coefficients\". The authors also point out the connection between the gate biases and the range of time dependencies captured in the network. From that, they develop a simple yet effective initialization method which performs well on different datasets.\n\nPros:\n- The idea is interesting; it helps explain the success of gated RNNs.\n- Writing: The paper is well written and easy to read. \n\nCons:\nExperiments: only small datasets were used in the experiments; it would be more convincing if the authors could use larger datasets. One suggestion to make the experiments more complete is to gradually increase the initial value of the biases to see how it affects the performance. To use 'chrono initialization', one needs to estimate the range of time dependency, which could be difficult in practice. \n", "Losses are computed on the evaluation set for all setups. We chose not to plot both the train and evaluation curves as both exhibit very similar trends. The difference between the architectures is not an effect of overfitting.\n\nFor generalizing to longer warpings than those of the training set, all networks display reasonably good (but not perfect) generalization. Even with warps 10 times longer than the training set warps, the networks still have decent accuracy, decreasing from 100% to around 75%. 
We tested this today, by training the architectures with max_warp=50, and evaluating on sequences with warps from 100 to 500.\n\nInterestingly, plain RNNs and gated RNNs display a different pattern: overall, gated RNNs perform better, but their generalization performance decreases faster with warps 8x-10x max_warp, while plain RNNs never have perfect accuracy (always below 80% even within the training set range) but have a flatter performance when going beyond the training set max_warp; the two curves cross at about 9x max_warp, at 75%-80% accuracy.", "I'm happy to see the added experiments. One question: In Fig. 1, are the reported losses computed on the test set? In any case, I think it would be interesting to include results for both training and test losses for these experiments.\n\nAnother question: what happens if the networks are tested on a test set with maximum_warping higher than that used during training?\n", "In response to the reviewers' insightful comments, two experiments precisely testing the invariance properties of simple recurrent, leaky and gated networks have been added, which validate the theoretical claim.", "Thank you for your insightful review. The lack of an experiment to test the main theoretical result of the paper was indeed a major flaw of the original version. We have updated the paper accordingly, by adding experiments with pure warpings/pure paddings to the main track of the paper (pages 6-8), and moving less central experiments to the appendix. We hope this experiment is more or less what you had in mind. The results are in line with the theoretical derivations: plain RNNs cannot account for warpings, leaky RNNs can\naccount for uniform time scalings but not irregular warpings, and gated RNNs can adapt to irregular warpings.", "Thank you for your constructive comments! As for larger-scale\nexperiments, we had to make choices and decided to focus on the setup suggested by Reviewer 3, namely, to center the experiments specifically on the theoretical claims of Section 1. This is now included in the text, while MNIST, pMNIST and next-step prediction have been moved to the appendix instead. We are aware this is not what you suggested, but as pointed out by Reviewer 3, we think this is more in line with the central claim of this work.", "Thank you for the constructive review of our paper. We've noted your\nremarks on the limited benefits of the method for next-step prediction,\nalong with those of reviewer 3. Consequently, we have revised the paper to include an experiment on pure warping/padding that specifically tests the main theoretical claim made in the paper. Accordingly, MNIST, pMNIST and next-step prediction have been moved to the appendix instead." ]
[ 8, -1, -1, 8, -1, 8, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJcKhk-Ab", "iclr_2018_SJcKhk-Ab", "Hyb2BDI4G", "iclr_2018_SJcKhk-Ab", "ry_xKKQEM", "iclr_2018_SJcKhk-Ab", "S12wnhz4f", "HyuqsxtXG", "iclr_2018_SJcKhk-Ab", "SkrzwsKeG", "HkIzPXqxM", "Sk2_qmcxf" ]
iclr_2018_HyUNwulC-
Parallelizing Linear Recurrent Neural Nets Over Sequence Length
Recurrent neural networks (RNNs) are widely used to model sequential data but their non-linear dependencies between sequence elements prevent parallelizing training over sequence length. We show the training of RNNs with only linear sequential dependencies can be parallelized over the sequence length using the parallel scan algorithm, leading to rapid training on long sequences even with small minibatch size. We develop a parallel linear recurrence CUDA kernel and show that it can be applied to immediately speed up training and inference of several state of the art RNN architectures by up to 9x. We abstract recent work on linear RNNs into a new framework of linear surrogate RNNs and develop a linear surrogate model for the long short-term memory unit, the GILR-LSTM, that utilizes parallel linear recurrence. We extend sequence learning to new extremely long sequence regimes that were previously out of reach by successfully training a GILR-LSTM on a synthetic sequence classification task with a one million timestep dependency.
accepted-poster-papers
The paper presents a way in which linear RNNs can be computed (fprop, bprop) using parallel scan. The authors show big speedups and demonstrate applications to very long sequences. Reviews were generally favorable.
val
[ "Hkr1wGOeG", "ry7sCqtgM", "SyAgjAtgG", "rJIWI2PWM", "BycaVhwZz", "r1T7-3PbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper focuses on accelerating RNN by applying the method from Blelloch (1990). The application is straightforward and thus technical novelty of this paper is limited. But the results are impressive. \n\nOne concern is the proposed technique is only applied for few types of RNNs which may limit its applications in practice. Could the authors comment on this potential limitation?", "# Summary and Assessment\n\nThe paper addresses an important issue–that of making learning of recurrent networks tractable for sequence lengths well beyond 1’000s of time steps. A key problem here is that processing such sequences with ordinary RNNs requires a reduce operation, where the output of the net at time step t depends on the outputs of *all* its predecessor. \nThe authors now make a crucial observation, namely that a certain class of RNNs allows evaluation in a non-linear fashion through a so-called SCAN operator. Here, if certain conditions are satisfied, the calculation of the output can be parallelised massively.\nIn the following, the authors explore the landscape of RNNs satisfying the necessary conditions. The performance is investigated in terms of wall clock time. Further, experimental results of problems with previously untacked sequence lengths are reported.\n\nThe paper is certainly relevant, as it can pave the way towards the application of recurrent architectures to problems that have extremely long term dependencies.\nTo me, the execution seems sound. The experiments back up the claim.\n\n## Minor\n- I challenge the claim that thousands and millions of time steps are a common issue in “robotics, remote sensing, control systems, speech recognition, medicine and finance”, as claimed in the first paragraph of the introduction. IMHO, most problems in these domains get away with a few hundred time steps; nevertheless, I’d appreciate a few examples where this is a case to better justify the method.", "This paper abstracts two recently-proposed RNN variants into a family of RNNs called the Linear Surrogate RNNs which satisfy Blelloch's criteria for parallelizable sequential computation. The authors then propose an efficient parallel algorithm for this class of RNNs, which produces speedups over the existing implements of Quasi-RNN, SRU, and LSTM. Apart from efficiency results, the paper also contributes a comparison of model convergence on a long-term dependency task due to (Hochreiter and Schmidhuber, 1997). A novel linearized version of the LSTM outperforms traditional LSTM on this long-term dependency task, and raises questions about whether RNNs and LSTMs truly need the nonlinear structure.\n\nThe paper is written very well, with explanation (as opposed to obfuscation) as the goal. Linear Surrogate RNNs is an important concept that is useful to understand RNN variants today, and potentially other future novel architectures.\n\nThe paper provides argument and experimental evidence against the rotation used typically in RNNs. While this is an interesting insight, and worthy of further discussion, such a claim needs backing up with more large-scale experiments on real datasets.\n\nWhile the experiments on toy tasks is clearly useful, the paper could be significantly improved by adding experiments on real tasks such as language modelling.", "We contest the limited technical novelty of this work. It is true that parallel scan is \"a key primitive in many parallel algorithms\"[1] and has been heavily studied and optimized. 
Parallel linear recurrence is a lesser-known application of the widely popular parallel scan algorithm. Neural nets are hugely dependent on high-performance parallel computational primitives such as matrix multiplication and convolution. We believe the first application of this classic parallel algorithm to a field dependent on fast parallel algorithms is a novel idea; otherwise someone else would have published this paper in the 30+ years that both parallel linear recurrence and RNNs have existed.\n\nBeyond the new architectures introduced in the paper, we applied parallel linear recurrence (PLR) to SRU and QRNN and note that it could also be applied to strongly-typed RNNs. Further, we show that PLR can also accelerate (the currently uninvestigated) architectures built on h_t = A_t h_{t-1} + x_t for square matrices A_t (see the scan sketch after these reviews).\n\nThe broader question is \"how limiting is it that PLR cannot accelerate LSTMs, GRUs, vanilla RNNs, or other non-linear RNN models?\". We do not think this will limit the applicability of PLR within RNNs. A significant amount of recent research (listed below in [2]) has matched or surpassed the performance of non-linear RNNs with models with only linear sequential dependency. Given this body of research, our belief has shifted from \"RNNs depend on sequential non-linearity\" to \"there is no evidence that sequential non-linearity is necessary, and there is a fair amount of evidence it is not necessary\". With this in mind, we believe PLR's incompatibility with non-linear RNNs is not a major practical limitation as we expect linear surrogate RNNs to continue growing in popularity due to their fast training times and good performance. We also think this work will accelerate the growing popularity of linear surrogate RNNs.\n\n[1]\nhttp://people.cs.vt.edu/yongcao/teaching/cs5234/spring2013/slides/Lecture10.pdf\n\n[2]\nSequential models with linear dependencies with experimental\nperformance on par with non-linear RNNs. Most models listed trained in\nsignificantly less time than non-linear RNNs.\n\nStrongly-typed RNNs https://arxiv.org/abs/1602.02218 (language\nmodelling)\n\nByteNet https://arxiv.org/abs/1610.10099 (state-of-the-art (SotA)\ncharacter-level language model on Hutter Prize, SotA character-to-character\nmachine translation on WMT)\n\nQuasi-RNN https://arxiv.org/abs/1611.01576 (sentiment classification,\nlanguage modelling, machine translation)\n\nConvolutional Sequence to Sequence Learning\nhttps://arxiv.org/abs/1705.03122 (machine translation, outperforms\nLSTM)\n\nAttention Is All You Need https://arxiv.org/abs/1706.03762 (SotA\nmachine translation on WMT)\n\nWaveNet https://arxiv.org/abs/1609.03499 (high-fidelity audio\ngeneration)\n\nSimple Recurrent Unit https://arxiv.org/abs/1709.02755 (matches or\noutperforms LSTM on sequence classification, question answering,\nlanguage modelling, machine translation, speech recognition). PLR\ncan significantly accelerate already-fast SRU training.\n", "We agree that you can often \"get away with\" backprop through time\n(BPTT) truncated at several hundred time steps for many sequential\nproblems, even when the inherent sequence length of the data is very\nlong.\n\nSome problems which can benefit from additional sequence length:\n\n* Medical waveforms are often sampled at greater than 1 kHz. This means\n relatively short recordings create very long sequences. These\n sequences may be used for a sequence classification task, which makes\n it difficult to use truncated BPTT. 
Sequence classification on very\n long sequences must either handle the entire sequence, classify\n subsequences (suboptimal as the label may only be determined by part of\n the sequence), or down-sample the sequence data (suboptimal because\n it loses information). The 2016 PhysioNet Challenge\n (https://physionet.org/challenge/2016/) involved classifying EEGs\n sampled at 2 kHz for 5-120s for a total of 10K-240K events per\n sequence. It would be difficult to apply neural nets to such a\n problem without a technique to parallelize over timesteps. An even\n more extreme dataset is 90 minutes @ 30 kHz (= 160 million steps) of\n neural recordings of a mouse: http://data.cortexlab.net/dualPhase3/ .\n In general, consider some task involving sensor data. Now consider the\n same phenomenon but measured at 10X the frequency. There is now\n more information available, but this additional information is only accessible if\n the researcher has tools that can deal with a 10X longer sequence with\n 10X longer dependencies.\n\n* Example future machine learning task: Generate a (text) review of a\n 2+ hour movie, including comments on dialogue and\n cinematography. Even with significant downsampling of both frames\n and audio, a 2-hour movie contains 7200 frames at 1 frame/sec and an\n average of 9000 words\n (http://kaylinwalker.com/long-winded-actors-and-movies-with-the-most-dialogue/).\n We believe parallel sequential methods would be hugely useful for\n such a task.\n\n* I am not an expert, but I believe reinforcement learning on long\n episodes with sparse rewards could benefit from less episode truncation.\n", "We feel the very impressive performance of SRUs and QRNNs on a variety of large-scale tasks demonstrates the applicability and usefulness of our work. We could have replicated their results in language modelling and machine translation with faster training times, but we believe that showing large speedup factors for these models is sufficient evidence for the value of parallel linear recurrence.\n\nWe argue more strongly that non-linearity in the recurrence is unnecessary than we do that \"rotation free\" RNNs are just as powerful as RNNs with non-diagonal weight matrices. However, SRUs are \"rotation free\" linear recurrences with performance equal or superior to LSTMs and other non-linear RNNs on 6 sequence classification datasets, the SQuAD question answering dataset, Penn Treebank language modelling, Switchboard-1 speech recognition, and WMT English->German translation.\n" ]
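The exchange above turns on the recurrence h_t = a_t * h_{t-1} + x_t being evaluable by parallel scan. A minimal sketch of the underlying idea, assuming the elementwise (diagonal A_t) case: the per-step affine maps compose under an associative operator, so they can be reduced as a balanced tree of O(log T) depth instead of a length-T chain. All names below are illustrative; this is not the authors' CUDA kernel.

```python
import numpy as np
from functools import reduce

def combine(left, right):
    # Compose h -> a1*h + x1 (earlier step) with h -> a2*h + x2 (later
    # step): the result is h -> (a2*a1)*h + (a2*x1 + x2). This operator
    # is associative, which is exactly the condition Blelloch's scan needs.
    a1, x1 = left
    a2, x2 = right
    return a2 * a1, a2 * x1 + x2

T, d = 16, 4
rng = np.random.default_rng(0)
a = rng.uniform(size=(T, d))   # elementwise (diagonal) decay coefficients
x = rng.normal(size=(T, d))    # per-step inputs

# Reference: sequential evaluation of h_t = a_t * h_{t-1} + x_t, h_0 = 0.
h = np.zeros(d)
for t in range(T):
    h = a[t] * h + x[t]

# Any reduction order gives the same answer, so on parallel hardware the
# T elements can be combined as a balanced tree of depth O(log T). Here
# we just fold left to verify against the sequential loop.
_, h_scan = reduce(combine, [(a[t], x[t]) for t in range(T)])
assert np.allclose(h, h_scan)
```

Swapping `combine` into a work-efficient Blelloch scan (or a GPU kernel, as the paper does) changes only the reduction order, not the result.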
[ 6, 7, 7, -1, -1, -1 ]
[ 3, 2, 4, -1, -1, -1 ]
[ "iclr_2018_HyUNwulC-", "iclr_2018_HyUNwulC-", "iclr_2018_HyUNwulC-", "Hkr1wGOeG", "ry7sCqtgM", "SyAgjAtgG" ]
iclr_2018_HkTEFfZRb
Attacking Binarized Neural Networks
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original \emph{defensive distillation} procedure that led to \emph{gradient masking}, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.
accepted-poster-papers
The paper was well written and the rebuttal was well thought out and convincing. The reviewers agree that the paper showed BNNs were good (relatively speaking) at resisting adversarial examples. Some question was raised about whether the methods would work on larger datasets and models; the authors offered some experiments to this end in the rebuttal. Also, a public comment appeared to follow up on CIFAR and report correlated results.
train
[ "BkSH7A5Qz", "Sy5hsUOlG", "H1EWH1KxG", "HkNP3wvWM", "BJO40TcmM", "ByKGyXMfz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We thank the reviewers for their positive and constructive feedback. We believe that we have addressed all of the main questions and concerns in the most recent revision of the paper. These are detailed below:\n\nR2 - Higher dimensional data\n\nTo confirm that our findings hold for higher dimensional data, and further show that performance on clean inputs does not necessarily translate to the same on adversarial examples, we conducted ImageNet experiments with AlexNet and ResNet-18 using 1-4 bits for weights and activations. Models were trained as in DoReFa-Net by Zhou et al., 2016, with the addition of L2 weight decay on the first layer (conv0) of all models which isn’t quantized. This L2 norm penalty is inspired by the boundary tilting perspective (Tanay & Griffin, 2016). A regularization constant of 1e-5 was used for AlexNet and 1e-4 for ResNets.\n\nWe found that:\n\nFor AlexNet, despite some accuracy degradation for clean inputs, all low-precision variants had the same or less top-1 error against FGSM, and all but two had less top-5 error across a typical range of perturbation magnitudes (epsilon in [2.0 - 16.0]). A binarized AlexNet had 4.5% and 8.0% less top-1 and top-5 error respectively than a full-precision equivalent for epsilon=2.0, and performed the same or better when sweeping across the full range of epsilon. \n\nResNet experienced a 6.5% and 4.2% reduction in top-1 and top-5 error respectively on clean inputs when going to 2-bits, however this performance gap shrinks to within +/- 0.2% for FGSM with epsilon=4.0 (in favour of low-precision for top-1 error, and in favour of full-precision for top-5). A binarized ResNet was slightly less optimal than the 2-bit case, but still managed to reduce the performance gap on clean inputs, resulting in 3.4% higher top-5 error, but 0.4% lower top-1 error, for FGSM with epsilon=4.0, as compared to full-precision.\n\nThe small 3x3 kernels in ResNets are less likely to be unique for very low bitwidths, as there are only 512 possible binary 3x3 kernels, vs 65k possible binary 4x4 kernels. This could explain some of the differences between binarizing AlexNet which uses a mix of large and small kernels, vs ResNet which makes exclusive use of small kernels.\n\nAlthough one of the main contributions of DoReFa-Net was to train with low bitwidth gradients, we’re reporting the 32-bit gradient case here as this is what was done originally in our paper, and is more representative of “black-box” vulnerability in an ideal FGSM transfer attack. Models trained with low bitwidth gradients (e.g 6-bits) further reduced top-5 error under FGSM by 6-7% on AlexNet, however this gain was found to be caused by gradient masking, as it was overcome by subsequently attacking with 32-bit gradients.\n\nOur preference is to report these results here informally to address the questions/concerns of R2, to keep the paper at a reasonable length, and because the use of mixed precision and larger dataset could be seen as a departure from the original scope of the paper (c.f. the instructions to authors). The majority of related works report CIFAR-10 and MNIST since adversarial robustness is generally an unsolved problem even in low-dimensions (e.g see https://github.com/MadryLab/mnist_challenge and https://github.com/MadryLab/cifar10_challenge). 
Several of the targeted attacks and adversarial training methods used in the paper (e.g. PGD, CWL2, the Papernot black-box transfer attack) are currently very slow to run on ImageNet.\n\nIn the time since submission, we have also been analyzing the difference between compression by low-precision parameters and compression by other means such as pruning. Pruning introduces sparsity, which can result in loss of rank in weight matrices, effectively removing coordinate axes with which to express the decision boundary and causing it to collapse and lie near the data, leaving the model vulnerable to adversarial examples. Low precision is less likely to result in loss of rank unless used in conjunction with very small kernels.\n\nR1 - Stochastic BNN vs Stochastic NN\n\nWe conducted but did not report some additional experiments for the full-precision case where weights are sampled from a Gaussian distribution with a learned mean. This type of model achieved 20% accuracy against 100 iterations of CWL2, compared to 70% for SBNNs. Having weights flip signs is more destructive and noisy w.r.t. the progress of an iterative attack. Weights sampled from a Gaussian are less likely to change sign between iterations. In the literature outside low-precision implementations, \"stochastic NNs\" typically refers to stochastic activations (e.g. [Tang and Salakhutdinov 2013, Raiko et al. 2014]) and this is likely what R1 was referencing. While we agree with the reviewer that networks with stochastic activations are indeed a relevant comparison, we have not yet carried out the experiments.", "\nThis paper starts by gently going over the concept of adversarial attacks on neural networks (black box vs white box, reactive vs proactive, transfer of attacks, linearity hypothesis), as well as low-precision nets and their deployment advantages. \nAdversarial examples are introduced as a norm-measurable deviation from natural inputs to a system. We are reminded of adversarial training, and of the fact that binarized nets are highly non-linear due to the nature of their weights and activations.\n\nThis paper then proposes to examine the robustness of binarized neural networks to adversarial attacks on MNIST and CIFAR-10.\n\nThe quantization scheme used here is v32 Conv2D -> ReLU -> BNorm -> sign -> bit Conv2D -> ReLU -> Scalar -> BNorm -> sign, but sign is really done with the stochastic quantization method of Courbariaux et al., even at test time, in order to make it more robust (a sketch of this stochastic quantization follows these reviews).\n\nWhat the experimental results show:\n- High-capacity BNNs are usually more robust to white-box attacks than normal networks, probably because the gradient information that an adversary would use becomes very poor as training progresses.\n- BNNs are harder to properly train with adversarial examples because of the polarized weight distribution that they induce.\n- Against black-box attacks, it seems there is little difference between NNs and BNNs.\n\nSome comments and questions:\n- In figure 1 I'm not sure what \"Scalar\" refers to, and it is not explained in the paper (nor could I find it explained in Papernot et al 2017a).\n- Do you adopt the \"Shift based Batch Normalizing Transform\" of Courbariaux et al? If not, why?\n- It might be worth at least _quickly_ explaining what the 'Carlini-Wagner L2 from CleverHans' is rather than simply offering a citation with no explanation. Idem for the 'smooth substitute model black-box misclassification attack'. 
We often assume our readers know most of what we know, but I find this is often not the case and can discourage the many newcomers to our field.\n- \"Running these attacks to 1000 iterations [...], therefore we believe this targeted attack represents a fairly substantial level of effort on behalf of the adversary.\" While true for us researchers, computational difficulty will not be a criterion to stop for state actors or multinational tech companies, unless it can be proven that e.g. the number of iterations needs to grow exponentially (or in some other unreasonable way) in order to get reliable attacks.\n- \"MLP with binary units 3 better\", 'as in Fig.' is missing before '3', or something of the sort.\n- You say \"We postpone a formal explanation of this outlier for the discussion.\" but really you're explaining it in the next paragraph (unless there's also another explanation I'm missing). Training BNNs with adversarial examples is hard.\n- You compare stochastic BNNs with deterministic NNs, but not with stochastic NNs. What do you think would happen? Some of the arguments that you make in favour of BNNs could also maybe be applied to stochastic NNs.\n\nMy opinions on this paper:\n- Novelty: it certainly seems to be the first time someone has tackled BNNs and adversarial examples.\n- Relevance: BNNs can be a huge deal when deploying applications; it makes sense to study their vulnerabilities.\n- Ease of understanding: To me the paper was mostly easy to understand; yet, considering there is no page limit in this conference, I would have buffed up the appendix, e.g. to include more details about the attacks used and how various hyperparameters affect things.\n- Clarity: I feel like some details are lacking that would hinder reproducing and extending the work presented here. Mostly, it isn't always clear why the procedures and hyperparameters were chosen (w.r.t. the model being a BNN).\n- Method: I'm concerned by the use of MNIST to study such a problem. MNIST is almost linearly separable, has few examples, and given the current computational landscape, much better alternatives are available (SVHN for example if you wish to stay in the digits domain). Concerning black-box attacks, it seems that BNNs are less beneficial in a way; trying more types of attacks and/or delving a bit deeper into that would have been nice. The CIFAR-10 results are barely discussed.\n\nOverall I think this paper is interesting and relevant to ICLR. It could have stronger results both in terms of the datasets used and the variety of attacks tested, as well as some more details concerning how to perform adversarial training with BNNs (or why that's not a good idea).", "1) Summary\nThis paper proposes a study of the robustness of one class of low-precision neural networks - binarized neural networks (BNNs) - against adversarial attacks. Specifically, the authors show that these low-precision networks are not just efficient in terms of memory consumption and forward computation, but also more immune to adversarial attacks than their high-precision counterparts. 
They support this claim with experiments based on black-box and white-box adversarial attacks, without the need to artificially mask gradients.\n\n\n2) Pros:\n+ Introduced, studied, and supported the novel idea that BNNs are robust to adversarial attacks.\n+ Showed that BNNs are robust to the Fast Gradient Sign Method (FGSM) and Carlini-Wagner attacks in white-box adversarial attacks by presenting evidence that BNNs either outperform or perform similarly to the high-precision baseline against the attacks.\n+ Insightful analysis and discussion of the advantages of using BNNs against adversarial attacks.\n\n3) Cons:\nMissing full-precision model trained with PGD in section 3.2:\nThe authors mention that the full-precision model would also likely improve with PGD training, but do not have the numbers. It would be useful to have such numbers to make a better evaluation of the BNN performance in the black-box attack setting.\n\n\nAdditional comments:\nCan the authors provide additional analysis on why BNNs perform worse than full-precision networks against black-box adversarial attacks? This could be insightful information that this paper could provide if possible.\n\n\n4) Conclusion:\nOverall, this paper provides insightful information about BNNs that shows the additional benefit of using them besides lower memory consumption and efficient computation. This paper shows that the architecture used for BNNs makes them less susceptible to known white-box adversarial attack techniques.\n", "This work presents an empirical study demonstrating that binarized networks are more robust to adversarial examples. The authors follow the stochastic binarization procedure proposed by Courbariaux et al. The robustness is tested with various attacks such as the fast gradient sign method and the projected gradient method on MNIST and CIFAR.\n\nThe experimental results validate the main claims of the paper on some datasets. While reducing the precision can intuitively improve the robustness, it remains unclear if this method would work on higher dimensional inputs such as ImageNet. Indeed: \n\n(1) state-of-the-art architectures on ImageNet such as residual networks are known to be very fragile to precision reduction. Therefore, reducing the precision can also reduce the robustness, as it is positively correlated with accuracy. \n\n(2) Compressing reduces the size of the hypothesis space explored. Therefore, larger models may be needed to make this method work for higher dimensional inputs. \n\nThe paper is well written overall and the main idea is simple and elegant. I am less convinced by the experiments. 
We note that the Papernot “smooth substitute model-black box attack” was briefly outlined in Section 3.2 and do not feel that we have much more space to elaborate given the target paper length.\n\nR3 - Full-precision model with PGD training in Section 3.2\n\nWe can confirm that the full-precision model (A) does indeed perform much better with PGD adversarial training rather than with FGSM, but so does the scaled binarized model (C), which we did not report originally in Table 4. We find no significant difference between models A and C when PGD training is used for models of varying capacity. Please refer to the latest revision for the updated Table 4.\n\nR3 - Can the authors provide additional analysis on why BNNs perform worse than full-precision networks against black-box adversarial attacks? This could be insightful information that this paper could provide if possible.\n\nWe wish to clarify that BNNs are not worse than NNs against black-box attacks, which should be more clear in the updated Table 4 (see C+*), and from Table 5 (C+). When using delayed PGD training, and scaling binarized activations by a single small tunable parameter per layer, similar performance to full-precision with PGD training is achieved on MNIST. For CIFAR-10, the scaled BNN with FGSM adversarial training (C+) achieved 8.6% higher black-box accuracy than full-precision with FGSM training (A+) for the high capacity model, and C+ maintained a small edge over A+ at the lowest capacity tested. A possible explanation for the improvement of the scaled binarized model on CIFAR-10 is that it has limited representation power to cheat by learning image statistics and other non-salient consistencies between training and test sets, an explanation inspired by Jo & Bengio, 2017. We have observed this effect in preliminary MNIST experiments with simple classifiers and a small set of feature-preserving frequency domain transformations applied to either training or test split, but a more detailed explanation is still in progress. \n\nR1 - In figure 1 I'm not sure what \"Scalar\" refers to, and it is not explained in the paper (nor could I find it explained in Papernot et al 2017a).\n\nWe have updated the second paragraph in Section 3 with additional detail about the nature and purpose of this scalar. The original description referred to a “scaling factor” but we have updated the language. This scalar does not come from Papernot et al., rather it is a modification to vanilla BNNs that reduces the range of hidden activations so they align more closely to NNs. This prevents numerical instabilities at the softmax layer, which has implications for gradient based attacks, and leads to improved accuracy on clean inputs as originally reported by Tang et al., 2017.\n\nR1 - Do you adopt the \"Shift based Batch Normalizing Transform\" of Courbariaux et al? If not, why?\n\nWe did not use shift based batch normalization (SBN) as this was viewed as a performance trick for reducing the number of multiplications required by “vanilla” batch normalization (BN) during training. The original Courbariaux paper reported no loss in accuracy when using SBN rather than BN for the same datasets. Traditional aspects of BNN performance were outside the scope of this paper. \n\nAdditional References:\n\nThomas Tanay and Lewis Griffin. A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples, 2016 -- https://arxiv.org/abs/1608.07690\n\nYichuan Tang and Ruslan Salakhutdinov. 
Learning Stochastic Feedforward Neural Networks, 2013 -- http://www.cs.toronto.edu/~tang/papers/sfnn.pdf\n\nTapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for Learning Binary Stochastic Feedforward Neural Networks, 2014 -- https://arxiv.org/abs/1406.2989\n\nJason Jo and Yoshua Bengio. Measuring the tendency of CNNs to learn surface statistical regularities, 2017 -- https://arxiv.org/abs/1711.11561\n", "1) Introduction: As a final project for the course “Applied Machine Learning” at McGill University, Canada, we were tasked with reproducing a paper submitted to the International Conference on Learning Representations (iclr.cc). We chose the paper “Attacking Binarized Neural Networks” because of its application to mass-produced and available technology. Low-precision networks can be deployed on more cost-effective hardware and can therefore be more widely utilized.\n\n2) Analysis of Paper: The paper is well written and provides many novel approaches and findings:\n• The authors of this paper conducted many experiments.\nEach experiment was well designed and focused on one aspect of the model.\n• The paper demonstrated that binary neural networks are often more robust against adversarial attacks.\n• The paper found that binary neural networks were harder to properly train with adversarial examples than the full-precision network.\n\nHowever, there were limitations to this paper:\n• In the paper, the authors stated that they ran training and attacks on both the MNIST and CIFAR-10 datasets; however, in the white-box attack portion, they only showed results on the MNIST dataset. They did not mention the CIFAR-10 dataset.\n• The authors applied the low-precision neural network only to relatively low-dimensional datasets. It would be more convincing if they tested this neural network architecture on some higher dimensional datasets, such as ImageNet.\n\n3) Reproduction Methodology: The reproduction is realized with TensorFlow and CleverHans. We conducted attacks on similarly modeled BNNs and full-precision networks and compared their performance. Both white-box and black-box attacks were reproduced with parameters similar to those in the original work.\nIn addition, we further verified BNNs’ robustness against adversarial attacks by experimenting on the CIFAR-10 dataset. Details of our reproduction can be seen in the paper linked below.\n\n4) Reproduction Results: To replicate the original paper’s white-box attack methods we ran both Fast Gradient Sign Method (FGSM) and Carlini-Wagner L2 attacks on different neural network setups. From these attacks, we found that we were able to replicate the original paper’s findings. Most of our results consistently fell within the ranges that the authors listed. However, for some tests, we received accuracies drastically different from those reported in the original paper. These inconsistencies may be due to us using slightly different model parameters, as they were not well specified in the original paper. They may also be reduced by rerunning our replicated tests until we have more confidence in our values.\nReplication of black-box attacks consisted of running the same substitute model training procedure from Papernot et al. using CleverHans v2.0.0 on the MNIST dataset with FGSM adversarial training.\nSimilar to our white-box replication, some of our black-box results had accuracies similar to those originally reported. 
However, when we attacked the binary neural network with the learned scalar, our results differed substantially from the authors’ experimental results.\nFull-precision networks had a moderate advantage over binary model classes B and C, which was similar to the authors’ result. However, our binary neural network with the learned scalar performed worse than the binary network without the learned scalar. We think that the difference is due to the different number of adversarial instances.\n\n5) Conclusion: We reproduced the main findings of the paper Attacking Binarized Neural Networks. We found that neural networks with low-precision weights and activations, such as binarized neural networks, do indeed improve robustness against some adversarial attacks like FGSM and Carlini-Wagner L2.\n\n6) Nota Bene: A copy of the original paper can be found at https://goo.gl/rQvXig." ]
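The reviews and responses above repeatedly invoke the stochastic binarization of Courbariaux et al., resampled on every forward pass, even at test time. A minimal sketch of that quantizer, assuming the hard-sigmoid form from BinaryConnect; the function name is an illustrative choice:

```python
import numpy as np

def stochastic_sign(w, rng=np.random):
    # Binarize to +1 with probability p = clip((w + 1) / 2, 0, 1), the
    # "hard sigmoid" used by Courbariaux et al., and to -1 otherwise.
    p = np.clip((w + 1.0) / 2.0, 0.0, 1.0)
    return np.where(rng.uniform(size=np.shape(w)) < p, 1.0, -1.0)

w = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
print(stochastic_sign(w))  # e.g. [-1. -1. -1.  1.  1.]; middle entries vary
# Resampling the signs on every forward pass -- including at test time, as
# the review notes -- is what injects noise into an iterative attacker's
# gradient estimates.
```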
[ -1, 7, 7, 6, -1, -1 ]
[ -1, 3, 4, 5, -1, -1 ]
[ "iclr_2018_HkTEFfZRb", "iclr_2018_HkTEFfZRb", "iclr_2018_HkTEFfZRb", "iclr_2018_HkTEFfZRb", "iclr_2018_HkTEFfZRb", "iclr_2018_HkTEFfZRb" ]
iclr_2018_S1jBcueAb
Depthwise Separable Convolutions for Neural Machine Translation
Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency. They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures). Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results. In this work, we study how depthwise separable convolutions can be applied to neural machine translation. We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results. In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation. We also introduce a new super-separable convolution operation that further reduces the number of parameters and computational cost of the models.
accepted-poster-papers
The paper explores depthwise separable convolutions for sequence-to-sequence models with convolutional encoders. R1 and R3 liked the paper and the results. R3 thought the presentation of the convolutional space was nice, but the experiments were hurried. Other reviewers thought the paper as a whole had dense parts that needed cleaning up, but the authors seem to have only done this partially. From the reviewers' comments, I'm giving this a borderline accept. I would have felt much more comfortable with the decision if the authors had incorporated the reviewers' suggestions more thoroughly.
train
[ "rJeAByPlf", "BkCwTl9lG", "rJ9-yZ9lM", "BJkhyncQM", "BJrg6o9mz", "HyW_7BOZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "Pros:\n- new module\n- good performances (not state-of-the-art)\nCons:\n- additional experiments\n\nThe paper is well motivated, and is purely experimental and proposes a new architecture. However, I believe that more experiments should be performed and the explanations could be more concise.\n\nThe section 3 is difficult to read because the notations of the different formula are a little bit heavy. They were nicely summarised on the Figure 1: each of the formula' block could be replaced by a figure, which would make this section faster to read and understand.\n\nI would have enjoyed a parameter comparison in Table 3 as it is claimed this architecture has less parameters and additional experiments would be welcome. As it does not reach the state-of-the-art, \"super separable convolutions\" could be compared on other tasks?\n\nminor:\n\"In contrast, regular convolutional layers break\nthis creed by learning filters that must simultaneously perform the extraction of spatial features and\ntheir merger into channel dimensions; an inefficient and ineffective use of parameters.\" - a verb is missing?\n", "The paper proposes to use depthwise separable convolution layers in a fully convolutional neural machine translation model. The authors also introduce a new \"super-separable\" convolution layer, which further reduces the computational cost of depthwise separable convolutions. Results are presented on the WMT English to German translation task, where the method is shown to perform second-best behind the Transformer model.\n\nThe paper's greatest strength is in my opinion the quality of its exposition of the proposed method. The relationship between spatial convolutions, pointwise convolutions, depthwise convolutions, depthwise separable convolutions, grouped convolutions, and super-separable convolutions is explained very clearly, and the authors properly introduce each model component.\n\nPerhaps as a consequence of this, the experimental section feels squeezed in comparison. Quantitative results are presented in two fairly dense tables (especially Table 2) which, although parsable after reading the paper carefully, could benefit from a little bit more information on how they should be read. The conclusions that are drawn in the text are stated without citing metrics or architectural configurations, leaving it up to the reader to connect the conclusions to the table contents.\n\nOverall, I feel that the results presented make a compelling case both for the effectiveness of depthwise separable convolutions and larger convolution windows, as well as the overall performance achievable by such an architecture. I think the paper constitutes a good contribution, and adjustments to the experimental section could make it a great contribution.", "This paper presents the SliceNet architecture, an sequence-to-sequence model based on super-dilated convolutions, which allow to reduce the computational cost of the model compared to standard convolution. The proposed model is then evaluated on machine translation and yields competitive performance compared to state-of-the-art approaches.\n\nIn terms of clarity, the paper is overall easy to follow, however I am a bit confused by Section 2 about what is related work and what is a novel contribution, although the section is called “Our Contribution”. For instance, it seems that the separable convolution presented in Section 2.1 were introduced by (Chollet, 2016) and are not part of the contribution of this paper. 
The authors should thus clarify the contributions of the paper.\n\nIn terms of significance, the SliceNet architecture is interesting and is a solid contribution for reducing the computational cost of sequence-to-sequence models. The experiments on NMT are convincing and give interesting insights, although I would like to see some pointers about why in Table 3 the Transformer approach (Vaswani et al. 2017) outperforms SliceNet.\n\nI wonder if the proposed approach could be applied to other sequence-to-sequence tasks in NLP or even in speech recognition? \n\nMinor comment: \n* The equations are not easy to follow; they should be numbered. The three equations just before Section 2.2 should also be adapted as they seem redundant with Table 1.\n", "We are very grateful to the reviewer for helping us improve the paper.\n\nThe reviewer wrote: \"Section 3 is difficult to read because the notation of the different formulas is a little heavy. They are nicely summarised in Figure 1: each formula block could be replaced by a figure, which would make this section faster to read and understand.\" We took this very seriously and we have re-arranged the whole presentation of equations in Section 3. In the new revision, every set of equations comes together with the corresponding figure, and the figures were slightly re-drawn to match the equations more closely. We hope that this addresses the main concern about presentation.\n\nAs for Table 3, we found it hard to get the parameter count for every one of the earlier models presented in the table. But we asked the authors of the other papers and we will try to add one more revision with parameter counts.\n\nAs for not reaching state-of-the-art, we believe that it is due to the lack of self-attention in the decoder. Please see the comment to Review 1 above where we discuss this.", "We are very grateful for the review.\n\nAs for the suggestion to improve the presentation and equations, we have uploaded a new revision with diagrams put together with equations in a new way (inspired by another review). We hope this makes it easier to understand.\n\nAs for this point: \"I would like to see some pointers about why in Table 3 the Transformer approach (Vaswani et al. 2017) outperforms SliceNet.\" -- let us explain how Transformer has crucial architectural parts missing from SliceNet. One key part of Transformer is self-attention in the decoder: an attention layer that allows the decoder to attend to previously generated (decoded) words. This is a main innovation of the Transformer architecture and it is missing from SliceNet (as we started working on SliceNet before the Transformer paper). We only have the encoder-decoder attention known from previous sequence-to-sequence models (i.e., the decoder can attend to the encoder, but not to previously decoded words). We believe that this difference is responsible for the difference in results: as far as we know, no architecture without self-attention in the decoder has shown better results than SliceNet. It should also be possible to combine decoder self-attention with SliceNet -- some results (with non-separable convolutions) are already coming up in SNAIL for image generation (https://arxiv.org/abs/1712.09763). We believe that the techniques we present in this paper can be used to extend SNAIL to get additional improvements, and it should also work for tasks like image generation, parsing, summarization and others.", "I think there is a kind of consensus among the reviews of this paper. 
I would like to kindly encourage the authors to modify Sections 2 & 3 in order to incorporate the changes we requested, and to give some additional numerical results with a few more comments. In that case, if the modifications are relevant, I would be happy to raise my rating." ]
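The reviews above ask for parameter counts to back the efficiency claims. A back-of-the-envelope sketch of the counts for regular, depthwise separable, and (under our reading of the construction) super-separable convolutions; the window size and channel count in the example are illustrative, not the paper's configurations:

```python
def conv_params(k, c_in, c_out):
    # Regular 1-D convolution: each of the c_out filters spans all
    # c_in channels over a window of k positions.
    return k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise step (one k-tap filter per input channel) followed by a
    # pointwise (1x1) convolution that mixes channels.
    return k * c_in + c_in * c_out

def super_separable_params(k, c, g):
    # Our reading of the super-separable convolution: split the c
    # channels into g groups and run a separable convolution per group,
    # giving k*c + c**2/g parameters in total.
    return g * separable_params(k, c // g, c // g)

k, c = 15, 1024  # a wide window, in the spirit of the paper's long filters
print(conv_params(k, c, c))              # 15,728,640
print(separable_params(k, c, c))         #  1,063,936
print(super_separable_params(k, c, 16))  #     80,896
```

With these shapes the regular convolution needs roughly 15x more parameters than the separable one, which is the kind of saving the reviews allude to; the exact ratio depends on the layer shapes.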
[ 5, 7, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_S1jBcueAb", "iclr_2018_S1jBcueAb", "iclr_2018_S1jBcueAb", "rJeAByPlf", "rJ9-yZ9lM", "rJeAByPlf" ]
iclr_2018_rywHCPkAW
Noisy Networks For Exploration
We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and epsilon-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.
accepted-poster-papers
The paper proposes to add noise to the weights of a policy network during learning in deep-RL settings and finds that this results in better performance for DQN, A3C and other algorithms that otherwise use different exploration strategies. Unfortunately, the paper does not do a thorough job of exploring the reasons and doesn't offer a comparison to other methods that had been out on arXiv for several months before the submission, in spite of requests from reviewers and anonymous commenters. Otherwise I might have supported recommending the paper for a talk.
train
[ "Hyf0aUVeM", "rJ6Z7prxf", "H14gEaFxG", "SJDBQS5mz", "BJZhEy5Xf", "B1o89OFXz", "B1e_W8uMM", "H1NaqYFmz", "r14ytLuMM", "ry7jv6OQf", "HJlU1T9GM", "S1cI98dGM", "S1pPiLdMG", "BJRR3paAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "In this paper, a new heuristic is introduced with the purpose of controlling the exploration in deep reinforcement learning. \n\nThe proposed approach, NoisyNet, seems very simple and smart: a noise of zero mean and unknown variance is added to each weight of the deep network. The matrices of unknown variances are considered as parameters and are learned with a standard gradient descent. The strengths of the proposed approach are the following:\n1 NoisyNet is generic: it is applied to A3C, DQN and Dueling agents. \n2 NoisyNet reduces the number of hyperparameters. NoisyNet does not need hyperparameters (only the kind of the noise distribution has to be defined), and replacing the usual exploration heuristics by NoisyNet, a hyperparameter is suppressed (for instance \\epsilon in the case of epsilon-greedy exploration).\n3 NoisyNet exhibits impressive experimental results in comparison to the usual exploration heuristics for to A3C, DQN and Dueling agents.\n\nThe weakness of the proposed approach is the lack of explanation and investigation (experimental or theoretical) of why does Noisy work so well. At the end of the paper a single experiment investigates the behavior of weights of noise during the learning. Unfortunately this experiment seems to be done in a hurry. Indeed, the confidence intervals are not plotted, and probably no conclusion can be reached because the curves are averaged only across three seeds! It’s disappointing. As expected for an exploration heuristic, it seems that the noise weights of the last layer (slowly) tend to zero. However for some games, the weights of the penultimate layer seem to increase. Is it due to NoisyNet or to the lack of seeds? \n\nIn the same vein, in section 3, two kinds of noise are proposed: independent or factorized Gaussian noise. The factorized Gaussian noise, which reduces the number of parameters, is associated with DQN and Dueling agents, while the independent noise is associated with A3C agent. Why? \n\nOverall the proposed approach is interesting and has strengths, but the paper has weaknesses. I am somewhat divided for acceptance. \n", "This paper introdues NoisyNets, that are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C. They obtain a substantial performance improvement over the baseline algorithms, without explaining clearly why.\n\nThe general concept is nice, the paper is well written and the experiments are convincing, so to me this paper should be accepted, despite a weak analysis.\n\nBelow are my comments for the authors.\n\n---------------------------------\nGeneral, conceptual comments:\n\nThe second paragraph of the intro is rather nice, but it might be updated with recent work about exploration in RL.\nNote that more than 30 papers are submitted to ICLR 2018 mentionning this topic, and many things have happened since this paper was\nposted on arxiv (see the \"official comments\" too).\n\np2: \"our NoisyNet approach requires only one extra parameter per weight\" Parameters in a NN are mostly weights and biases, so from this sentence\none may understand that you close-to-double the number of parameters, which is not so few! 
If this is not what you mean, you should reformulate...\n\np2: \"Though these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.\"\nTwo ideas seem to be collapsed here: the idea of diminishing noise over an experiment, exploring first and exploiting later, and the idea of\nadapting the amount of noise to a specific problem. It should be made clearer whether NoisyNet can address both issues and whether other\nalgorithms do so too...\n\nIn particular, an algorithm may adapt noise along an experiment or from an experiment to the next.\nFrom Fig. 3, one can see that having the same initial noise in all environments is not a good idea, so the second mechanism may help a lot.\n\nBTW, the short section in Appendix B about initialization of noisy networks should be moved into the main text.\n\np4: the presentation of NoisyNets is not so easy to follow and could be clarified in several respects:\n- a picture could be given to better explain the structure of parameters, particularly in the case of factorised (factorized, factored?) Gaussian noise.\n- I would start with the paragraph \"Considering a linear layer [...] below)\" and only after this I would introduce \\theta and \\xi as a more synthetic notation.\nLater in the paper, you then have to state \"...are now noted \\xi\" several times, which I found rather clumsy.\n\np5: Why do you use option (b) for DQN and Dueling and option (a) for A3C? The reason why (if any) should be made clear from the clearer presentation required above.\n\nBy the way, a wild question: if you wanted to use NoisyNets in an actor-critic architecture like DDPG, would you put noise both in the actor and the critic?\n\nThe paragraph above Fig. 3 raises important questions which do not get a satisfactory answer.\nWhy is it that, in deterministic environments, the network does not converge to a deterministic policy, which should be able to perform better?\nWhy is it that the adequate level of noise changes depending on the environment? By the way, are we sure that the curves of Fig. 3 correspond to some progress\nin noise tuning (that is, is the level of noise really \"better\" through time with these curves, or do they show something poorly correlated with the true reasons of success?)?\n\nFinally, I would be glad to see the effect of your technique on algorithms like TRPO and PPO which require a stochastic policy for exploration, and where I believe that the role of the KL divergence bound is mostly to prevent the level of stochasticity from collapsing too quickly.\n\n-----------------------------------\nLocal comments:\n\nThe first sentence may make the reader think you only know about 4-5 old works about exploration.\n\nPp. 1-2: \"the approach differs ... from variational inference. [...] 
It also differs variational inference...\"\nIf you mean it differs from variational inference in two ways, the paragraph should be reorganized.\n\np2: \"At a high level our algorithm induces a randomised network for exploration, with care exploration\nvia randomised value functions can be provably-efficient with suitable linear basis (Osband et al., 2014)\"\n=> I don't understand this sentence at all.\n\nAt the top of p3, you may update your list with PPO and ACKTR, which are now \"classical\" baselines too.\n\nAppendices A1 and A2 are largely redundant with the main text (some sentences and equations are just copy-pasted); this should be improved.\nIdeally, nothing would need to be relegated to the Appendix.\n\n---------------------------------------\nTypos, language issues:\n\np2\nthe idea ... the optimization process have been => has\n\np2\nThough these methods often rely on a non-trainable noise of vanishing size as opposed to NoisyNet which tunes the parameter of noise by gradient descent.\n=> you should make a sentence...\n\np3\nthe the double-DQN\n\nseveral times, an equation is cut over two lines, a line finishing with \"=\", which is inelegant\n\nYou should deal better with appendices: Every \"Sec. Ax/By/Cz\" should be replaced by \"Appendix Ax/By/Cz\".\nBesides, the big table and the list of performance figures should themselves be put in two additional appendices\nand you should refer to them as Appendix D or E rather than \"the Appendix\".\n\n\n\n\n", "A new exploration method for deep RL is presented, based on the idea of injecting noise into the deep networks’ weights. The noise may take various forms (either uncorrelated or factored) and its magnitude is trained by gradient descent along with the other parameters. It is shown how to implement this idea both in DQN (and its dueling variant) and A3C, with experiments on Atari games showing a significant improvement on average compared to these baseline algorithms.\n\nThis definitely looks like a worthy direction of research, and experiments are convincing enough to show that the proposed algorithms indeed improve on their baseline version. The specific proposed algorithm is close in spirit to the one from “Parameter space noise for exploration”, but there are significant differences. It is also interesting to see (Section 4.1) that the noise evolves in non-obvious ways across different games.\n\nI have two main concerns about this submission. The first one is the absence of a comparison to the method from “Parameter space noise for exploration”, which shares similar key ideas (and was published in early June, so there was enough time to add this comparison by the ICLR deadline). A comparison to the paper(s) by Osband et al. (2016, 2017) would have also been worth adding. My second concern is that I find the title and overall discussion in the paper potentially misleading, by focusing only on the “exploration” part of the proposed algorithm(s). Although the noise injected in the parameters is indeed responsible for the exploration behavior of the agent, it may also have an important effect on the optimization process: in both DQN and A3C it modifies the cost function being optimized, both through the “target” values (respectively Q_hat and advantage) and the parameters of the policy (respectively Q and pi). Since there is no attempt to disentangle these exploration and optimization effects, it is unclear if one is more important than the other to explain the success of the approach. 
It also sheds doubt on the interpretation that the agent somehow learns some kind of optimal exploration behavior through gradient descent (something I believe is far from obvious).\n\nEstimating the impact of a paper on future research is an important factor in evaluating it. Here, I find myself in the awkward (and unusual to me) situation where I know the proposed approach has been shown to bring a meaningful improvement, more precisely in Rainbow (“Rainbow: Combining Improvements in Deep Reinforcement Learning”). I am unsure whether I should take it into account in this review, but in doubt I am choosing to, which is why I am advocating for acceptance in spite of the above-mentioned concerns.\n\nA few small remarks / questions / typos:\n- In eq. 3 A(...) is missing the action a as input\n- Just below: “the the”\n- Last sentence of p. 3 can be misleading because the gradient is not back-propagated through all paths in the defined cost\n- “In our experiments we used f(x) = sgn(x)√|x|”: this makes sense to me for eq. 9 but why not use f(x) = x in eq. 10?\n- Why use factored noise in DQN and independent noise in A3C? This is presented like an arbitrary choice here.\n- What is the justification for using epsilon’ instead of epsilon in eq. 15? My interpretation of double DQN is that we want to evaluate (with the target network) the action chosen by the Q network, which here is perturbed with epsilon (NB: eq. 15 should have b in the argmax, not b*)\n- Section 4 should say explicitly that results are over 200M frames\n- Assuming the noise is sampled similarly during evaluation (= as in training), please mention it clearly.\n- In paragraph below eq. 18: “superior performance compare to their corresponding baselines”: compared\n- There is a Section 4.1 but no 4.2\n- Appendix has a lot of redundant material with the main text, for instance it seems to me that A.1 is useless.\n- In appendix B: “σ_{i,j} is simply set to 0.017 for all parameters” => where does this magic value come from?\n- List x seems useless in C.1 and C.2\n- C.1 and C.2 should be combined in a single algorithm with a simple “if dueling” on l. 24\n- In C.3: (1) missing pi subscript for zeta in the “Output:” line, (2) it is not clear what the zeta’ parameters are for, in particular should they be used in l. 12 and 22?\n- The paper “Dropout as a Bayesian approximation” seems worth at least adding to the list of related work in the introduction.", "We completely agree with the reviewer that the role of noise in the critic and whether it is useful or not requires further investigation. We will include experiments to investigate this in a future version. \n\nRegarding the reviewer comment \"exploration in the actor is precisely meant for keeping exploring the actions with seemingly small return\", it is true that due to the noise in the actor network, actions with seemingly small return may be explored. But in practice this might not be enough. The problem is that if there is some error in the estimation of the value function then the probability of seemingly \"bad\" actions can go to zero very fast due to the update rule of A3C and the exponential nature of the softmax operator in the actor network. In that case adding some small noise to the actor network would not change those exponentially small probabilities that much (at this point the agent has already converged to a wrong \"almost\" deterministic policy). 
Using a stochastic baseline may help to alleviate this problem since by adding noise to the baseline the algorithm does not deterministically decrease the probabilities of seemingly bad actions. \n\nRegarding Appendix B we agree with the reviewer and have changed the text accordingly.", "\"Note that in the standard A3C if there is some error in the estimation of baseline value function then the algorithm may stop exploring the actions with seemingly small return prematurely. \"\nI'm afraid this is wrong: exploration in the actor is precisely meant for keeping exploring the actions with seemingly small return.\nHonestly, this idea of stochasticity in the critic is interesting, but it would deserve a thorough mathematical analysis to figure out what it really does (and an empirical comparison with not using it).\n\nAbout the three added lines in Appendix B, they don't bring much: it would be more useful to bring the detailed explanation of the calculations close to the figure.\n\nAnd there is a new typo: \"whose weights our perturbed\" => are perturbed", "Thanks for the response.\n\nRegarding #1: reporting results and comparisons after 40 M steps on 20 games, as is done in the DQN w/ param noise paper, is a non-standard practice (e.g., in the ES paper, used as the baseline of the DQN w/ param noise algorithm, they use the standard setting of the Nature paper). So we don't think it is the right course of action to report our results in a non-standard setting and we refrain from doing it. \n\nEven if we had considered a comparison with DQN w/ param noise after 40 M frames, this would not have been a fair comparison. This is due to the fact that the DQN w/ param noise algorithm uses a different optimizer (Adam) and a different set of hyperparameters (e.g., step size = 1e-4) than the standard DQN, whereas we use the standard Nature paper DQN optimizer (RMSProp) and the corresponding hyperparameters (step size = 2.5e-4). So by just comparing the existing results after 40 M frames it would have been difficult to know whether any potential gain/loss is due to the strength of the exploration strategy or due to the different choice of hyperparameters and the optimiser. We believe the right course of action would be that the authors of DQN w/ param noise report their results in the standard setting using standard hyperparameters and not the other way around. So a fair comparison between their work and the rest of the literature would be straightforward. \n\nRegarding the changes in DQN and Dueling we will include them in the Log. We also confirm that these changes are fixes to correct mistakes in the original submission and match our implementation.\n", "We would like to thank the anonymous reviewers for their helpful and constructive comments. We provide individual responses to each reviewer's comments. Here we report the list of main changes which we have added to the new revision.\n\n1- A discussion on the optimisation aspects of NoisyNet (Section 5, Paragraph 1). \n2- Further clarifications on why factorised noise is used in some agents as opposed to independent noise in the case of A3C (Section 3, Paragraph 3).\n3- Reporting the learning curves and the scores for NoisyNet-A3C with factorised noise, showing that a similar performance to the case of independent noise can be achieved with significantly fewer noise variables (Appendix D).\n4- Adding error bars to the learning curves of Fig. 3 and error bounds to the scores of Table 3.\n5- Adding a graphical representation of the noisy linear layer (Appendix B). 
\n6- Correcting the inconsistencies between the description of the algorithm in the original submission and our implementation (Appendix C, Algo. 1, lines 13, 14 and 16, and Eq. 16)\n", "We thank the reviewer for the response.\n\nRegarding the use of noise in the critic (i.e., a stochastic baseline), we think it is useful since it captures the uncertainty over the value function. Note that in the standard A3C if there is some error in the estimation of the baseline value function then the algorithm may stop exploring the actions with seemingly small return prematurely. A stochastic baseline enables A3C-NoisyNet to do a better job in exploring those underappreciated actions as it does not always decrease their probabilities.\n\nWe agree with the reviewer regarding the lack of description in Appendix B. In the new revision we have added a new paragraph describing the block diagram in Appendix B.\n\n", "Here we address the main concerns of the reviewer:\n\n1- Concerning the absence of empirical comparison to the method “Parameter space noise for exploration”, we argue that this work is a concurrent submission to ICLR. So we do not think it is necessary to compare with it at this stage. We must emphasize that a fair comparison between the two methods cannot be done by directly using the reported results in “Parameter space noise for exploration” since in this work the authors report performance for a selection of Atari games trained for 40 million frames, whereas we use the standard (Nature paper) setting of 57 games and 200 million frames. So to have a fair comparison we would need to implement and run their algorithm in the standard setting. \n\n2- Concerning the focus on the exploration aspect, the reviewer is right when saying that it is difficult to disentangle the exploration effect from the optimization in the final performance. On the other hand, we argue that Noisy Networks is the only exploration technique used by our algorithm. We emphasize the exploration aspect because having weights with greater uncertainty introduces more variability into the decisions made by the policy, which has potential for exploratory actions. We have added a discussion in the updated version of the paper noting that improvements might also come from better optimization. Finally, we need to emphasize that we do not claim that noisy networks provide an optimal strategy for exploration. Indeed, noisy networks do not take into account the uncertainty of the action-value function of the future states, which is required for an optimal tradeoff between exploration and exploitation (see Azar et al., 2017). Thus, it cannot be an optimal strategy. However, it can produce an exploration which is state-dependent and automatically tunes the level of exploration for each problem and can be used with any Deep RL agent. This is a step towards a general exploration strategy for Deep RL.\n\nThe reviewer raises an interesting point of adding a graphical representation of the noisy linear layer. We included that in the revision as it could help with implementing the method.\n\nFinally, we agree on the minor comments/typos and we have already corrected them in this updated version. For a discussion on the choice of factorised noise, please see the answer to AnonReviewer1.", "Thanks for your response and for editing the paper.\n\nAbout point 3 above, in the case of an actor-critic architecture, the relationship between exploration and noise in the actor is clear. 
By contrast, the relationship between exploration and noise in the critic is far less obvious. It is very unclear to me why having a noisy value function should help, hence my question. In a later paper (this is too late for this one), I would be glad to see what you get if you put noise only into the critic.\n\nMy general feeling is that the paper could have been improved more in terms of the split and redundancy between the main text and appendices A and B (in Appendix B, the figure alone without a word of explanation is a pity), but some useful improvements have been made.\n\nA new typo: p2, network.Randomised => missing space", "Thank you for the response and updated manuscript, this is appreciated.\n\nRegarding #1, I believe that current research (in ML in general and deep RL in particular) has reached a pace where one can't just dismiss arXiv papers because they haven't been accepted yet at a conference / journal. Of course it has to be a judgement call taking into account the other paper's visibility, quality, similarity to the proposed approach, and how easy/hard it is to make such a comparison. But in that case my personal opinion is that such a comparison should have been made here. The easiest one would have probably been to compare to your own performance after 40M steps on the same subset of games, though a better one would have been to re-run their code, which is open-sourced (since the end of July if I read their commit history correctly).\n\nNB: I'm also disappointed that they didn't compare to your approach in their own ICLR submission :(\n\nIn your revised version you changed the DQN & Dueling algorithms in two ways:\n- The noise is now the same for all transitions in a batch, while originally it was sampled differently for each transition\n- There is a new noise parameter \\xi'' for the action selection network, which wasn't there before (it appears as epsilon'' in eq. 16 which btw doesn't seem to be properly defined)\nCould you please confirm that these changes are fixes to correct mistakes in the original submission and match your implementation? (I don't see them mentioned in your changelog)\n\nMinor: Conclusion, 1st paragraph, last sentence => \"introduceS\"", "1- Concerning the diminishing noise over an experiment and whether NoisyNet addresses this issue, we argue that NoisyNet adapts the noise automatically during learning, which is not the case with the prior methods based on hand-tuned scheduling schemes. As shown in Section 4.1 (Fig. 3), the mechanism by which NoisyNet learns to balance exploration and exploitation seems to be problem-dependent, and does not always follow the same pattern, such as exploring first and exploiting later. We think this is a useful feature of NoisyNet since it is quite difficult, if not impossible, to know to what extent and when exploration is required in each problem. So it is sensible to let the algorithm learn on its own how to handle the exploration-exploitation tradeoff.\n\n2- Concerning the choice of factorised noise, the main reason is to boost the algorithm's speed in the case of DQN. In the case of A3C, since it is a distributed algorithm and speed is not a major concern, we don't use the factorization trick. However, we have done experiments which show that we can achieve a similar performance with A3C using factorised noise. We included this result in the revised version.\n\n3- Concerning the application of NoisyNet in DDPG. We think the adaptation should be straightforward. 
One can put noise on the actor and the critic as we have done for A3C, which is also an actor-critic method.\n\n4- Concerning the convergence to deterministic weights, we are not entirely sure why this does not happen in the penultimate layer. One hypothesis may be that although there exists a deterministic solution for the optimisation problem of Eq. 2, this solution is not necessarily unique, and there may exist a non-deterministic optimum to which NoisyNet converges. In Fig. 3 we wanted to show that even in complex problems such as Atari games we observe the reduction of the noise in the last layer and problem-specific evolution of the noise parameters across the board. We have provided further clarification in the revised version and also addressed the remainder of the minor comments made by the reviewer. ", "1- Concerning the number of seeds, we ran all the experiments for three seeds. Note that these experiments are very computationally intensive and this is why the number of seeds is low (all papers with Atari experiments over the 57 games tend to do one or three seeds). Nonetheless, we have provided the error bars w.r.t. 3 seeds in the revised version for Fig. 3 and for Table 3 (max score for the 57 games). The error bars were already present for the performance on the 57 games in the appendix (Figs. 4, 5 and 6). It is not common to compute error bars for the median human normalized score as this score is already averaged over all the 57 Atari games. \n\n2- Concerning the question on why factorised noise is used in one case (DQN) and not in the other case (A3C). As we mentioned in our response to Reviewer 1, the main reason is to boost the algorithm speed in the case of DQN, in which generating the independent noise for each weight is costly. In the case of A3C, since it is a distributed algorithm and speed is not a major concern, we don’t use the factorisation trick. However, we have done experiments which show that we can achieve a similar performance with A3C using factorised noise, which we are including in the revised version.\n", "Very interesting paper and results, thanks for the paper! I have a few questions:\n\nEarlier this year, before \"Noisy Networks For Exploration\", a paper with a very similar approach, \"Parameter space noise for exploration\", was published. It has already reported a number of improvements compared to the baseline action-space-noise implementations of different variants of DQN, as well as in the continuous domain. So it would be very nice to see in the paper a comparison of \"Noisy Networks For Exploration\" not only against the baseline but also against the parameter space noise approach, to understand whether noisy networks provide any benefits - better exploration, a larger maximum reward achieved - or whether they show results comparable to the parameter space approach but at the cost of additional computational complexity.\n\nAlso, it would be nice if, similar to OpenAI, you could release your \"noisy network\" implementation to help with independent reproduction of the results described in the paper." ]
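The exchanges in this record repeatedly refer to the factorised noise scheme, the function f(x) = sgn(x)√|x|, and the noisy linear layer diagram added in Appendix B. As a companion for readers, here is a minimal PyTorch sketch of a factorised noisy linear layer. It is reconstructed from the descriptions in this thread, not taken from the authors' code; the initialisation constant sigma0 and all names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorisedNoisyLinear(nn.Module):
    """Linear layer with factorised noise: w = mu_w + sigma_w * f(eps_out) f(eps_in)^T."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        bound = in_features ** -0.5
        # Learnable means and per-weight noise scales.
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.empty(out_features).uniform_(-bound, bound))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma0 * bound))

    @staticmethod
    def _f(x):
        # f(x) = sgn(x) * sqrt(|x|), as quoted in the review above.
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        # Factorised noise needs O(p + q) samples instead of O(p * q) independent
        # ones, which is the speed argument the authors make for DQN.
        eps_in = self._f(torch.randn(self.in_features, device=x.device))
        eps_out = self._f(torch.randn(self.out_features, device=x.device))
        weight = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        bias = self.mu_b + self.sigma_b * eps_out
        return F.linear(x, weight, bias)
```

Since the noise scales are ordinary parameters, gradient descent can tune the amount of exploration per weight, which is the adaptive behaviour the rebuttal emphasizes.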
[ 5, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rywHCPkAW", "iclr_2018_rywHCPkAW", "iclr_2018_rywHCPkAW", "BJZhEy5Xf", "H1NaqYFmz", "HJlU1T9GM", "iclr_2018_rywHCPkAW", "ry7jv6OQf", "H14gEaFxG", "S1cI98dGM", "r14ytLuMM", "rJ6Z7prxf", "Hyf0aUVeM", "iclr_2018_rywHCPkAW" ]
iclr_2018_Hkc-TeZ0W
A Hierarchical Model for Device Placement
We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices. Our method learns to assign graph operations to groups and to allocate those groups to available devices. The grouping and device allocations are learned jointly. The proposed method is trained with policy gradient and requires no human intervention. Experiments with widely-used computer vision and natural language models show that our algorithm can find optimized, non-trivial placements for TensorFlow computational graphs with over 80,000 operations. In addition, our approach outperforms placements by human experts as well as a previous state-of-the-art placement method based on deep reinforcement learning. Our method achieves runtime reductions of up to 60.6% per training step when applied to models such as Neural Machine Translation.
accepted-poster-papers
The authors provide an alternative method to [1] for placement of ops in blocks. The results are shown to be an improvement over prior RL based placement in [1] and superior to *some* (maybe not the best) earlier methods for operations placements. The paper seems to have benefited strongly from reviewer feedback and seems like a reasonable contribution. We hope that the implementation may be made available to the community. [1] Mirhoseini A, Pham H, Le Q V, et al. Device Placement Optimization with Reinforcement Learning[J]. arXiv preprint arXiv:1706.04972, 2017.
train
[ "HytWY1DVG", "ryazKvH4M", "BJuGT9zez", "Sk-qjGYlz", "rkSREOYgM", "r1yts_37f", "HJMiRCdGz", "SypbR0OGG", "rJZNkyYzz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thanks for your response!\n\nAlthough we cited Scotch papers from 2009, the software we used was developed in 2012: http://www.labri.fr/perso/pelegrin/scotch/. Thanks to your suggestion, we have found a more recent graph partitioning package called KaHIP, which has publications in 2017 as well as ongoing software development. We didn’t originally use it as a baseline, because unlike Scotch, it doesn’t directly address the problem of mapping a graph of operations to a graph of hardware devices. Earlier work (e.g., [1]) shows that load balancing is not always the best solution for optimizing placements. For completeness, we are now converting our graphs into a compatible format and running experiments using KaHIP and will add these results to the final version of the paper. We are also happy to compare against new approaches suggested by reviewers.\n\nRegarding SystemML’s other optimizations, such as op fusion, compression, etc. (we also looked at its citations and other relevant papers), we want to emphasize that our paper focuses on a specific problem: partitioning operations in a computational graph so as to minimize runtime. While there are other approaches to optimizing graphs, they are complementary and/or orthogonal to our approach. We only mentioned memory optimization because we plan to apply our method to this problem in the future. Nevertheless, we looked further into references of SystemML and found a number of papers proposing cost model optimizations of computational graphs. Within those references (and references to those references), we did not find any approach that directly addresses our problem. Among those, we found [2] most relevant, which is a method that optimizes resource allocation for programs. [2] assumes a compiler that breaks the program into blocks. Given the blocks, the method searches over a grid of possible resources to be allocated to each block and finds a configuration that minimizes the cost. This method and related approaches cannot be directly applied to our problem because such a partitioner does not exist in our setting. In fact, a contribution of our work is to learn to partition a complex deep net with tens of thousands of operations into blocks and then place them to minimize the runtime. Our work shows the importance of jointly learning this partitioning together with our resource allocation over prior work such as [1] that uses a fixed set of blocks. If we were to directly apply [2] to our problem without partitioning, this would involve searching over a space of 9^80000, which is prohibitively expensive.\n\n[1] Mirhoseini A, Pham H, Le Q V, et al. Device Placement Optimization with Reinforcement Learning, ICML'17. \n[2] Botong Huang, Matthias Boehm, Yuanyuan Tian, Berthold Reinwald, Shirish Tatikonda, and Frederick R. Reiss. 2015. Resource Elasticity for Large-Scale Machine Learning, SIGMOD '15. ", "Scotch (circa 2009) seems dated. I would urge the authors to compare against more recent efforts. In case it wasn't clear from the initial review, there are other related efforts that the authors may want to compare against if they want to make the paper stronger (check the SystemML ref and/or other SystemML papers for such refs). \n\nAs an aside, while SystemML does worry about memory (it ensures that an operators' arguments fit within device memory before placing them), it also does a number of other things such as plan enumeration, cost-based optimization, code generation, operator fusion, compression etc. etc. 
Describing it as a \"cost-based model\" that forms a \"baseline for memory optimizations\" does not do it justice.", "The paper seems clear enough and original enough. The idea of jointly forming groups of operations to colocate and figure out placement on devices seems to hold merit. Where the paper falls short is motivating the problem setting. Traditionally, for determining optimal execution plans, one may resort to cost-based optimization (e.g., database management systems). This paper's introduction provides precisely 1 statement to suggest that may not work for deep learning. Here's the relevant phrase: \"the cost function is typically non-stationary due to the interactions between multiple devices\". Unfortunately, this statement raises more questions than it answers. Why are the cost functions non-stationary? What exactly makes them dynamic? Are we talking about a multi-tenancy setting where multiple processes execute on the same device? Unlikely, because GPUs are involved. Without a proper motivation, it's difficult to appreciate the methods devised.\n\nPros:\n- Jointly optimizing the forming of groups and their placement seems to have merit\n- Experiments show improvements over placement by human \"experts\"\n- Targets an important problem\n\nCons:\n- Related work seems inadequately referenced. There exist other linear/tensor algebra engines/systems that perform such optimization including placing operations on devices in a distributed setting. This paper should at least cite those papers and qualitatively compare against those approaches. Here's one reference (others should be easy to find): \"SystemML's Optimizer: Plan Generation for Large-Scale Machine Learning Programs\" by Boehm et al., IEEE Data Engineering Bulletin, 2014.\n- The methods are not well motivated. There are many approaches to devising optimal execution plans, e.g., rule-based, cost-based, learning-based. In particular, what makes cost-based optimization inapplicable? Also, please provide some reasoning behind your hypothesis which seems to be that while costs may be dynamic, optimally forming groups and placing them is learnable.\n- The template seems off. I don't see the usual two lines under the title (\"Anonymous authors\", \"Paper under double-blind review\").\n- The title seems misleading. \".... Device Placement\" seems to suggest that one is placing devices when in fact, the operators are being placed.", "In a previous work [1], an auto-placement (better model partitioning across multiple GPUs) method was proposed to accelerate a TensorFlow model’s runtime. However, this method requires a rule-based co-locating step; to resolve this problem, the authors of this paper proposed a fully connected network (FCN) to replace the co-location step. In particular, hand-crafted features are fed to the FCN and the output is the predicted group id of each operation. Then all the embeddings in each group are averaged to serve as the input of a seq2seq encoder. \n\nOverall, this work is quite interesting. However, it also has several limitations, as explained below.\n\nFirst, the computational cost of the proposed method seems very high. It may take more than one day on 320-640 GPUs for training (I did not find enough details in this paper, but the training complexity will be no less than that in [1]). 
This makes it very hard to reproduce the experimental results (in order to verify it), and its practical value becomes quite restrictive (very few organizations can afford such a cost).\n\nSecond, as the author mentioned, it’s hard to compare the experimental results in this paper wit those in [1] because different hardware devices and software versions were used. However, this is not a very sound excuse. I would encourage the authors to implement colocRL [1] on their own hardware and software systems, and make direct comparison. Otherwise, it is very hard to tell whether there is improvement, and how significant the improvement is. In addition, it would be better to have some analysis on the end-to-end runtime efficiency and the effectiveness of the placements.\n\n [1] Mirhoseini A, Pham H, Le Q V, et al. Device Placement Optimization with Reinforcement Learning[J]. arXiv preprint arXiv:1706.04972, 2017. https://arxiv.org/pdf/1706.04972.pdf \n", "This paper proposes a device placement algorithm to place operations of tensorflow on devices. \n\nPros:\n\n1. It is a novel approach which trains the placement end to end.\n2. The experiments are solid to demonstrate this method works very well.\n3. The writing is easy to follow.\n4. This would be a very useful tool for the community if open sourced.\n\nCons:\n\n1. It is not very clear in the paper whether the training happens for each model yielding separate agents, or a shared agent is trained and used for all kinds of models. The latter would be more exciting. The adjacency matrix varies size for different graphs, so I guess a separate agent is trained for each graph? However, if the agent is not shared, why not just use integer to represent each operation in the graph, since overfitting would be more desirable in this case.\n2. Averaging the embedding is hard to understand especially for the output sizes and number of outputs.\n3. It is not clear how the adjacency information is used.\n", "Thanks to helpful feedback from all three reviewers, we were able to significantly improve the paper!\n\nTwo reviewers have given us low scores, and we believe that the main reason for the lack of enthusiasm is that the reviewers don’t believe that device placement is worthwhile. We want to emphasize that our method saves a lot of time. For example, it takes us around 12.5 GPU-hours to save 265 GPU-hours on training NMT for one epoch on WMT’14 En->Fr. Our method finds optimized, non-trivial placements for computational graphs with over *80,000 operations*. Not only does our approach achieve strong results, but unlike previous methods which require human experts to feed in properties of the hardware or manually cluster operations, our method is end-to-end and scales to much larger computational graphs and novel hardware devices.\n\nBased on reviewer suggestions, we made several changes to the paper, including:\n -To address Reviewer 1’s concern that the policy training for device placement “may take more than one day on 320-640 GPUs”, we’ve updated the paper to clarify that we actually use 36 GPUs (or 68 GPUs for deep networks) for at most three hours. Furthermore, we ran additional experiments in which we achieved comparable results using only 5 GPUs for 2.5 hours, which means that it takes us around 12.5 GPU-hours to save 265 GPU-hours on training NMT for one epoch on WMT’14 En->Fr. 
For more experiments and discussions, please see the newly added subsection called “Overhead of Training Hierarchical Planner” in Section 3.\n -To address Reviewer 3’s concern about the motivation behind our method and the non-stationarity of the reward, we have added discussions to Section 1 explaining that we use a standard cloud environment with a shared cluster of CPUs and GPUs. Therefore, our CPUs serve other jobs concurrently, making our cost function non-stationary. We also want to make it clear that we *did* compare against cost-based optimizations in Table 1 and that we achieve significantly better results.\n\nIncorporating reviewer feedback has made our paper much stronger, so please consider updating your scores. Thanks for taking the time to review our paper!\n", "Thank you for your constructive feedback!\n\nThe reviewer is concerned that the policy training for device placement takes so many resources and quoted “320-640 GPUs” being used. In reality, we use 36 GPUs in our experiments (or 68 GPUs for deep networks). We apologize that this was not clear in the paper. [More details can be found in Section 2 under “Distributed Training”.]\n\nRegarding the concern about reproducibility, we confirm that it’s possible to replicate the experiments with only 5 K40 GPUs. We ran an experiment to partition a 4-layer NMT model on 5 devices (1 CPU and 4 GPUs). We used 5 GPUs, 1 for policy training and 4 for measuring time, and it took roughly 2.5 hours to find a good placement. While this may seem slow, it actually takes around 12.5 GPU-hours to save 265 GPU-hours on training NMT on WMT’14 En->Fr for one epoch. [More details can be found below and in Section 3 including Fig. 3 under “Overhead of Training Hierarchical Planner”.]\n\nThe reviewer is concerned with the lack of comparison against ColocRL. We want to emphasize that ColocRL makes a strong assumption that we have a human expert to manually assign operations to groups. Our method does not make this assumption. In addition to being more flexible, our method uses far fewer resources and actually gets better results. For example, for NMT (2-layer), our improvement over the best heuristics is 60.6%, compared to 19.0% reported in ColocRL. For NMT (4-layer) and NMT (8-layer), no results were reported for ColocRL, which we suspect is due to the model being unable to handle the large number of operations in these graphs.\n\n", "Thank you for your positive feedback. We will open-source our code once the paper gets accepted. \n\nThe reviewer asks if we are training a policy per model (which is the case) and whether it’s possible to use different embeddings for different ops, because it’s easier to overfit. While this is true, training the policy network will take longer without the shared embeddings. We actually tried this, and it took longer to train the policy network because the policy network has more parameters to learn.\n\nThe reviewer is concerned that “averaging is hard to understand, especially for the output sizes and number of outputs.” We apologize for this, as it is not exactly what we did. We corrected our paper as we are not averaging the operation embeddings, but we are using information about operations assigned to a group to make a new embedding for those groups. \n\nMore details are as follows, including how adjacency information is used (we also added these details in Section 2 of the submission). \n\nFirst, to create the embedding for each operation, we concatenate 3 vectors: \n1) A vector that embeds operation type information. 
We learn this vector similarly to how language-model embeddings are learned. Our vocabulary is the set of all TF operations and we learn an operation embedding of size 20.\n 2) A vector that contains output sizes and number of outputs for an operation. We set a fixed threshold (6 in our design) for the maximum number of possible output edges for an operation, and for each output edge we set a threshold (4 in our design) for the maximum dimension. We fill this vector of size 24 by reading the outputs of an operation one by one and filling in the output shapes. We fill the vector with -1 for non-existing output edges or dimensions.\n3) A vector that contains adjacency information for that operation. We index the graph by traversing it in a BFS manner and set the maximum number of incoming and outgoing edges to 12 (6 for each direction). We then fill the vector with the indices of the incoming and outgoing operations, and with -1 for non-existing edges. \n\nSecond, to create an embedding for each group, we concatenate 3 vectors: \n1) A vector that counts how many of each operation type are assigned to that group. The size of this vector is the size of the vocabulary of TensorFlow’s most widely used operations, which we limit to 200.\n2) A vector that counts the overall output shapes of all the operations in that group. This vector is created by adding all the operation output-shape embeddings described above (not including the -1 entries) and is of size 16.\n 3) A vector that contains group adjacency information. The size of this vector is the number of groups (256 in our experiments), and its i'th value is 1 if the group has edges to the i'th group and is 0 otherwise. \n", "Thanks for your constructive feedback! \n\nThe reviewer is concerned with the lack of references to previous works and comparison against them. First, we are happy to add more citations to related work (see Section 1 in the updated submission). We believe that related works such as SystemML will be a strong baseline for us if we want to expand this work to memory management, since unlike runtime, memory usage is deterministic. We also compared our approach against cost-based optimization implemented in the Scotch library (see Table 1) and showed that our method performs significantly better. The advantage of our method is that it’s not dependent on the hardware platform because our method can learn runtime information directly through experiments. Whereas to use Scotch, we need to feed information about the hardware platform to it.\n\nThe reviewer is asking why the cost is non-stationary and dynamic, and therefore is concerned with the motivation of the work. To answer this question we have added a discussion in Section 1 on why our reward, the runtime of executing a TensorFlow graph, is non-stationary and also made it more clear that we did compare against cost-based optimizations in Table 1. In summary, in our distributed environment, we use a shared cluster of CPUs and GPUs, and our CPUs can also serve other jobs at the same time. Furthermore, in the next generation of hardware platforms (such as Cloud TPUs), there will be a lot of interference between concurrent jobs. Again, the advantage of our method is that it’s not dependent on the hardware platform because our method can learn runtime information directly through experiments.\n\nRegarding “The template seems off. I don't see the usual two lines under the title (\"Anonymous authors\", \"Paper under double-blind review\").” and “The title seems misleading. 
\".... Device Placement\" seems to suggest that one is placing devices when in fact, the operators are being placed.” Thanks! We fixed the formatting and will think of new names. We used device placement to be consistent with previous work.\n" ]
[ -1, -1, 5, 5, 8, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 5, -1, -1, -1, -1 ]
[ "ryazKvH4M", "rJZNkyYzz", "iclr_2018_Hkc-TeZ0W", "iclr_2018_Hkc-TeZ0W", "iclr_2018_Hkc-TeZ0W", "iclr_2018_Hkc-TeZ0W", "Sk-qjGYlz", "rkSREOYgM", "BJuGT9zez" ]
iclr_2018_BJJLHbb0-
Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection
Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.
accepted-poster-papers
+ Empirically convincing and clearly explained application: a novel deep learning architecture and approach is shown to significantly outperform state-of-the-art in unsupervised anomaly detection. - No clear theoretical foundation and justification is provided for the approach - Connection and differentiation from prior work on simultaneously learning a representation and fitting a Gaussian mixture to it would deserve a much more thorough discussion / treatment.
train
[ "S1f48huxz", "r1tvocFgf", "B1aQ8_2ef", "S11z079mf", "B1wAzvEQf", "Sk4EfvVQf", "rJq9evE7G", "HyFqWv4Xf", "BJXb-w4Qz", "rkkvR84XM", "S1g4W3x-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "1. This is a good paper, makes an interesting algorithmic contribution in the sense of joint clustering-dimension reduction for unsupervised anomaly detection\n2. It demonstrates clear performance improvement via comprehensive comparison with state-of-the-art methods\n3. Is the number of Gaussian Mixtures 'K' a hyper-parameter in the training process? can it be a trainable parameter?\n4. Also, it will be interesting to get some insights or anecdotal evidence on how the joint learning helps beyond the decoupled learning framework, such as what kind of data points (normal and anomalous) are moving apart due to the joint learning ", "The paper presents a new technique for anomaly detection where the dimension reduction and the density estimation steps are jointly optimized. The paper is rigorous and ideas are clearly stated. The idea to constraint the dimension reduction to fit a certain model, here a GMM, is relevant, and the paper provides a thorough comparison with recent state-of-the-art methods. My main concern is that the method is called unsupervised, but it uses the class information in the training, and also evaluation. I'm also not convinced of how well the Gaussian model fits the low-dimensional representation and how well can a neural network compute the GMM mixture memberships.\n\n1. The framework uses the class information, i.e., “only data samples from the normal class are used for training”, but it is still considered unsupervised. Also, the anomaly detection in the evaluation step is based on a threshold which depends on the percentage of known anomalies, i.e., a priori information. I would like to see a plot of the sample energy as a function of the number of data points. Is there an elbow that indicates the threshold cut? Better yet it would be to use methods like Local Outlier Factor (LOF) (Breunig et al., 2000 – LOF:Identifying Density-based local outliers) to detect the outliers (these methods also have parameters to tune, sure, but using the known percentage of anomalies to find the threshold is not relevant in a purely unsupervised context when we don't know how many anomalies are in the data).\n2. Is there a theoretical justification for computing the mixture memberships for the GMM using a neural network? \n3. How do the regularization parameters \\lambda_1 and \\lambda_2 influence the results?\n4. The idea to jointly optimize the dimension reduction and the clustering steps was used before neural nets (e.g., Yang et al., 2014 - Unsupervised dimensionality reduction for Gaussian mixture model). Those approaches should at least be discussed in the related work, if not compared against.\n5. The authors state that estimating the mixture memberships with a neural network for GMM in the estimation network instead of the standard EM algorithm works better. Could you provide a comparison with EM?\n6. In the newly constructed space that consists of both the extracted features and the representation error, is a Gaussian model truly relevant? Does it well describe the new space? Do you normalize the features (the output of the dimension reduction and the representation error are quite different)? Fig. 3a doesn't seem to show that the output is a clear mixture of Gaussians.\n7. The setup of the KDDCup seems a little bit weird, where the normal samples and anomalies are reversed (because of percentage), where the model is trained only on anomalies, and it detects normal samples as anomalies ... 
I'm not convinced that it is the best example, especially as it is the one having significantly better results, i.e., scores ~0.9 vs. ~0.4/0.5 for the other datasets.\n8. The authors mention that “we can clearly see from Fig. 3a that DAGMM is able to well separate ...” - it is not clear to me; it does look better than the other ones, but not clearly so. If there is a clear separation from a different view, show that one instead. We don't need the same view for all methods. \n9. In the experiments the reduced dimension used is equal to 1 for two of the experiments and 2 for one of them. This seems very drastic!\n\nMinor comments:\n\n1. Fig.1: what dimension reduction did you use? Add axis labels.\n2. “DAGMM preserves the key information of an input sample” - what does key information mean?\n3. In Fig. 3 when plotting the results for KDDCup, I would have liked to see results for the best 4 methods from Table 1; OC-SVM performs better than PAE. Also DSEBM-e and DSEBM-r seem to perform very well when looking at the three measures combined. They are the best in terms of precision.\n4. Is the error in Table 2 averaged over multiple runs? If yes, how many?\n\nQuality – The paper is thoroughly written, and the ideas are clearly presented. It can be further improved as mentioned in the comments.\n\nClarity – The paper is very well written with clear statements, a pleasure to read.\n\nOriginality – Fairly original, but it still needs some work to justify it better.\n\nSignificance – Constraining the dimension reduction to fit a certain model is a relevant topic, but I'm not convinced of how well the Gaussian model fits the low-dimensional representation and how well a neural network can compute the GMM mixture memberships. \n", "Summary\n\nThis applications paper proposes using a deep neural architecture to do unsupervised anomaly detection by learning the parameters of a GMM end-to-end with reconstruction in a low-dimensional latent space. The algorithm employs a tailored loss function that involves reconstruction error on the latent space, penalties on degenerate parameters of the GMM, and an energy term to model the probability of observing the input samples.\n\nThe algorithm replaces the membership probabilities found in the E-step of EM for a GMM with the outputs of a subnetwork in the end-to-end architecture. The GMM parameters are updated with these estimated responsibilities as usual in the M-step during training.\n\nThe paper demonstrates improvements on a number of public datasets. Careful reporting of the tuning and hyperparameter choices renders these experiments repeatable, and hence a suitable improvement in the field. Well-designed ablation studies demonstrate the importance of the architectural choices made, which are generally well-motivated in intuitions about the nature of anomaly detection.\n\nCriticisms\n\nBased on the performance of GMM-EN, the reconstruction error features are crucial to the success of this method. Little to no detail about these features is included. Intuitively, the estimation network is given the latent code and some (probably highly redundant) information about the residual structure remaining to be modeled.\n\nSince this is so important to the results, more analysis would be helpful. Why did the choices that were made in the paper yield this success? 
How do you recommend other researchers or practitioners select from the large possible space of reconstruction features to get the best results?\n\nQuality\n\nThis paper does not set out to produce a novel network architecture. Perhaps the biggest innovation is the use of reconstruction error features as input to a subnetwork that predicts the E-step output in EM for a GMM. This is interesting and novel enough in my opinion to warrant publication at ICLR, along with the strong performance and careful reporting of experimental design.\n\n", "Thank you for your detailed responses. I am happy with the comments, and have revised the grading of the paper.\n\nI would add a short description for clean vs. contaminated model training in unsupervised anomaly detection (as in your responses to the comments) for the readers not familiar with the literature. It's good to see results with both types of training, added in Table 3. \n\nRewrite to make clearer: “For DSEBM, while it works reasonably well on multiple datasets, DAGMM outperforms as both latent representation and reconstruction error are jointly considered in energy modeling.” Possible: “DSEBM works reasonably well on multiple datasets, but DAGMM outperforms it, as DAGMM takes into account both the latent representation and the reconstruction error in the energy modeling.”\n\nAppendix B is very useful, and if the authors think it's helpful, you could switch the axes, where the energy is a function of the percentage; that would also help with interpreting the threshold cut. Also possibly show the y axis on a log scale if that helps the visualization. ", "Thanks for sharing these related works with us. It is interesting to read these related techniques from the speech recognition community.\n\nIn our revised paper, we added a paragraph in Section 2 to discuss their connections and differences. The main message is as follows: Unlike the existing methods you mentioned, we focus on unsupervised settings: DAGMM extracts useful features for anomaly detection through linear/non-linear dimensionality reduction realized by a deep autoencoder, and jointly learns their density under the GMM framework by mixture membership estimation, for which DAGMM can be viewed as a more powerful deep version of adaptive mixture of experts (Jacobs et al. (1991)) in combination with a deep autoencoder. More importantly, DAGMM combines induced reconstruction error and learned latent representation for unsupervised anomaly detection.", "Thanks for your valuable comments on our paper.\n\nQuestion 1. Is the number of Gaussian mixtures 'K' a hyper-parameter in the training process? Can it be a trainable parameter?\n\nYes, in the current DAGMM, 'K' is a hyperparameter. In our opinion, it could be a trainable parameter. One possible way is to incorporate a Dirichlet Process prior into the optimization process so that an optimal 'K' can be automatically inferred. Meanwhile, one may have to make significant changes to the architecture of DAGMM, as the number of output neurons for the estimation network is tightly coupled with 'K'. A new architecture could be required in order to handle the uncertain 'K', if 'K' becomes trainable. In summary, we believe that the question of how to make 'K' trainable is interesting and important, and we will explore the answer to this question in our future work. \n\nQuestion 2. 
Also, it will be interesting to get some insights or anecdotal evidence on how the joint learning helps beyond the decoupled learning framework, such as what kind of data points (normal and anomalous) are moving apart due to the joint learning.\n\nThanks for this constructive comment. In the revised paper, we added Appendix E, where we provide a case study on the KDDCUP dataset and show which kind of anomalous samples benefit more from joint learning.", "Due to the length constraint on comments, we have to split our response into several parts.\n\nThanks for your valuable comments on our paper.\n\nQuestion 1. The framework uses the class information, i.e., only data samples from the normal class are used for training, but it is still considered unsupervised. Also, the anomaly detection in the evaluation step is based on a threshold which depends on the percentage of known anomalies, i.e., a priori information. I would like to see a plot of the sample energy as a function of the number of data points. Is there an elbow that indicates the threshold cut? Better yet would be to use methods like Local Outlier Factor (LOF) (Breunig et al., 2000 – LOF: Identifying Density-based local outliers) to detect the outliers (these methods also have parameters to tune, sure, but using the known percentage of anomalies to find the threshold is not relevant in a purely unsupervised context when we don't know how many anomalies are in the data).\n\nWe answer this question from three aspects.\n\nFirst, why do we only use samples from the normal class for training? \n\nIn general, there are two settings for model training in unsupervised anomaly detection: clean training data and contaminated training data. \n\nClean training data is a widely adopted setting in many applications. For example, in the case of system fault detection, the detectors are trained on system logs generated from normal days. Under this setting, model training is still unsupervised, as there is no guidance from data labels that differentiate normal and abnormal samples. The experiment reported in Table 2 follows this setting.\n\nContaminated training data is another setting, where we are not sure whether there are anomalous data in the training data or not. Usually, the anomaly detection technique works well only when the contamination ratio is small. Again, model training in this setting is unsupervised, as the model does not know which samples are contaminated and receives no guidance from data labels either. In the revised paper, we added Table 3 to show how DAGMM and its baselines respond to the contaminated training data from the KDDCUP dataset.\n\nSecond, why do we use prior knowledge to set the threshold?\n\nAs far as we know, it is inevitable to decide a threshold for unsupervised anomaly detection techniques. Given the anomaly score of a sample, we still need to answer the question: should I report this sample as an anomaly? With a threshold, we are able to answer this question. For different techniques, the threshold may be decided at different phases. For OC-SVM, the threshold is decided at the model training phase (i.e., the parameter nu). For LOF and DAGMM, the threshold is decided at the testing phase. \n\nWhile it is important to decide a threshold, the problem of how to find the optimal threshold is non-trivial in practice. 
In most cases, it is a process with exploration and exploitation: if the threshold is too high with a high false positive rate, we decrease the threshold a bit; if the threshold is too low with low recall, we may increase the threshold.\n\nIn this work, we do not intend to solve the problem of how to find the optimal threshold. For validation purposes, we assume DAGMM and its baselines have the prior knowledge to set their own optimal threshold so that we can fairly compare their performance. \n\nThird, in the revised paper, we added Appendix B that reports the CDF of the energy function learned by DAGMM for all the datasets.\n\nQuestion 2. Is there a theoretical justification for computing the mixture memberships for the GMM using a neural network?\n\nThe estimation network in DAGMM is related to existing techniques such as neural variational inference [1] and adaptive mixture of experts [2]. In the revised paper, we added Section 3.5 to discuss how our technique connects to neural variational inference, and added a baseline that optimizes DAGMM under the framework of neural variational inference.\n\nQuestion 3. How do the regularization parameters \\lambda_1 and \\lambda_2 influence the results?\n\nIn the revised paper, we added Appendix F to discuss how these hyperparameters impact the performance of DAGMM.\n\nQuestion 4. The idea to jointly optimize the dimension reduction and the clustering steps was used before neural nets (e.g., Yang et al., 2014 - Unsupervised dimensionality reduction for Gaussian mixture model). Those approaches should at least be discussed in the related work, if not compared against.\n\nThanks for sharing the related work. In the revised paper, we added a discussion of them in Section 2.\n\nReferences\n\n[1] Andriy Mnih and Karol Gregor. \"Neural variational inference and learning in belief networks.\" arXiv preprint arXiv:1402.0030 (2014).\n[2] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.", "Question 10. Fig.1: what dimension reduction did you use? Add axis labels.\n\nWe utilize a deep autoencoder to perform dimension reduction on the dataset presented in Fig.1. In the revised paper, we added this missing information in the figure caption, and added axis labels to all the figures. \n\nQuestion 11. DAGMM preserves the key information of an input sample - what does key information mean?\n\nThanks for pointing out the confusion. As stated in the paper, the key information means the features derived from both the reduced dimensions discovered by dimensionality reduction and the induced reconstruction error, which are important for anomaly detection tasks.\n\nQuestion 12. In Fig. 3 when plotting the results for KDDCup, I would have liked to see results for the best 4 methods from Table 1; OC-SVM performs better than PAE. Also DSEBM-e and DSEBM-r seem to perform very well when looking at the three measures combined. They are the best in terms of precision.\n\nFor OC-SVM, we use the RBF kernel with an infinite number of dimensions in its kernel space, which is difficult to visualize and compare with Fig.3.\n\nDSEBM-e and DSEBM-r share the same latent representation. Unlike the methods presented in Fig.3, DSEBM does not include reconstruction features in energy modeling; therefore, it is also difficult to compare its visualization results with the ones in Fig.3. 
In the revised paper, we added Appendix C that includes the visualization results of DSEBM on the KDDCUP dataset.\n\nQuestion 13. Is the error in Table 2 averaged over multiple runs? If yes, how many?\n\nAs stated in the experiment section, Table 2 reports the average results over 20 runs. In the revised paper, we emphasized this information at the beginning of multiple paragraphs.\n\nLastly, we sincerely appreciate your constructive and very detailed comments.", "Question 5. The authors state that estimating the mixture memberships with a neural network for GMM in the estimation network instead of the standard EM algorithm works better. Could you provide a comparison with EM?\n\nThanks for pointing out this confusion. In this paper, we have no intention to claim that the estimation network works better than the traditional EM algorithm. Instead, our point is that anomaly detection tasks benefit more from the joint training of dimension reduction and density estimation, compared with decoupled training. Meanwhile, it is indeed interesting to see how well the EM algorithm works with deep autoencoders. Therefore, in the revised paper, we added a baseline called PAE-GMM-EM, which uses the EM algorithm to learn the GMM.\n\nQuestion 6. In the newly constructed space that consists of both the extracted features and the representation error, is a Gaussian model truly relevant? Does it well describe the new space? Do you normalize the features (the output of the dimension reduction and the representation error are quite different)? Fig. 3a doesn't seem to show that the output is a clear mixture of Gaussians.\n\nIn this work, we do not assume the underlying distribution is Gaussian. Instead, we utilize a mixture of Gaussian distributions to approximate an unknown distribution. Informally speaking, any distribution can be well approximated by a finite number of Gaussian mixtures.\n\nIn the current DAGMM, we do not perform any normalization on the output of dimension reduction and reconstruction features. Instead, we normalize input samples and carefully select reconstruction features (metrics) to keep the values in the low-dimensional space relatively small so that they are friendly to the estimation network training. In the revised paper, we added Appendix D to discuss reconstruction feature selection. \n\nQuestion 7. The setup of the KDDCup seems a little bit weird, where the normal samples and anomalies are reversed (because of percentage), where the model is trained only on anomalies, and it detects normal samples as anomalies ... I'm not convinced that it is the best example, especially as it is the one having significantly better results, i.e., scores ~0.9 vs. ~0.4/0.5 for the other datasets.\n\nThanks for the suggestion. In the revised paper, we added one more dataset, KDDCUP-Rev, which is derived from the KDDCUP dataset. In this dataset, \"normal\" samples are the majority class, and \"attack\" samples are anomalies.\n\nQuestion 8. The authors mention that we can clearly see from Fig. 3a that DAGMM is able to well separate ... - it is not clear to me; it does look better than the other ones, but not clearly so. If there is a clear separation from a different view, show that one instead. We don't need the same view for all methods.\n\nIn the revised paper, we modified the presentation in the second paragraph of Section 4.5 to make it more objective.\n\nQuestion 9. In the experiments the reduced dimension used is equal to 1 for two of the experiments and 2 for one of them. 
This seems very drastic!\n\nWe are also surprised by the fact that we can use 1 or 2 reduced dimensions to achieve state-of-the-art performance. We will share the source code on GitHub upon the acceptance of this work so that more people are able to verify this discovery.", "Thanks for your valuable comments on our paper. \n\nFor the reconstruction features used in the experiment, we report their details in the first paragraph of Section 4.3. In the revised paper, we added Appendix D to discuss why reconstruction features are important to anomaly detection and the principles that guide us to find candidate reconstruction features.\n\nThe question of how to find the set of reconstruction features that delivers the best results is important, but non-trivial. In our study, we discovered the two reconstruction features used in the experiment through a manual data exploration process. The principles in Appendix D are important guidelines for choosing candidate reconstruction metrics.", "This work obviously ignored several important related previous works from the speech recognition community. All these works use GMMs to model features produced by a bottle-neck feature extractor (trained in a way sort of similar to an autoencoder), and jointly trained the feature extractor & GMMs. For example,\n\n1. M. Paulik, \"Lattice-based training of bottleneck feature extraction neural networks\", Proc. Interspeech 2013.\nwhich trains GMMs using EM and bottle-neck features using SGD in an interleaved fashion.\n\n2. E. Variani, E. McDermott, and G. Heigold, \"A Gaussian mixture model layer jointly optimized with discriminative features within a deep neural network architecture\", ICASSP 2015.\n3. C. Zhang and P.C. Woodland, \"Joint optimisation of tandem systems using Gaussian mixture density neural network discriminative sequence training\", ICASSP 2017.\nThese two papers jointly trained GMMs and their bottle-neck features using SGD and different criteria. \n\n4. Z. Tüske, M. Sundermeyer, R. Schlüter, and H. Ney, \"Integrating Gaussian mixtures into deep neural networks: Softmax layer with hidden variables\", ICASSP 2015.\n5. Z. Tüske, P. Golik, R. Schlüter, and H. Ney, \"Speaker adaptive joint training of Gaussian mixture models and bottleneck features\", ASRU 2015.\nThese two papers trained a log-linear mixture model (seen as an extension to softmax) together with the features.\n\nThe major difference between the neural network architectures in this paper and those cited above (esp. those in 2 & 3) is perhaps mainly whether to use a separate network to estimate the membership of the sample. It is not certain if such a membership estimation network is useful given sufficient computational power. \n\n" ]
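The replies in this record describe replacing the E-step of EM with an estimation network that outputs soft mixture memberships, then scoring samples by a GMM energy. Here is a minimal PyTorch sketch of those two pieces, reconstructed from the descriptions above; it is illustrative rather than the authors' code, and the jitter term is an assumption added for numerical stability.

```python
import torch
from torch.distributions import MultivariateNormal

def gmm_parameters(gamma, z):
    """GMM statistics from soft memberships gamma (N, K) and latent codes z (N, D)."""
    n_k = gamma.sum(dim=0)                                    # effective count per component
    phi = n_k / gamma.size(0)                                 # mixture weights
    mu = gamma.t() @ z / n_k.unsqueeze(-1)                    # component means, (K, D)
    d = z.unsqueeze(1) - mu.unsqueeze(0)                      # deviations, (N, K, D)
    cov = torch.einsum('nk,nki,nkj->kij', gamma, d, d) / n_k.view(-1, 1, 1)
    return phi, mu, cov

def sample_energy(z, phi, mu, cov, jitter=1e-6):
    """E(z) = -log sum_k phi_k N(z | mu_k, cov_k); high energy flags anomalies."""
    cov = cov + jitter * torch.eye(mu.size(1))                # keep covariances invertible
    components = MultivariateNormal(mu, covariance_matrix=cov)
    log_p = components.log_prob(z.unsqueeze(1))               # (N, K) per-component log-density
    return -torch.logsumexp(torch.log(phi + 1e-12) + log_p, dim=1)
```

Because gamma comes from a network rather than an E-step, the energy is differentiable with respect to both the autoencoder and the estimation network, which is what makes the joint training discussed in the rebuttals possible.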
[ 8, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJJLHbb0-", "iclr_2018_BJJLHbb0-", "iclr_2018_BJJLHbb0-", "rJq9evE7G", "S1g4W3x-f", "S1f48huxz", "r1tvocFgf", "r1tvocFgf", "r1tvocFgf", "B1aQ8_2ef", "iclr_2018_BJJLHbb0-" ]
iclr_2018_BySRH6CpW
Learning Discrete Weights Using the Local Reparameterization Trick
Recent breakthroughs in computer vision make use of large deep neural networks, utilizing the substantial speedup offered by GPUs. For applications running on limited hardware, however, high-precision real-time processing can still be a challenge. One approach to solving this problem is training networks with binary or ternary weights, thus removing the need to calculate multiplications and significantly reducing memory size. In this work, we introduce LR-nets (Local reparameterization networks), a new method for training neural networks with discrete weights using stochastic parameters. We show how a simple modification to the local reparameterization trick, previously used to train Gaussian-distributed weights, enables the training of discrete weights. Using the proposed training, we test both binary and ternary models on MNIST, CIFAR-10 and ImageNet benchmarks and reach state-of-the-art results on most experiments.
accepted-poster-papers
Well-written paper on a novel application of the local reparametrisation trick to learn networks with discrete weights. The approach achieves state-of-the-art results. Note: I appreciate that the authors added a comparison to the Gumbel-softmax continuous relaxation approach during the review period, following the suggestion of a reviewer. This additional comparison strengthens the paper.
train
[ "BJHcawFxM", "SkOjP3Hlf", "ryZHzH9gz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes training binary and ternary weight distribution networks through the local reparametrization trick and continuous optimization. The argument is that due to the central limit theorem (CLT) the distribution on the neuron pre-activations is approximately Gaussian, with a mean given by the inner product between the input and the mean of the weight distribution and a variance given by the inner product between the squared input and the variance of the weight distribution. As a result, the parameters of the underlying discrete distribution can be optimized via backpropagation by sampling the neuron pre-activations with the reparametrization trick. The authors further propose appropriate initialisation schemes and regularization techniques to either prevent the violation of the CLT or to prevent underfitting. The method is evaluated on multiple experiments.\n\nThis paper proposed a relatively simple idea for training networks with discrete weights that seems to work in practice. My main issue is that while the authors argue about novelty, the first application of CLT for sampling neuron pre-activations at neural networks with discrete r.v.s is performed at [1]. While [1] was only interested in faster convergence and not on optimization of the parameters of the underlying distribution, the extension was very straightforward. I would thus suggest that the authors update the paper accordingly. \n\nOther than that, I have some other comments:\n- The L2 regularization on the distribution parameters for the ternary weights is a bit ad-hoc; why not penalise according to the entropy of the distribution which is exactly what you are trying to achieve? \n- For the binary setting you mentioned that you had to reduce the entropy thus added a “beta density regulariser”. Did you add R(p) or log R(p) to the objective function? Also, with alpha, beta = 2 the beta density is unimodal with a peak at p=0.5; essentially this will force the probabilities to be close to 0.5, i.e. exactly what you are trying to avoid. To force the probability near the endpoints you have to use alpha, beta < 1 which results into a “bowl” shaped Beta distribution. I thus wonder whether any gains you observed from this regulariser are just an artifact of optimization. \n- I think that a baseline (at least for the binary case) where you learn the weights with a continuous relaxation, such as the concrete distribution, and not via CLT would be helpful. Maybe for the network to properly converge the entropy for some of the weights needs to become small (hence break the CLT). \n\n[1] Wang & Manning, Fast Dropout Training.\n\nEdit: After the authors rebuttal I have increased the rating of the paper: \n- I still believe that the connection to [1] is stronger than what the authors allude to; eg. the first two paragraphs of sec. 3.2 could easily be attributed to [1].\n- The argument for the entropy was to include a term (- lambda * H(p)) in the objective function with H(p) being the entropy of the distribution p. The lambda term would then serve as an indicator to how much entropy is necessary.\n- There indeed was a misunderstanding with the usage of the R(p) regularizer at the objective function (which is now resolved).\n- The authors showed benefits compared to a continuous relaxation baseline.", "Summary of the paper:\nThe paper suggests to use stochastic parameters in combination with the local reparametrisation trick (previously introduced by Kingma et al. (2015)) to train neural networks with binary or ternary wights. 
Results on MNIST, CIFAR-10 and ImageNet are very competitive. \n\nPros:\n- The proposed method leads to state-of-the-art results.\n- The paper is easy to follow and clearly describes the implementation details needed to reach the results. \n\nCons:\n- The local reparametrisation trick itself is not new, and applying it to a multinomial distribution (with one repetition) instead of a Gaussian is straightforward, but its application to learning discrete networks is, to the best of my knowledge, novel and interesting. \n\nIt could be nice to include the results of Zhu et al. (2017) in the results table and to indicate, in brackets, the variance over different samples of weights resulting from your methods. \n\n\nMinor comments:\n- Some citations have a strange format: e.g. “in Hubara et al. (2016); Rastegari et al. (2016)” would be more readable as “by Hubara et al. (2016) and Rastegari et al. (2016)”\n- To improve notation, it could be directly written that W is the set of all w^l_{i,j} and \mathcal{W} is the joint distribution resulting from independently sampling from \mathcal{W}^l_{i,j}. \n- page 6: “on the last full precision network”: should probably be “on the last full precision layer”\n “ distributions has” -> “ distributions have” \n", "This paper introduces the LR-Net, which uses the reparametrization trick inspired by a similar component in VAEs. Although the idea of reparametrization itself is not new, applying it for the purpose of training a binary or ternary network, and sampling the pre-activations instead of the weights, is novel. From the experiments, we can see that the proposed method is effective. \n\nIt seems that there could be more things to show in the experiments part. For example, since it is using a multinomial distribution for the weights, it makes sense to see the entropy w.r.t. training epochs. Also, since the reparametrization is based on the Lyapunov Central Limit Theorem, which assumes statistical independence, a visualization of at least the correlation between the pre-activations of each layer would be more informative than showing the histogram. \n\nAlso, in the literature on low-precision networks, people are concerned with both training-time and test-time computational demands. Since you are sampling the pre-activations instead of the weights, I guess this approach is also able to reduce training-time complexity by an order of magnitude. Thus a calculation of train/test-time computation could highlight the advantage of this approach more clearly. " ]
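The CLT mechanics debated in the first review can be sketched in a few lines. The following is our illustration rather than the authors' implementation; the shapes, names, and the small variance floor are assumptions. For binary weights w in {-1, +1} with P(w = +1) = p, the pre-activation z = x·w is approximately Gaussian by the (Lyapunov) CLT, so one can sample z directly with the reparameterization trick and backpropagate into p.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_reparam_layer(x, theta):
    """Sample pre-activations of a binary-weight layer via the CLT.

    x     : (batch, d_in) layer inputs
    theta : (d_in, d_out) unconstrained parameters; p = sigmoid(theta) is
            the probability that each weight equals +1
    For w in {-1, +1}: E[w] = 2p - 1 and Var[w] = 1 - (2p - 1)^2.
    By the (Lyapunov) CLT the pre-activation z = x @ w is approximately
    Gaussian, so we sample it with the reparameterization trick; gradients
    then flow into theta through the mean and variance.
    """
    p = 1.0 / (1.0 + np.exp(-theta))
    m = 2.0 * p - 1.0                    # per-weight mean
    v = 1.0 - m ** 2                     # per-weight variance
    mean = x @ m                         # (batch, d_out)
    var = (x ** 2) @ v + 1e-8            # small floor to avoid sqrt(0)
    eps = rng.standard_normal(mean.shape)
    return mean + np.sqrt(var) * eps     # sampled pre-activations

# At test time one would instead draw discrete weights, e.g. with
# p = 1/(1 + exp(-theta)):  w = np.where(rng.random(p.shape) < p, 1., -1.)
```

At test time the discrete weights are what remove the multiplications and shrink memory, which is where the savings described in the abstract come from.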
[ 6, 7, 6 ]
[ 4, 3, 3 ]
[ "iclr_2018_BySRH6CpW", "iclr_2018_BySRH6CpW", "iclr_2018_BySRH6CpW" ]
iclr_2018_BJ_wN01C-
Deep Rewiring: Training very sparse deep networks
Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on it. Generic hardware and software implementations of deep learning also run more efficiently with sparse networks. Several methods exist for pruning the connections of a neural network after it has been trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to directly train a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are placed where they are most needed for the task, while their total number remains strictly bounded at all times. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.
accepted-poster-papers
A clearly explained, well-motivated and empirically supported algorithm for training deep networks while simultaneously learning their sparse connectivity. The approach is similar to previous work (in particular Welling et al., Bayesian Learning via Stochastic Gradient Langevin Dynamics, ICML 2011) but is novel in that it satisfies a hard constraint on the network sparsity, which could be an advantage for matching neuromorphic hardware limitations.
train
[ "Syx4zM9xM", "H1aEoGAgG", "r1UOC9lbf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors present an approach to implement deep learning directly on sparsely connected graphs. Previous approaches have focused on transferring trained deep networks to a sparse graph for fast or efficient utilization; using this approach, sparse networks can be trained efficiently online, allowing for fast and flexible learning. Further investigation is necessary to understand the full implications of the two main conceptual changes introduced here (signed connections that can disappear and random walk in parameter space), but the initial results are quite promising.\n\nIt would also be interesting to understand more fully how performance scales to larger networks. If the target connectivity could be pushed to a very sparse limit, where only a fixed number of connections were added with each additional neuron, then this could significantly shape how these networks are trained at very large scales. Perhaps the heuristics for initializing the connectivity matrices will be insufficient, but could these be improved in further work?\n\nAs a last minor comment, the authors should specify explicitly what the shaded areas are in Fig. 4b,c.", "The authors provide a novel, interesting, and simple algorithm capable of training with limited memory. The algorithm is well-motivated and clearly explained, and empirical evidence suggests that the algorithm works well. However, the paper needs additional examination in how the algorithm can deal with larger data inputs and outputs. Second, the relationship to existing work needs to be explained better.\n\nPro:\nThe algorithm is clearly explained, well-motivated, and empirically supported.\n\nCon:\nThe relationship to stochastic gradient markov chain monte carlo needs to be explained better. In particular, the update form was first introduced in [1], the annealing scheme was analyzed in [2], and the reflection step was introduced in [3]. These relationships need to be explained clearly.\nThe evidence is presented on very small input data. With something like natural images, the parameterization is much larger and with more data, the number of total parameters is much larger. Is there any evidence that the proposed algorithm could continue performing comparatively as the total number of parameters in state-of-the-art networks increases? This would require a smaller ratio of included parameters.\n\n[1] Welling, M. and Teh, Y.W., 2011. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11)(pp. 681-688).\n\n[2] Chen, C., Carlson, D., Gan, Z., Li, C. and Carin, L., 2016, May. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In Artificial Intelligence and Statistics(pp. 1051-1060).\n\n[3] Patterson, S. and Teh, Y.W., 2013. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems (pp. 3102-3110).\n \n", "This paper presents an iterative approach to sparsify a network already during training. During the training process, the amount of connections in the network is guaranteed to stay under a specific threshold. This is a big advantage when training is performed on hardware with computational limitations, in comparison to \"post-hoc\" sparsification methods, that compress the network after training.\nThe method is derived by considering the \"rewiring\" of an (artificial) neural network as a stochastic process. 
This perspective is based on a recent model in computational biology but can also be interpreted as a (sequential) Monte Carlo sampling-based stochastic gradient descent approach. References to previous work in this area are missing, e.g.\n\n[1] de Freitas et al., Sequential Monte Carlo Methods to Train Neural Network Models, Neural Computation 2000\n[2] Welling et al., Bayesian Learning via Stochastic Gradient Langevin Dynamics, ICML 2011\n\nEspecially the stochastic gradient method in [2] is strongly related to the presented approach.\n\nPositive aspects\n\n- The presented approach is well grounded in the theory of stochastic processes. The authors provide proofs of convergence by showing that the iterative updates converge to a fixed point of the stochastic process.\n\n- By keeping the temperature parameter of the stochastic process high, it can be directly applied to online transfer learning.\n\n- The method is specifically designed for online learning with limited hardware resources.\n\nNegative aspects\n\n- The presented approach is outperformed for moderate compression levels (by Han's pruning method for >5% connectivity on MNIST, Fig. 3 A, and by l1-shrinkage for >40% connectivity on CIFAR-10 and TIMIT, Fig. 3 B&C). Especially the results on MNIST suggest that this method is most advantageous for very high compression levels. However, in these cases the overall classification accuracy has already dropped significantly, which could limit the practical applicability.\n\n- A detailed discussion of the relation to previously existing, very similar work is missing (see above).\n\n\nTechnical Remarks\n\nFigs. 1, 2 and 3 are referenced on the pages following the page containing the figure. Readability could be slightly increased by placing the figures on the respective pages.\n" ]
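The reviews relate the rewiring rule to stochastic gradient Langevin dynamics, and that connection is easy to see in a sketch. The following is our rough paraphrase of a DEEP R-style step, not the authors' code: the zero re-initialization of revived connections, the resampling of their signs, and the uniform choice among dormant connections are assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_r_step(theta, sign, active, grad, lr=0.05, alpha=1e-4, temp=1e-3):
    """One DEEP R-style update with a hard bound on connectivity.

    theta  : (n,) nonnegative parameters; the effective weight is
             sign * theta for active connections and 0 otherwise
    sign   : (n,) fixed signs in {-1.0, +1.0}
    active : (n,) boolean mask of currently active connections
    grad   : (n,) gradient of the loss w.r.t. the effective weights
    Active parameters follow a Langevin-style random walk: a gradient
    step, an L1-like prior term alpha, and Gaussian noise (cf. SGLD).
    """
    n_active = int(active.sum())
    noise = np.sqrt(2.0 * lr * temp) * rng.standard_normal(theta.shape)
    theta[active] -= lr * (sign[active] * grad[active] + alpha)
    theta[active] += noise[active]
    # A connection whose parameter crosses zero is withdrawn (made dormant).
    active &= theta > 0.0
    # Re-activate randomly chosen dormant connections to keep the number
    # of active connections constant.
    n_missing = n_active - int(active.sum())
    if n_missing > 0:
        dormant = np.flatnonzero(~active)
        revived = rng.choice(dormant, size=n_missing, replace=False)
        active[revived] = True
        theta[revived] = 0.0  # assumption: fresh connections start at 0
        sign[revived] = rng.choice([-1.0, 1.0], size=n_missing)
    return theta, sign, active
```

The hard connectivity bound comes from the bookkeeping at the end of the step: the number of active connections never exceeds its initial value, which is the property that matters for the neuromorphic-hardware setting discussed in the abstract.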
[ 8, 5, 6 ]
[ 4, 5, 4 ]
[ "iclr_2018_BJ_wN01C-", "iclr_2018_BJ_wN01C-", "iclr_2018_BJ_wN01C-" ]
iclr_2018_SJQHjzZ0-
Quantitatively Evaluating GANs With Divergences Proposed for Training
Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.
accepted-poster-papers
+ clearly written and thorough empirical comparison of several metrics/divergences for evaluating GANs, most prominently parametric-critic-based divergences. - little technical novelty with respect to prior work. As noted by reviewers and an anonymous commentator: using an independent critic for evaluation has been proposed and used in practice before. + the contribution of the work thus lies primarily in its well-done and extensive empirical comparisons of multiple metrics and models
train
[ "ryX_FSexG", "H1uFgwqeM", "H1C2pZplz", "HJBlWX6mM", "SyqLlX6QM", "S1Gex7a7z", "Hyexk7TmM", "S1mjkm67z", "SJm-JvWfz", "SJ7anZWlM", "ry8IOs_yM", "SkCF1x4kG", "r1GaH_NAW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "Through evaluation of current popular GAN variants. \n * useful AIS figure\n * useful example of failure mode of inception scores\n * interesting to see that using a metric based on a model’s distance does not make the model better at that distance\nthe main criticism that can be given to the paper is that the proposed metrics are based on trained models which do not have an independent clear evaluation metrics (as classifiers do for inception scores). However, the authors do show that the results are consistent when changing the critic architecture. Would be nice to see if this also holds for changes in learning rates. \n * nice to see an evaluation on how models scale with the increase in training data.\n\nUsing an Independent critic for evaluation has been proposed and used in practice before, see “Comparison of Maximum Likelihood and GAN-based training of Real NVPs”, Danihelka et all, as well as Variational Approaches for Auto-Encoding Generative Adversarial Networks, Rosca at all.\n\nImprovements to be added to the paper:\n * How about overfitting? Would be nice to mention whether the proposed metrics are useful at detecting overfitting. From algorithm 1 one can see that the critic is trained on training data, but at evaluation time test data is used. However, if the generator completely memorizes the training set, the critic will not be able to learn anything useful. In that case, the test measure will not provide any information either. A way to go around this is to use validation data to train the critic, not training data. In that case, the critic can learn the difference between training and validation data and at test time the test set can be used. \n * Using the WGAN with weight clipping is not a good baseline. The improved WGAN method is more robust to hyper parameters and is the one currently used by the community. The WGAN with weight clipping is quite sensitive to the clipping hyperparameter, but the authors do not report having changed it from the original paper, both for the critic or for the discriminator used during training. \n * Is there a guidance for which metric should be used? \n\nFigure 3 needs to be made a bit larger, it is quite hard to read in the current set up. ", "This paper proposes using divergence and distance functions typically used for generative model training to evaluate the performance of various types of GANs. Through numerical evaluation, the authors observed that the behavior is consistent across various proposed metrics and the test-time metrics do not favor networks that use the same training-time criterion. \n\nMore specifically, the evaluation metric used in the paper are: 1) Jensen-Shannon divergence, 2) Constrained Pearson chi-squared, 3) Maximum Mean Discrepancy, 4) Wasserstein Distance, and 5) Inception Score. They applied those metrics to compare three different GANs: the standard DCGAN, Wasserstein DCGAN, and LS-DCGAN on MNIST and CIFAR-10 datasets. \n\nSummary:\n——\nIn summary, it is an interesting topic, but I think that the paper does not have sufficient novelty. Some empirical results are still preliminary. It is hard to judge the effectiveness of the proposed metrics for model selection and is not clear that those metrics are better qualitative descriptors to replace visual assessment. In addition, the writing should be improved. See comments below for details and other points.\n\nComments:\n——\n1.\tIn Section 3, the evaluation metrics are existing metrics and some of them have already been used in comparing GAN models. 
Maximum mean discrepancy has been used before in work by Yujia Li et al. (2016, 2017).\n\n2.\tIn the experiments, the proposed metrics were only tested on small-scale datasets; the authors should evaluate on larger datasets such as CIFAR-100, Toronto Faces, LSUN bedrooms or CelebA.\n\n3.\tIn the experiments, the authors noted that “Gaussian observable model might not be the ideal assumption for GANs. Moreover, we observe a high log-likelihood at the beginning of training, followed by a drop in likelihood, which then returns to the high value, and we are unable to explain why this happens.” Could the authors give an explanation for this phenomenon? The authors should look into this more carefully.\n\n4.\tIn algorithm 1, it seems that the distance is computed via gradient descent. Is it possible to show that the optimization always converges? Is it meaningful to compare the metrics if some of them cannot be properly computed?\n\n5. With many different metrics for assessing GANs, how should people choose? How do we trust the scores? Recently, the Fréchet Inception Distance (FID) was proposed to evaluate the samples generated from GANs (Heusel et al. 2017); how do the above scores compare with FID?\n\nMinor Comments:\n——\n1.\tWriting should be fixed: “It seems that the common failure case of MMD is when the mean pixel intensities are a better match than texture matches (see Figure 5), and the common failure cases of IS happens to be when the samples are recognizable textures, but the intensity of the samples are either brighter or darker (see Figure 2).”\n", "The paper proposes an evaluation method for GANs using four standard distribution distances from the literature, namely:\n- JSD\n- Pearson-chi-square\n- MMD\n- Wasserstein-1\n\nFor each distance, a critic is initialized with parameters p. The critic is a neural network with the same architecture as the discriminator.\nThe critic then takes samples from the trained generator model and samples from the ground-truth dataset. It trains itself to maximize the distance measure between these two distributions (trained via gradient descent).\n\nAfter convergence, these critics then give a measure of the quality of the generator (the lower the better).\n\nThe paper is easy to read, and the experiments are well thought out.\nFigure 3 is missing the (e) and (f) sub-figures.\n\nWhen proposing a distance measure for GANs (which is held to a high standard, because everyone is looking forward to a robust measure), one has a lot of convincing to do. The paper only does experiments on two small datasets, MNIST and CIFAR. If the paper is to convince me that this metric is good and should be used, I need to see experiments on one large-scale dataset, such as ImageNet or LSUN. If one can clearly identify the good generators from bad generators using a weighted sum of these 4 distances on ImageNet or LSUN, this is a metric that is going to stand well.\nIf such experiments are constructed in the rebuttal period, I shall raise my rating.", "The revised version of the paper contains several additional experimental contributions motivated by the reviews we received. 
Though we have details of each of these in our responses to the reviews, we wanted to provide a short summary as follows:\n\n- Comparison with human perceptual scores (a correlation test between the different metrics and human perceptual scores)\n- Evaluation on larger image data, LSUN bedrooms\n- Use of the (improved) WGAN with gradient penalty\n- Comparison to the Fréchet Inception Distance\n- Investigating the metrics as a means of detecting overfitting\n\nThank you for your interest!", "Thank you for directing us to “GANs for Biological Image Synthesis” by Osokin et al. We have included references to this work in the revised paper. \n\nThank you!!\n", "Thank you for your review.\n\nAt the reviewer’s suggestion, we conducted the same experiments on the LSUN bedroom dataset. We used 90,000 images at 64 x 64 resolution to train GAN, LSGAN, and WGAN, and tested on another 90,000 unseen images. We evaluated using the LS score, IW distance, and MMD. We omitted the Inception score, because the LSUN bedroom dataset contains just a single class and there is no pre-trained convolutional network available (the Inception score needs a pre-trained convolutional network). Samples from each model are also added in the appendix of the paper. Here is a summary of the results:\n\n LS (higher the better) IW (lower the better) MMD (lower the better)\nGAN : 0.14614 3.79097 0.00708\nLSGAN: 0.173077 3.36779 0.00973\nWGAN : 0.205725 2.91787 0.00584\n\nAll three metrics agree that WGAN has the best score. LSGAN is ranked second according to the LS score and IW distance; in contrast, MMD puts GAN in second place and LSGAN in third place. Nevertheless, in our more recent experiments added to the revised version of the paper, we showed that the MMD score often disagrees with human perceptual scores.\n\nIn summary, we applied our evaluation methods to larger images, and the performance of IW and LS is consistent with what we observed on MNIST and CIFAR10. \n\nWe added these results to the paper.\n\n\nAdditionally, we would like to note that we added a comparison between the evaluation metrics and human perceptual scores. Please see the response (***) to Reviewer 1. \n\nThank you!!\n", "Thank you for your review!\n\nRegarding comment 1: \nThe reviewer noted that MMD has been used in previous works by Yujia Li et al. [1,2]. Indeed, MMD has been proposed as a training objective in many previous works. Nevertheless, the goal of this paper was to consider different evaluation metrics for scoring GANs and to test whether one type of GAN is statistically better than another under different metrics. Our claim of novelty is not in proposing a new metric but in evaluating GANs under many different metrics. We made the observation that many metrics have been used to train GANs but, surprisingly, have not been used to evaluate GANs at test time. Hence, we used those metrics with a critic network to evaluate GANs. Li et al. [1,2] have not employed MMD as a GAN evaluation metric at test time. In the paper, we systematically compared and ranked GANs under different metrics. \n\n\nRegarding comment 2: Please see our response to Reviewer 3. \n\n\nRegarding comment 3: The mentioned phenomenon for the log-likelihood using AIS in [3] is interesting. However, we do not know why it has that behaviour, and we believe that finding this out is beyond the scope of our paper. It would be best to directly ask the authors of [3].\n\nRegarding comment 4: The reviewer asked about convergence guarantees using gradient descent. 
Note that gradient descent is employed widely in deep learning, and optimizing the critic’s objective (the distance) is exactly the same as training a deep feedforward network with gradient descent. This comment applies to all deep learning methods and is not specific to our paper. \n\nNote: we include the training curves of the critics to show that at least the training curves converge (see Figure 26).\n\nRegarding comment 5: Please see our response to the review (Fréchet Inception Distance for evaluating GANs) from Nov. 20th 2017. We have included substantial experimental results in the updated version of the paper.\n\n(***)\nMoreover, we added a comparison between the evaluation metrics and human perceptual scores. We showed which metrics are more statistically correlated with human perceptual scores. This was done based on the Wilcoxon rank sum test and a two-sided Fisher’s test. The fraction of pairs on which each metric agrees with humans (higher the better):\n\n Metric Fraction agreed (# agreed pairs / # total pairs)\nInception score 0.862595 (113 / 131)\nIW Distance 0.977099 (128 / 131)\nLS score 0.931298 (122 / 131)\nMMD 0.83261 (109 / 131) \n\n\nIt shows that the IW distance agreed the most with human perceptual scores, followed by LS, the Inception score, and MMD. \n\nAlso, here are the results of a two-sided Fisher’s test (with no multiple-comparison correction) of whether these fractions are the same:\n\nIS equals IW : p-value = 0.000956\nIS equals LS : p-value = 0.102621\nIS equals MMD : p-value = 0.606762\nIW equals LS : p-value = 0.136684\nIW equals MMD : p-value = 0.000075\nLS equals MMD : p-value = 0.020512\n\nOverall, it demonstrates that IW and LS are significantly more aligned with the perceptual scores than the Inception Score and MMD, p < 0.05 (see Section 4.2.1 for details).\n\nWe have improved the quality of the writing in the revised paper.\nThank you!!\n\n\n[1] Yujia Li, Kevin Swersky and Richard Zemel. Generative Moment Matching Networks. International Conference on Machine Learning (ICML), 2015.\n[2] Yujia Li, Alexander Schwing, Kuan-Chieh Wang, Richard Zemel. Dualing GANs. https://arxiv.org/abs/1706.06216\n[3] Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov and Roger Grosse. On the Quantitative Analysis of Decoder-Based Generative Models. ICLR, 2017. \n", "Thank you for your review and thank you for directing us to “Comparison of Maximum Likelihood and GAN-based training of Real NVPs” by Danihelka et al. We have included references to this work in the revised paper. \n\nRegarding the comment on WGAN with weight clipping:\nWe agree with the reviewer. We will include the clipping hyperparameter, which was 0.1. Additionally, we added experiments with the (improved) WGAN with gradient penalty, as shown above. The results can also be found in the updated version of the paper.\n\n\n\nRegarding the comment on overfitting: \nWe agree with the reviewer’s comments. As such, we experimented with the scenario proposed by the reviewer. We trained two critics, on training data and on validation data respectively, and evaluated both on test data. \n\nWe trained six GANs (GAN, LSGAN, WGAN_GP, DRAGAN, BEGAN, EBGAN) on MNIST and Fashion-MNIST. We trained these GANs with 50K training examples. At test time, we used 10k training and 10k validation examples for training the critics, and evaluated on 10k test examples. 
Here, we present the test scores from the critics trained on training and validation data:\n\nFashion-MNIST\n LS score (trained on training data) LS score (trained on validation data) \nLSGAN 0.135 +- 0.0046 0.136 +- 0.0074\nGAN \t 0.1638 +- 0.010 0.1635 +- 0.0006\nDRAGAN 0.1638 +- 0.015 0.1645 +- 0.0151\nBEGAN 0.1133 +- 0.042 0.0893 +- 0.0095 \nEBGAN 0.0037 +- 0.0009 0.0048 +- 0.0023\nWGAN_GP 0.000175 +- 0.0000876 0.000448 +- 0.0000862\n\nMNIST \n LS score (trained on training data) LS score (trained on validation data) \nLSGAN 0.323 +- 0.0104 0.352 +- 0.0143\nGAN \t 0.312 +- 0.010 0.4408 +- 0.0201\nDRAGAN 0.318 +- 0.012 0.384 +- 0.0139\nBEGAN 0.081 +- 0.016 0.140 +- 0.0329 \nEBGAN 3.38e-6 +- 1.86e-7 3.82e-6 +- 2.82e-7\nWGAN_GP 0.196 +- 0.006 0.307 +- 0.0381\n\nNote that we also have the IW and FID evaluations of these models in the paper. For Fashion-MNIST, we find that the test scores with critics trained on training and validation data are very close. Hence, we don’t see any indication of overfitting. On the other hand, there are gaps between the scores for the MNIST dataset, and the test scores from critics trained on the validation set are better than those from critics trained on the training set. \n\n\n\nRegarding the comment on guidance towards a metric:\n\nIn terms of guidance on which metric to use, we recommend using the metric that is the closest to the human perceptual score! Thus, we added a comparison between the evaluation metrics and human perceptual scores. Please see the response (***) to Reviewer 1. \n\n\nThank you!!\n", "Hi all,\nI have a reference recommendation: prior work using variants of C2ST for evaluating GANs.\nGANs for Biological Image Synthesis\nAnton Osokin, Anatole Chessel, Rafael E. Carazo Salas, Federico Vaggi\nICCV 2017\nhttps://arxiv.org/abs/1708.04692\n\nThis paper used Metric 1 and Metric 4 (and the approximation of the Wasserstein distance derived from WGAN-GP) to evaluate the performance of GAN, WGAN, and WGAN-GP for a specific application. They did a detailed study of how these metrics correlated with visual quality and performed several sanity checks of these metrics (see Section 5.1 and appendix A). I think it would be appropriate to cite this work.\n", "Thank you for directing us to the FID paper.\n\nAs the reviewer stated, FID computes the Wasserstein-2 distance between Gaussians. The sufficient statistics of the Gaussians come from the first and second moments of the neural network features (convolutional feature maps). \n\nCompared to our proposed evaluation methods, FID has an advantage and a disadvantage. The main advantage is speed: calculating the FID is much faster than our evaluation methods. The disadvantage of FID is that it only considers differences in the first two moments of the samples, which can be insufficient unless the feature maps are Gaussian distributed. On the other hand, the four metrics that we consider do not make any assumptions about the distribution of the samples. \n\nWe agree with the reviewer on analyzing the experiments with FID! So, we have run experiments that include FID: we included the FID scores in Table 3, which shows the overall performance of DCGAN, LSGAN, and WGAN on CIFAR10. We also evaluated more recently proposed models, such as DRAGAN, BEGAN, EBGAN, and WGAN_GP, based on the off-the-shelf package, on MNIST and Fashion-MNIST. The evaluation metrics include LS, IW, and FID. \n\n\nFor CIFAR10, the FID results agree with LS. 
The FID results are as follows: \n CIFAR10 FID\n DCGAN : 0.112 +- 0.010; \n W-DCGAN : 0.095 +- 0.003; \n LS-DCGAN : 0.088 +- 0.008 \n(the smaller the FID the better; see Table 3 for the other metric scores). According to FID, LS-DCGAN is the best among the three models. \n\nFor MNIST, the three metrics, LS, IW, and FID, agree with each other on the rank as well. They all find that samples from DRAGAN are the best, then LSGAN, and so on. \n MNIST Metric \n DCGAN IW: 0.111 +- 0.0074 LS: 0.4814 +- 0.0083 FID: 1.84 +- 0.15\n EBGAN IW: 0.029 +- 0.0026 LS: 0.7277 +- 0.0159 FID: 5.36 +- 0.32\n WGAN GP IW: 0.035 +- 0.0059 LS: 0.7314 +- 0.0194 FID: 2.67 +- 0.15\n LSGAN IW: 0.115 +- 0.0070 LS: 0.5058 +- 0.0117 FID: 2.20 +- 0.27\n BEGAN IW: 0.009 +- 0.0063 LS: -\t \t\t FID: 15.9 +- 0.48\n DRAGAN IW: 0.116 +- 0.0116 LS: 0.4632 +- 0.0247 FID: 1.09 +- 0.13\n\nFor Fashion-MNIST, LS, IW and FID agree on the rank of the worst ones, but there are some subtle differences between LS and IW versus FID. According to LS and IW, DRAGAN samples are ranked first and LSGAN samples are ranked second, and vice versa for FID. \n Fashion-MNIST Metric\n DCGAN IW: 0.69 +- 0.0057 LS: 0.0202 +- 0.00242 FID: 3.23 +- 0.34\n EBGAN IW: 0.99 +- 0.0001 LS: 2.2e-5 +- 5.3e-5 FID: 104.08 +- 0.56\n WGAN GP IW: 0.89 +- 0.0086 LS: 0.0005 +- 0.00037 FID: 2.56 +- 0.25\n LSGAN IW: 0.68 +- 0.0086 LS: 0.0208 +- 0.00290 FID: 0.62 +- 0.13\n BEGAN IW: 0.90 +- 0.0159 LS: 0.0016 +- 0.00047 FID: 1.51 +- 0.16\n DRAGAN IW: 0.66 +- 0.0108 LS: 0.0219 +- 0.00232 FID: 0.97 +- 0.14\n \n \nWe will make sure to add these descriptions and experimental results to the paper for the next revision cycle.\n\nThank you!!", "[1] proposed the Fréchet Inception Distance (FID) to evaluate GANs, which is the Fréchet distance, aka the Wasserstein-2 distance, between the statistics of real-world and generated samples. [1] clearly showed in their experiments a much more consistent behaviour of the FID compared to the Inception Score. It is now unclear if the analysis in this paper could be improved by using the FID for GAN evaluations.\n\n[1] https://arxiv.org/abs/1706.08500\n", "Thank you for directing us to C2ST.\n\nThere is a relationship between the methods proposed here and classifier two-sample tests (C2ST). C2ST proposes to train a classifier that can distinguish samples drawn from two distributions P and Q and to accept/reject the null hypothesis.\n\nOne commonality shared between our proposed test methods and C2ST is that both require optimizing a function (training a neural network) at test time. C2ST trains a neural network to maximize classification accuracy between data and samples, whereas our method trains a neural network to maximize the distance between the data and the samples.\n\nIn our paper, we considered four distance metrics that belong to two classes of metrics, $\phi$-divergences and IPMs. Sriperumbudur et al. [1] 
have shown that the optimal risk is associated with a binary classification problem with class-conditional distributions P and Q when the discriminant function is restricted to a certain class F (Theorem 17 of Sriperumbudur et al. [1]).\n\nLet the optimal risk function be\n\n R(L, F) = inf_{f \in F} \int L(y, f(x)) d\mu(x, y)\n\nwhere F is the set of discriminant functions (classifiers), y \in {-1, 1}, and L is the loss function.\n\nBy the following derivation, we can see that the optimal risk function becomes a negative IPM:\n\nR(L, F) = inf_{f \in F} \int L(y, f(x)) d\mu(x, y)\n = inf_{f \in F} [ eps \int L(1, f(x)) dP(x) + (1 - eps) \int L(-1, f(x)) dQ(x) ]\n = inf_{f \in F} [ -\int f dP(x) + \int f dQ(x) ]\n = - IPM_F(P, Q)\n\nwhere eps is the prior probability of class 1 and we choose L(1, f(x)) = -f(x) / eps and L(-1, f(x)) = f(x) / (1 - eps). \n\nThe second equality is derived by separating the loss for class 1 and class -1. The third equality follows from the way we chose L(1, f(x)) and L(-1, f(x)). The last equality follows from the fact that F is symmetric around zero (f \in F => -f \in F). Hence, this shows that by appropriately choosing L, the MMD and the Wasserstein distance can be understood as the negative optimal L-risk associated with a binary classifier over a specific function class F. For example, the Wasserstein distance and the MMD are equivalent to the optimal risk with 1-Lipschitz classifiers and with RKHS classifiers of unit norm, respectively.\n\nSimilarly, since every binary classifier has a corresponding distance metric from the IPM family, a C2ST binary classifier must have a distance function associated with it through a specific class F. For example, if the binary classifier is a KNN, then we are considering the IPM with the topology induced by the KNN; likewise, if the classifier is a neural network, then we are considering the IPM with the topology induced by the neural network.\n\nThank you for asking about the relationship between our proposed testing methods and C2ST.\nWe will make sure to add these descriptions in the paper for the next revision cycle!!!\n\n\n[1] Sriperumbudur et al. On Integral Probability Metrics, $\phi$-divergences and Binary Classification.", "What is the relationship between the methods proposed here and classifier two-sample tests?\n\nhttps://arxiv.org/abs/1610.06545" ]
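Two of the quantities discussed at length in this thread, MMD and FID, have closed-form estimators that need no critic training, so they can be sketched independently of the paper's Algorithm 1. The code below is our illustration, not the authors' implementation; the Gaussian-kernel bandwidth and the use of NumPy/SciPy are assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimator of squared MMD with a Gaussian RBF kernel.

    x : (n, d) samples from P (e.g. features of test data)
    y : (m, d) samples from Q (e.g. features of generator samples)
    """
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # drop the diagonal terms for the unbiased within-sample averages
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet (Wasserstein-2) distance between two Gaussians, as in FID."""
    covmean = sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary parts
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

For FID, `mu` and `sigma` would be the empirical mean and covariance of convolutional feature maps of real and generated samples, matching the description in the FID comment above.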
[ 7, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJQHjzZ0-", "iclr_2018_SJQHjzZ0-", "iclr_2018_SJQHjzZ0-", "iclr_2018_SJQHjzZ0-", "SJm-JvWfz", "H1C2pZplz", "H1uFgwqeM", "ryX_FSexG", "iclr_2018_SJQHjzZ0-", "ry8IOs_yM", "iclr_2018_SJQHjzZ0-", "r1GaH_NAW", "iclr_2018_SJQHjzZ0-" ]
iclr_2018_BkLhaGZRW
Improving GAN Training via Binarized Representation Entropy (BRE) Regularization
We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse, which helps G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.
accepted-poster-papers
+ Original regularizer that encourages discriminator representation entropy is shown to improve GAN training. + good supporting empirical validation - While intuitively reasonable, no compelling theory is given to justify the approach - The regularizer used in practice is a heap of heuristic approximations (continuous relaxation of a rough approximate measure of the joint entropy of a binarized activation vector) - The writing and the mathematical exposition could be clearer and more precise
train
[ "B1ssgfcgM", "SJa7Mu9lf", "By5aywgZf", "Hk1j4_p7z", "HyAkfOTQf", "Skcy4dTXf", "r1xcQdaXG", "Sk8_EdpXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper proposes a regularizer that encourages a GAN discriminator to focus its capacity in the region around the manifolds of real and generated data points, even when it would be easy to discriminate between these manifolds using only a fraction of its capacity, so that the discriminator provides a more informative signal to the generator. The regularizer rewards high entropy in the signs of discriminator activations. Experiments show that this helps to prevent mode collapse on synthetic Gaussian mixture data and improves Inception scores on CIFAR10. \n\nThe high-level idea of guiding model capacity by rewarding high-entropy activations is interesting and novel to my knowledge (though I am not an expert in this space). Figure `1 is a fantastic illustration that presents the core idea very clearly. That said I found the intuitive story a little bit difficult to follow -- it's true that in Figure 1b the discriminator won't communicate the detailed structure of the data manifold to the generator, but it's not clear why this would be a problem -- the gradients should still pull the generator *towards* the manifold of real data, and as this happens and the manifolds begin to overlap, the discriminator will naturally be forced to allocate its capacity towards finer-grained details. Is the implicit assumption that for real, high-dimensional data the generator and data manifolds will *never* overlap? But in that case much of the theoretical story goes out the window. I'd also appreciate further discussion of the relationship of this approach to Wasserstein GANs, which also attempt to provide a clearer training gradient when the data and generator manifolds do not overlap.\n\nMore generally I'd like to better understand what effect we'd expect this regularizer to have. It appears to be motivated by improving training dynamics, which is understandably a significant concern. Does it also change the location of the Nash equilibria? (or equivalently, the optimal generator under the density-ratio-estimator interpretation of discriminators proposed by https://arxiv.org/abs/1610.03483). I'd expect that it would but the effects of this changed objective are not discussed in the paper. \n\n The experimental results seem promising, although not earthshattering. I would have appreciated a comparison to other methods for guiding discriminator representation capacity, e.g. autoencoding (I'd also imagine that learning an inference network (e.g. BiGAN) might serve as a useful auxiliary task?). \n\nOverall this feels like an cute hack, supported by plausible intuition but without deep theory or compelling results on real tasks (yet). As such I'd rate it as borderline; though perhaps interesting enough to be worth presenting and discussing.\n\nA final note: this paper was difficult to read due to many grammatical errors and unclear or misleading constructions, as well as missing citations (e.g. sec 2.1). 
From the second paragraph alone:\n\"impede their wider applications in new data domain\" -> domains\n\"extreme collapse and heavily oscillation\" -> heavy oscillation\n\"modes of real data distribution\" -> modes of the real data distribution\n\"while D fails to exploit the failure to provide better training signal to G\" -> should be \"this failure\" to refer to the previously-described generator mode collapse, or rewrite entirely\n\"even when they are their Jensen-Shannon divergence\" -> even when their Jensen-Shannon divergence\n I'm sympathetic to the authors, who are presumably non-native English speakers; many good papers contain mistakes, but in my opinion the level in this paper goes beyond what is appropriate for published work. I encourage the authors to have the work proofread by a native speaker; clearer writing will ultimately increase the reach and impact of the paper. \n", "The paper proposes a novel regularizer that is to be applied to the (rectifier) discriminators in GANs in order to encourage a better allocation of the \"model capacity\" of the discriminators over the (potentially multi-modal) generated / real data points, which might in turn help with learning a more faithful generator.\n\nThe paper is in general very well written, with intuitions and technical details well explained and empirical studies carefully designed and executed.\n\nSome detailed comments / questions:\n\n1. It seems the concept of \"binarized activation patterns\", which the proposed regularizer is designed upon, is closely coupled with rectifier nets. I would therefore suggest that the authors highlight this assumption / constraint more clearly, e.g. in the abstract.\n\n2. In order for the paper to be more self-contained, maybe list at least once the formula for \"rectifier net\" (something like \"a^T max(0, wx + b) + c\")? This might also help the readers better understand where the polytopes in Figure 1 come from.\n\n3. In section 3.1, when presenting random variables (U_1, ..., U_d), I find the word \"Bernoulli\" a bit misleading because typically people would expect U_i to take values from {0, 1} whereas here you assume {-1, +1}. This can be made clear with just one sentence, yet it would greatly help clear away confusion in the subsequent derivations.\nAlso, \"K\" is already used to denote the mini-batch size, so it's a slight abuse to reuse \"k\" to denote the \"kth marginal\".\n\n4. In section 3.2, it may be clearer to explicitly point out the use of the \"3-sigma\" rule for Gaussian distributions here. But I don't find it justified anywhere why \"leave 99.7% of i, j pairs unpenalized\" is something to be sought here.\n\n5. In section 3.3, when presenting Corollary 3.3 of Gavinsky & Pudlak (2015), \"n\" abruptly appears without proper introduction / context.\n\n6. For the empirical study with 2D MoG, would an imbalanced mixture make it harder for the BRE-regularized GAN to escape from mode collapse?\n\n7. Figure 3 is missing the sub-labels (a), (b), (c), (d).", "The paper presents a method for improving the diversity of Generative Adversarial Networks (GANs) by promoting the Gnet's weights to be as informative as possible. This is achieved by penalizing the correlation between responses of hidden nodes and promoting low intra-node entropy. Numerical experiments demonstrating the increase in diversity of the generated samples are shown.\n\nConcerns.\n\nThe paper is hard to read and it is difficult to identify the precise contribution of the authors. 
Such a contribution can, in my opinion, be summarized in a potential of the form\n\n$$\nR_{BRE} = a R_{ME} + b R_{AC} = a \sum_k \sum_i s_{ki}^2 + b \sum_{\langle k,l \rangle} \sum_i s_{ki} s_{li}\n$$\n(Note that my version of R_ME is different from the one proposed by the authors, but it could have the same effect.)\n\nHere a and b are parameters that weight the relative contribution of each term (maybe computed as suggested in the paper).\n\nIn this formulation, R_ME has a high response if the node has saturated responses (-1's or 1's); as one desires such saturated responses, a should be negative.\n\nThe R_AC term penalizes correlation between responses of different nodes.\n\nThe points are: \n\na) The second term will introduce low correlation in saturated vectors, so they will be informative. \n \nb) Why do the authors use the softsign instead of the tanh? $\tanh \in C^2$! Meanwhile, the derivative of softsign is discontinuous.\n\nc) It is not clear whether the softsign is used in addition to the activation function: on page 5 it is said that “R_BRE can be applied on any rectified layer before the nonlinearity”. This seems to say that the authors propose to add a second activation function (the softsign); why not use the one in the layer?\n\nd) The authors found it hard to regularize the gradient $\nabla_x D(x)$, even though they tried tanh- and cosine-based activations. It seems that, effectively, they introduce their additional softsign in the process.\n\ne) In the definition of R_AC, I denoted by <k,l> the pairs of nodes (k \ne l). However, I think it should be over pairs in the same layer. This is not clear in the paper.\n\nf) It is supposed that L_1 regularization promotes the weights to be informative, and this work is doing something similar. How does L_1 regularization compare with the proposal?\n\nRecommendation\nI tried to read the paper several times and I admit that it was very hard for me. The most difficult part is the lack of precision in the maths; it is hard to figure out what the authors' contributions indeed are. I think there is some merit in the work. However, it is not very well organized and many points are not defined. In my opinion, the paper is in a preliminary stage and should be refined. I recommend a “SOFT” REJECT\n", "** Reviewer’s interpretation of the regularizer and potential confusion about its effect: **\n\n“... in my opinion, be summarized in a potential of the form $$R_{BRE} = a R_{ME} + b R_{AC} = a \sum_k \sum_i s_{ki}^2 + b \sum_{\langle k,l \rangle} \sum_i s_{ki} s_{li}$$”\n\nOur formulation of BRE is:\n$$R_{ME} = \frac{1}{d}\sum_{k=1}^d \bar{s}_{(k)}^2 = \frac{1}{d}\sum_{k=1}^d \frac{1}{K^2}(\sum_{i=1}^{K}s_{k,i})^2$$\n$$R_{AC} = \text{avg}_{i \neq j} | s_{i}^{T}s_{j}| / d = \frac{1}{K(K-1)} \sum_{i\neq j} | \sum_{l=1}^d s_{il}s_{jl}| / d.$$\n\nLet RRME denote the RME term proposed in the reviewer’s comment, and RME be this term in the paper. Similarly, let RRAC and RAC denote the RAC term in the comment and in the paper, respectively. The motivation of our regularizer is to encourage a large entropy of the activation vector on some particular layer of D, so that D would provide informative learning signals for G.\n\nThe activation vector s is a binary vector (each element is either +1 or -1), computed from a sign function. So $$s_{k,i}^2$$ will always be 1 in RRME, and this term will be ineffective. On the other hand, RME encourages $$\sum_{i=1}^{K}s_{k,i}$$ to be 0, i.e. zero mean for the k-th hidden unit. 
We would like to encourage this zero-mean property for the purpose of increasing its entropy (and thus $$a$$ is set to be positive). \n\nFor the second term, note that RAC takes the absolute value of the correlation of s_i and s_j (they are approximately zero-mean because of RME), which encourages s_i and s_j to be independent. Both positive correlation and negative correlation are penalized. Thus, again, RAC encourages a large entropy. On the other hand, penalizing RRAC is actually encouraging s_k and s_l to be negatively correlated. The minimal value of the inner product of s_k and s_l is -d. However, it is impossible for all the pairs (s_k, s_l) to simultaneously achieve this minimal value. It seems not easy to analyze when (s_1, …, s_K) would achieve its minimal value on RRAC, which makes it difficult to interpret its effect.\nOverall, we don’t see that the regularizer proposed in the review has a similar effect to the BRE regularizer proposed in the paper.\n\n“e) In the definition of R_AC, I denoted by <k,l> the pairs of nodes (k \ne l). However, I think it should be over pairs in the same layer. This is not clear in the paper.”\nYes, that is right. The summation is over all the pairs in the same layer. We have added a footnote to the definition of R_BRE to clarify this point in the new version of the paper. \n\n“f) It is supposed that L_1 regularization promotes the weights to be informative, and this work is doing something similar. How does L_1 regularization compare with the proposal?”\nOur method is to encourage D to have diverse activation patterns, so that G could have more informative signals for learning. The BRE regularizer is computed based on the pre-nonlinearity values of the hidden nodes, while L_1 regularization is applied to the weights of D. We don’t see the connection between L_1 regularization and the BRE regularizer at the moment. ", "We thank all reviewers for their detailed feedback and insights. Following remarks by the reviewers, we significantly revised the writing of the paper. The flow and the structure are mostly the same, but the language has changed significantly in most paragraphs, hopefully achieving better clarity. Illustrations are also added in Section 3 to help interpret the notation. We would also like to shorten the title to “Improving GAN Training via Binarized Representation Entropy Regularization” if this paper is accepted. \n\nThe experiment section still demonstrates the same claims, but now with more comprehensive comparisons and some more compelling new results. In particular, Table 1 shows improvements over both the baselines without the regularizer and WGAN-GP, on four different architectures. Figure 5 shows that for DCGAN, which is already well engineered to be stable, adding the regularizer makes the convergence to equilibrium significantly faster (shown by repeated runs with error bars). In the semi-supervised learning setting, we now report a systematic comparison from 10 random runs in Table 2, rather than curves from a single run using plots. Furthermore, we report semi-supervised classification results on SVHN, which were not included in the original draft. The new results on SVHN show that even in semi-supervised learning with feature matching, GAN training occasionally fails, depending on the random seeds. But the proposed regularizer dramatically reduces this failure rate. The CelebA experimental results are now moved to the Appendix. 
A different set of 2D synthetic experimental results is now shown in the main paper (the old ones have been moved to the Appendix), which conveys the main ideas slightly better. \nTaken together, we believe that the updated experimental results provide much stronger evidence for our claim: the proposed BRE regularizer improves exploration in the initial phase of GAN training, so that G can discover various modes/parts of the real data manifold more successfully (pictorially illustrated in Fig. 1); training is more stable because there are fewer large linear regions, in which D bluntly extrapolates and G makes large jumps as a result; final sample quality is better following more stable training and less mode dropping; semi-supervised classification accuracy is also improved. \n \nDue to writing problems in the original draft, we feel that the main idea of the paper was not laid out with sufficient clarity. This is now improved in the revised paper, and we would like to summarize it here again, briefly:\n\n- Our motivation is the following: when D spreads out its model capacity, the evenly dispersed partitioning of the input data space helps G to explore better and discover the real data manifold, while avoiding large unstable jumps due to erroneous extrapolation made by D.\n\n- When the classification task for D is too simple, the internal layers of D could have degenerate representations due to overfitting, whereby large portions of the input space are modelled as linear regions, as depicted in the illustration in Fig. 1 and shown by the synthetic experiment in Fig. 4 (control run without BRE), with more plots in Figs. 12-17 in the appendix. This could happen at the beginning of training, or even in the later stages for high-dimensional data.\n\n - With such degeneracy, learning signals from D are not diverse and G could under-explore the input space at the beginning, potentially missing out on distinct faraway modes. G might recover later, but it is more desirable to explore sufficiently at the beginning, both for a better chance of capturing all modes and for faster convergence.\n\n- Furthermore, the large linear regions in the degenerate representation would cause the learning of G to bluntly extrapolate, producing large jumps that could drop the current modes and/or lead to oscillations. This phenomenon can be observed in the experiments on synthetic data in Figure 4, and in Figs. 12-17 in the appendix.\n\n- If the model has already been well engineered, such as DCGAN, the improvement by BRE is less dramatic at the end of training (still yielding some improvement, as shown by the DCGAN results in Table 1 on page 8 of the revised paper). However, the speed of convergence to equilibrium with BRE regularization is significantly faster (Figure 5 of the revised paper), thanks to the improved exploration at the beginning.", "Thank you for your detailed feedback and suggestions.\n\nWe have made improvements to the presentation of the paper in its new version. In particular, Sec. 2 is rewritten to discuss the effects of the regularizer with better clarity. We also have more compelling and comprehensive experimental results supporting the intuition. Please see the summary of changes regarding the updates to the experiment section. \n \nTo address your specific questions/comments:\n 1. 
“ it's true that in Figure 1b the discriminator won't communicate the detailed structure of the data manifold to the generator, but it's not clear why this would be a problem -- […]” \n\n Ideally, when GAN training is stable, the min-max game eventually forces D to represent subtle variations in the real data distribution and passes the information to G. But when the internal representation of D is degenerate, two problems happen: 1) G under-explores, as all training signals from D could be co-linear if the fake points are in one linear region. It is unclear if G can always recover from collapsed mass caused by this. It is much more desirable for G to better explore the space from the beginning. And even if G could recover, the convergence is slowed down due to the need to correct initial mistakes. 2) large linear regions could cause the learning of G to bluntly extrapolate, resulting in large updates, which in turn drop already discovered real data modes and/or lead to oscillations. Both of these intuitions are captured in the updated 2D synthetic plots, as well as more detailed frames in Fig. 12 and 13. Furthermore, our updated results on the convergence speed for DCGAN confirm that improved initial exploration makes convergence faster even if the dynamics were stable already. \n\n\n2. “location of Nash equilibrium”\n\n The locations of the Nash equilibria would change. Since D is assigned a different reward objective, there is no reason to believe that D would still have the same values for the Nash equilibria. Annealing the coefficients of the regularizer may be able to maintain these locations (this depends on the uniform convergence property of the problem and on the annealing strategy, and is beyond the scope of this paper). Our preliminary studies for annealing the regularization coefficients produced marginally inferior Inception scores. We have not explored different annealing strategies yet in the experiments.\n\n3. “discussion and comparison to Wasserstein GAN”\n\n Thank you for the suggestion. We’ve added a discussion in Sec. 2.1 on WGAN-GP and other methods that regularize the gradient norm. We’ve also added a comprehensive comparison in the experiments (Table 1), showing that BRE outperforms WGAN-GP on all architectures tested.\n \n4. “auxiliary tasks”\n\n Indeed certain auxiliary tasks can regularize GAN training, for example, predicting image classes in a semi-supervised learning GAN. The BRE regularizer is compatible with semi-supervised GANs as well, and as shown in Sec. 4.3, can further improve the results. \n \n We tested out reconstructing real data as an auxiliary task to regularize D, and found that it consistently worsens results. A brief discussion is added in Sec. 5 (DISCUSSION AND FUTURE WORK), and results are shown in a table in the Appendix. We believe this is an interesting direction worth further experimentation and analysis in future work. There are a few other GAN works that use auto-encoders, such as Energy-Based GAN (Zhao et al., 2016) and Boundary Equilibrium GAN (Berthelot et al., 2017), or learning an inference network as the reviewer suggested (Donahue et al., 2016; Dumoulin et al., 2016). It is unclear if their benefits stem from the regularization effects or the fact that other parts of the GAN (such as the objective) are modified. We added a discussion about this in Sec. 2.1. 
\n", "Thank you for your insightful suggestions.\n\nWe have made improvements to the presentation of the paper according to the comments.\n\n“For the empirical study with 2D MoG, would an imbalanced mixture make it harder for the BRE-regularized GAN to escape from modal collapse?”\n\nThank you for the suggestion. We have added one more set of results for imbalanced mixture distributions in the appendix of the revised paper. We find that on imbalanced mixture distributions, BRE-regularized GAN can still discover the support of infrequent modes most of the time, however, sometimes the probability mass assigned to those modes is not correct (usually under represented). ", "Thank you for your comments. We have improved the explanation about the motivation of this regularizer, and the math presentation of its formal definition in the revised paper. We believe that there are a few misunderstandings of our method, and we will clarify them and address the reviewer’s questions below.\n\n**Concerning the use of softsign: **\n\n“(b) why the authors use the softsign instead the tanh: $tanh \\in C^2 $! Meanwhile the derivative id softsign is discontinuous.”\nUsing the softsign function to replace the sign function is to prevent null computation when $$h=0$$. Theoretically tanh with high temperature could also work. In the revised paper, we have also included the experimental results when using tanh, which shows decreased effectiveness in regularizing the GAN training comparing to softsign. We believe the reason why our version softsign is better empirically than tanh when used in BRE is due to the scale-invariance achieved by the adaptive \\epsilon. This is discussed at the end of the first paragraph in Sec. 3.3. \nAlso, the derivative of the softsign function is continuous (although its 2nd order derivative is not continuous at a single point, but does not really matter for SGD). \n\n“c) It is not clear is the softsign is used besides the activation function: In page 5 is said “R_BRE can be applied on ant rectified layer before the nolinearity” . This seems tt the authors propose to add a second activation function (the softsign), why not use the one is in teh layer?”\nNo, we compute the value of R_BRE from the immediate pre-nonlinearity layer, and add this value to the objective function. The nonlinearity of the networks are not changed. We have added a figure (Figure 2) in the revised paper to clarify this.\n\n“d) The authors found hard to regularize the gradient $\\nabla_x D(x)$, even they tray tanh and cosine based activations. It seems that effectively, the introduce their additional softsign in the process.”\nRegularizing the diversity of $\\nabla_x D(x)$ is a straightforward naive approach if we want rich diverse training signal for G. However, this does not work for rectifier nets, for reason analysed in Sec. 5 (Discussion), and hence one of the significance of our contribution. The use of softsign is unrelated to this issue. " ]
[ 6, 7, 4, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkLhaGZRW", "iclr_2018_BkLhaGZRW", "iclr_2018_BkLhaGZRW", "Sk8_EdpXz", "iclr_2018_BkLhaGZRW", "B1ssgfcgM", "SJa7Mu9lf", "By5aywgZf" ]
iclr_2018_r1NYjfbR-
Generative networks as inverse problems with Scattering transforms
Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not need to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties to GANs or VAEs, without learning a discriminative network or an encoder.
accepted-poster-papers
The paper got mixed scores of 4 (R1), 6 (R3), 8 (R2). R1 initially gave up after a few pages of reading, due to clarity problems, but on looking over the revised version was much happier, so raised their score to 7. R2, who is knowledgeable about the area, was very positive about the paper, feeling it is a very interesting idea. R3 was also cautiously positive. The authors have absorbed the comments by the reviewers to make significant changes to the paper. The AC feels the idea is interesting, even if the experimental results aren't that compelling, so feels the paper can be accepted.
train
[ "H1WORsdlG", "SkJxZ1FeG", "H1QWqHsgz", "Hy83g4Gmz", "S1d6aXzmM", "ryTfQNMXM", "SkP1GEGmM", "SkG-ZEzQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "After a first manuscript that needed majors edits, the revised version\noffers an interesting GAN approach based the scattering transform.\n\nApproach is well motivated with proper references to the recent literature.\n\nExperiments are not state of the art but clearly demonstrate that the\nproposed approach does provide meaningful results.", "The authors introduce scattering transforms as image generative models in the context of Generative Adversarial Networks and suggest why they could be seen as Gaussianization transforms with controlled information loss and invertibility.\nWriting is suggestive and experimental results are interesting, so I clearly recommend acceptation. \n\nI would appreciate more intuition on some claims (e.g. relation between Lipschitz continuity and wavelets) but they refer to the appropriate reference to Mallat, so this is not a major problem for the interested reader.\n\nHowever, related to the above non-intuitive claim, here is a question on a related Gaussianization transform missed by the authors that (I feel) fulfils the conditions defined in the paper but it is not obviously related to wavelets. Authors cite Chen & Gopinath (2000) and critizise that their approach suffers from the curse of dimensionality because of the ICA stage. However, other people [Laparra et al. Iterative Gaussianization from ICA to random rotations IEEE Trans.Neural Nets 2011] proved that the ICA stage is not required (but only marginal operations followed by even random rotations). That transform seems to be Lipschitz continuous as well -since it is smooth and derivable-. In fact it has been also used for image synthesis. However, it is not obviously related to wavelets... Any comment?\n\nAnother relation to previous literature: in the end, the proposed analysis (or Gaussianization) transform is basically a wavelet transform where the different scale filters are applied in a cascade (fig 1). This is similar to Gaussian Scale Mixture models for texture analysis [Portilla & Simoncelli Int. J. Comp. Vis. 2000] in which after wavelet transform, local division is performed to obtain Gaussian variables, and these can be used to synthesize the learned textures. That is similar to Divisive Normalization models of visual neuroscience that perform similar normalization alfter wavelets to factorize the PDF (e.g. [Lyu&Simoncelli Radial Gaussianization Neur.Comput. 2009], or [Malo et al. Neur.Comput. 2010]).\n\nMinor notation issues: authors use a notation for functions that seems confusing (to me) since it looks like linear products. For instance: GZ for G(Z) [1st page] and phiX for phi(X) [2nd page] Sx for S(x) [in page 5]... \n", "\nThe paper proposes a generative model for images that does no require to learn a discriminator (as in GAN’s) or learned embedding. The proposed generator is obtained by learning an inverse operator for a scattering transform.\n\nThe paper is well written and clear. The main contribution of the work is to show that one can design an embedding with some desirable properties and recover, to a good degree, most of the interesting aspects of generative models. However, the model doesn’t seem to be able to produce high quality samples. In my view, having a learned pseudo-inverse for scattering coefficients is interesting on its own right. The authors should show more clearly the generalization capabilities to test samples. 
Is the network able to invert images that follow the training distribution but are not in the training set?\n\nAs the authors point out, the representation is non-invertible. It seems that using an L2 loss in pixel space for training the generator would necessarily lead to blurred reconstructions (and samples) (as it produces a point estimate). That is, unless the generator overfits the training data, in which case it would not generalize. The reason is that many images would lie in the level set for a given feature vector, and the generator cannot deterministically disambiguate which one to match. \n\nThe sampling method described in Section 3.2 does not suffer from this problem, although as the authors point out, a good initialization is required. Would it make sense to combine the two? Use the generator network to produce a good initial condition and then refine it with the iterative procedure.\n\nThis property is exploited in the conditional generation setting in:\n\nBruna, J. et al \"Super-resolution with deep convolutional sufficient statistics.\" arXiv preprint arXiv:1511.05666 (2015).\n\nThe samples produced by the model are of poorer quality than those obtained with GANs. Clearly the model is assigning mass to regions of the space where there are no valid images (a similar effect to that suffered by models trained with MLE). Could you please comment on this point?\n\nThe title is a bit misleading in my view. “Analyzing GANs” suggests analyzing the model in general, that is, its architecture and training method (e.g. loss functions etc). However the analysis concentrates on the structure of the generator and the particular case of inverting scattering coefficients.\n\nHowever, I do find the analysis provided in Section 3.2 very interesting. The idea of using meaningful intermediate (and stable) targets for the first two layers seems like a very good idea. Are there any practical differences in terms of quality of the results? This might show in more complex datasets.\n\nCould you please provide details on the dimensionality of the scattering representation at different scales? Say, how many coefficients are in S_5?\n\nIn Figure 3, it would be good to show some interpolation results for test images as well, to have a visual reference.\n\nThe authors mention that considering the network as a memory storage would allow one to better recover known faces from unknown faces. It seems that it would be known from unknown images. Meaning, it is not clear why this method would generalize to novel images from the same individuals. Also, the memory would be quite rigid, as adding a new image would require adapting the generator.\n\nOther minor points:\n\nLast paragraph of page 1, “Th inverse \\Phi…” is missing the ‘e’.\n\nSome references (to figures or citations) seem to be missing, e.g. at the end of page 4, at the beginning of page 5, before equation (6). \n\nAlso, some citations should be corrected, for instance, at the end of the first paragraph of Section 3.1: \n\n“… wavelet filters Malat (2016).” \n\nShould be:\n\n“... wavelet filters (Malat, 2016).” \n\nFirst paragraph of Section 3.3. The word generator is repeated.\n", "Remark: Please do not miss the thank you comment (above) for all the reviews.\n\nQuestion: \"In my view, having a learned pseudo-inverse for scattering coefficients is interesting in its own right. The authors should show more clearly the generalization capabilities to test samples. 
Is the network able to invert images that follow the training distribution but are not in the training set?\"\n\nWe have reorganized the paper so that this very important generalization point appears immediately, in Section 2.1 of the new version. We have thus swapped the order of two sections. Figure 4 shows that the network is indeed able to generalize in the sense that it can invert test images that follow the distribution but are not in the training set. The precision of this recovery depends upon the image complexity, which is quantified by Table 1.\n\nQuestion: \"As the authors point out, the representation is non-invertible. It seems that using an L2 loss in pixel space for training the generator would necessarily lead to blurred reconstructions (and samples) (as it produces a point estimate). That is, unless the generator overfits the training data, in which case it would not generalize. The reason is that many images would lie in the level set for a given feature vector, and the generator cannot deterministically disambiguate which one to match.\"\n\nThe blur is in fact not due to the non-invertibility of the embedding. We checked this by using an invertible embedding, by reducing the scattering scale $2^J$ from $2^5$ in the original paper to $2^4$ in this version. Figure 2(a,b) shows that this embedding operator is nearly invertible. However, as we now further emphasize in Section 2.1, the convolutional network does not exactly invert the embedding operator (here the scattering). It performs a regularized inversion on the training images. It is the regularization induced by the convolutional network structure which allows the generator to build random models of complex images. Figure 2(c,d) shows that it recovers much better images than what would have been obtained by inverting the scattering embedding operator; this is also now explained in Section 2.1.\n\nThe regularized inversion is based on some form of memorization of information in the training samples, which does not seem to be sufficient for complex images. The blur is indeed smaller for images of polygons. It may also be that the blur is partly due to instabilities in the optimization. We now explain this point in the experiments section.\n\nQuestion: \"The sampling method described in Section 3.2 does not suffer from this problem, although as the authors point out, a good initialization is required. Would it make sense to combine the two? Use the generator network to produce a good initial condition and then refine it with the iterative procedure. This property is exploited in the conditional generation setting in: Bruna, J. et al \"Super-resolution with deep convolutional sufficient statistics.\" arXiv preprint arXiv:1511.05666 (2015).\"\n\nAs previously mentioned, the goal is not to invert the scattering transform because it does not build good image models, as shown by Figure 2. If we incorporate iterations from the inverse scattering transform, it degrades the model because the convolutional network generator becomes an inverse scattering transform. \n\nQuestion: \"The samples produced by the model are of poorer quality than those obtained with GANs. Clearly the model is assigning mass to regions of the space where there are no valid images (a similar effect to that suffered by models trained with MLE). Could you please comment on this point?\"\n\nWe have reduced this effect by choosing a smaller maximum scattering scale $2^J$ with $J = 4$ as opposed to $J = 5$. 
GANs suffer from a diversity issue which means that they sample a limited part of the space. As the reviewer says, it seems that we do the opposite: the model assigns mass to regions where the images are not valid. We conjecture that there is a trade-off between image quality and diversity; this trade-off comes from the limited memory capacity of the network. However, understanding the generalization and memory capabilities of these models remains an open question.", "We would like to thank the reviewers very much for helping us understand the weak points in the paper's writing. As a consequence of these remarks, we have changed many explanations and some elements of the organization of the paper, which hopefully will make things clearer. We apologize for the heavy modifications made to the paper, but we felt that given the positive and important feedback of the reviewers, \nwe should improve the paper presentation, at the cost of some reorganization. In the following, we answer each reviewer's remarks and relate them to the paper modifications.", "Remark: Please do not miss the thank you comment (above) for all the reviews.\n\nWe rewrote large parts of the paper to make it as clear as possible. Hopefully, as expressed in the thank you comment, the changes we made will make things clearer.", "Remark: Please do not miss the thank you comment (above) for all the reviews.\n\nQuestion: \"I would appreciate more intuition on some claims (e.g. relation between Lipschitz continuity and wavelets) but they refer to the appropriate reference to Mallat, so this is not a major problem for the interested reader.\"\n\nWe have now included the definition of Lipschitz continuity to deformations in order to specify more clearly what it means, and we give a short, intuitive explanation of what is required at the beginning of section 3.1. Going beyond would be too long, so we referred to the paper of Mallat (2012).\n\nQuestion: \"However, related to the above non-intuitive claim, here is a question on a related Gaussianization transform missed by the authors that (I feel) fulfils the conditions defined in the paper but it is not obviously related to wavelets. Authors cite Chen \\& Gopinath (2000) and criticize that their approach suffers from the curse of dimensionality because of the ICA stage. However, other people [Laparra et al. Iterative Gaussianization from ICA to random rotations IEEE Trans.Neural Nets 2011] proved that the ICA stage is not required (but only marginal operations followed by even random rotations). That transform seems to be Lipschitz continuous as well -since it is smooth and differentiable-. In fact it has also been used for image synthesis. However, it is not obviously related to wavelets... Any comment?\"\n\nThese transforms are Lipschitz in the sense that a small additive modification of the input yields a small modification of the output. The Lipschitz continuity to deformations means that a small modification in the form of a dilation yields a small modification of the Euclidean norm of the resulting vector. However, a small dilation can induce a large displacement of high frequencies. To avoid this, it is necessary to separate the frequencies into different packets, which is done by wavelets, and map them back to lower frequencies, which is done by the modulus and averaging (a rectifier could replace the modulus). 
We now explain these points at the beginning of section 3.1.\n\nFor this reason, it is very unlikely that an Iterative Gaussianization produces an operator that is stable to deformations unless it takes this issue into account, but in [Laparra et al. IEEE Trans.Neural Nets 2011] there is no mention of this fact. While it is true that they also synthesize images, they do not show results on the nature of the interpolations between them. \n\nQuestion: \"Another relation to previous literature: in the end, the proposed analysis (or Gaussianization) transform is basically a wavelet transform where the different scale filters are applied in a cascade (fig 1). This is similar to Gaussian Scale Mixture models for texture analysis [Portilla \\& Simoncelli Int. J. Comp. Vis. 2000] in which after wavelet transform, local division is performed to obtain Gaussian variables, and these can be used to synthesize the learned textures. That is similar to Divisive Normalization models of visual neuroscience that perform similar normalization after wavelets to factorize the PDF (e.g. [Lyu\\&Simoncelli Radial Gaussianization Neur.Comput. 2009], or [Malo et al. Neur.Comput. 2010]).\"\n\nYes, there are similarities between Portilla and Simoncelli representations and scattering representations, and we have included them in the references. Portilla and Simoncelli also use the modulus of wavelet coefficients. At the second order, they use a covariance operator; this may create a problem because covariance operators are not entirely stable to deformations (this is better explained in the paper of Mallat (2012)). One may indeed define embedding operators, different from the scattering transform, which could lead to as good or maybe better results. We now emphasize this critical point in the introduction and at the beginning of Section 3.1 and refer to all these papers.\n\nQuestion: \"Minor notation issues: authors use a notation for functions that seems confusing (to me) since it looks like linear products. For instance: GZ for G(Z) [1st page] and phiX for phi(X) [2nd page] Sx for S(x) [on page 5]...\"\n\nThis was modified.", "Question: \"The title is a bit misleading in my view. “Analyzing GANs” suggests analyzing the model in general, that is, its architecture and training method (e.g. loss functions etc). However the analysis concentrates on the structure of the generator and the particular case of inverting scattering coefficients.\"\n\nWe fully agree with this point. We thus propose to change the title to \"Generative networks as inverse problems with scattering transforms\" to emphasize our inverse problem approach, which is indeed different from GANs. This is the title of the new version. \n\nQuestion: \"However, I do find the analysis provided in Section 3.2 very interesting. The idea of using meaningful intermediate (and stable) targets for the first two layers seems like a very good idea. Are there any practical differences in terms of quality of the results? This might show in more complex datasets.\"\n\nThere were slight numerical differences but no visual difference in the quality of the training and test images. To address the previous point of the reviewer, in this new version, we reduced the scattering scale from $2^5$ to $2^4$, which also improved image quality. We thus do not distinguish the invertible from the non-invertible range of the scattering, which also simplifies explanations. 
We are thus now using a single global architecture which is the same as the DCGAN generator (Radford et al. 2016). We are currently further exploring the idea of using meaningful intermediate targets along the generative network. However, these results would extend the paper considerably; therefore, they will be better explained in future work.\n\nQuestion: \"Could you please provide details on the dimensionality of the scattering representation at different scales? Say, how many coefficients are in $S_5$?\"\n\nWe gave the ratio $\\alpha_j$ between the number of image coefficients and the size of each layer $S_j$. For $j = 5$, $\\alpha_j = 0.66$, which means that $S_5$ has about half as many coefficients as the number of pixels in the image, which is $64^2$. If $j = 4$, then $\\alpha_j = 1.63$, and $S_4(x)$ thus has more coefficients than the image $x$, which explains why it is invertible. We now give these numbers.\n\nQuestion: \"In Figure 3, it would be good to show some interpolation results for test images as well, to have a visual reference.\"\n\nThis is done in Figure 6 in the new version.\n\nQuestion: \"The authors mention that considering the network as a memory storage would allow one to better recover known faces from unknown faces. It seems that it would be known from unknown images. Meaning, it is not clear why this method would generalize to novel images from the same individuals. Also, the memory would be quite rigid, as adding a new image would require adapting the generator.\"\n\nThe network has some form of memory since it recovers high dimensional images from lower dimensional input vectors. It can be considered as an associative memory in the sense that it is content addressable. From an image $x$ sampled from $X$, we can compute an address $z = \\Sigma_d^{-1/2} (S_J (x) - \\mu)$ from which we can reconstruct an approximation of $x$. This is what is usually called an associative memory (e.g., Hopfield networks). Indeed, it clearly depends upon the generalization capabilities of the network.\n\nThe memory is indeed rigid in the sense that it requires modifying all coefficients to add a single image, but this is the case for any distributed associative memory such as Hopfield networks. We agree that the ability to add a new face easily in the network is key to having an effective memory.\n\nWe have addressed all the minor points raised by the reviewer." ]
[ 7, 8, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1NYjfbR-", "iclr_2018_r1NYjfbR-", "iclr_2018_r1NYjfbR-", "H1QWqHsgz", "iclr_2018_r1NYjfbR-", "H1WORsdlG", "SkJxZ1FeG", "Hy83g4Gmz" ]
iclr_2018_BJGWO9k0Z
Critical Percolation as a Framework to Analyze the Training of Deep Networks
In this paper we approach two relevant deep learning topics: i) the tackling of graph-structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to 128 layers.
accepted-poster-papers
The paper got generally positive scores of 6, 7, 7. The reviewers found the paper to be novel but hard to understand. The AC feels the paper should be accepted but the authors should revise their paper to take into account the comments from the reviewers to improve clarity.
val
[ "HJ1MEAYxG", "HJraEkqlz", "BkojC46xG", "r13AbIaWG", "Byo7BITbM", "HyZXN8abM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors are motivated by two problems: Inputting non-Euclidean data (such as graphs) into deep CNNs, and analyzing optimization properties of deep networks. In particular, they look at the problem of maze testing, where, given a grid of black and white pixels, the goal is to answer whether there is a path from a designated starting point to an ending point. \n\nThey choose to analyze mazes because they have many nice statistical properties from percolation theory. For one, the problem is solvable with breadth first search in O(L^2) time, for an L x L maze. They show that a CNN can essentially encode a BFS, so theoretically a CNN should be able to solve the problem. Their architecture is a deep feedforward network where each layer takes as input two images: one corresponding to the original maze (a skip connection), and the output of the previous layer. Layers alternate between convolutional and sigmoidal. The authors discuss how this architecture can solve the problem exactly. The pictorial explanation for how the CNN can mimic BFS is interesting but I got a little lost in the 3 cases on page 4. For example, what is r? And what is the relation of the black/white and orange squares? I thought this could use a little more clarity. \n\nThough experiments, they show that there are two kinds of minima, depending on whether we allow negative initializations in the convolution kernels. When positive initializations are enforced, the network can more or less mimic the BFS behavior, but never when initializations can be negative. They offer a rigorous analysis into the behavior of optimization in each of these cases, concluding that there is an essential singularity in the cost function around the exact solution, yet learning succumbs to poor optima due to poor initial predictions in training. \n\nI thought this was an impressive paper that looked at theoretical properties of CNNs. The problem was very well-motivated, and the analysis was sharp and offered interesting insights into the problem of maze solving. What I thought was especially interesting is how their analysis can be extended to other graph problems; while their analysis was specific to the problem of maze solving, they offer an approach -- e.g. that of finding \"bugs\" when dealing with graph objects -- that can extend to other problems. I would be excited to see similar analysis of other toy problems involving graphs.\n\nOne complaint I had was inconsistent clarity: while a lot was well-motivated and straightforward to understand, I got lost in some of the details (as an example, the figure on page 4 did not initially make much sense to me). Also, in the experiments, the authors mention multiple attempt with the same settings -- are these experiments differentiated only by their initialization? Finally, there were various typos throughout (one example is \"neglect minimua\" on page 2 should be \"neglect minima\").\n\nPros: Rigorous analysis, well motivated problem, generalizable results to deep learning theory\nCons: Clarity ", "This paper thoroughly analyzes an algorithmic task (determining if two points in a maze are connected, which requires BFS to solve) by constructing an explicit ConvNet solution and analytically deriving properties of the loss surface around this analytical solution. 
They show that their analytical solution implements a form of BFS algorithm, characterize the probability of introducing \"bugs\" in the algorithm as the weights move away from the optimal solution, and show how this influences the error surface for different depths. This analysis is conducted by drawing on results from the field of critical percolation in physics.\n\nOverall, I think this is a good paper and its core contribution is definitely valuable: it provides a novel analysis of an algorithmic task which sheds light on how and when the network fails to learn the algorithm, and in particular the role which initialization plays. The analysis is very thorough and the methods described may find use in analyzing other tasks. In particular, this could be a first step towards better understanding the optimization landscape of memory-augmented neural networks (Memory Networks, Neural Turing Machines, etc) which try to learn reasoning tasks or algorithms. It is well-known that these are sensitive to initialization and often require running the optimizer with multiple random seeds and picking the best one. This work actually explains the role of initialization for learning BFS and how certain types of initialization lead to poor solutions. I am curious if a similar analysis could be applied to methods evaluated on the bAbI question-answering tasks (which can be represented as graphs, like the maze task) and possibly yield better initialization or optimization schemes that would remove the need for multiple random seeds. \n\nWith that being said, there is some work that needs to be done to make the paper clearer. In particular, many parts are quite technical and may not be accessible to a broader machine learning audience. It would be good if the authors spent more time developing intuition (through visualization for example) and moved some of the more technical proofs to the appendix. Specifically:\n- I think Figure 3 in the appendix should be moved to the main text, to help understand the behavior of the analytical solution. \n- Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere; it could be in the Appendix.\n- Section 6: there is lots of math here, but the main results don't obviously stand out. I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof), so that the casual reader can quickly see what the main results are. Interested readers can then work through the math if they want to. Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful. \n\nAlso, although I found their work to be interesting after finishing the paper, I was initially confused by how the authors frame their work and where the paper was heading. They claim their contribution is in the analysis of loss surfaces (true) and neural nets applied to graph-structured inputs. This second part was confusing - although the maze can be viewed as a graph, many other works apply ConvNets to maze environments [1, 2, 3], and their work has little relation to other work on graph CNNs. Here the assumptions of locality and stationarity underlying CNNs are sensible and I don't think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary. 
However, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms (such as the Neural Turing Machine and variants) or reasoning tasks more generally (for example, memory-augmented networks applied to the bAbI tasks). \n\nThere are lots of small typos, please fix them. Here are a few:\n- \"For L=16, batch size of 20, ...\": not a complete sentence. \n- Right before 6.1.1: \"when the these such\" -> \"when such\"\n- Top of page 8: \"it also have a\" -> \"it also has a\", \"when encountering larger dataset\" -> \"...datasets\"\n- First sentence of 6.2: \"we turn to the discuss a second\" -> \"we turn to the discussion of a second\"\n- etc. \n\nQuality: High\nClarity: medium-low\nOriginality: high\nSignificance: medium-high\n\nReferences:\n[1] https://arxiv.org/pdf/1602.02867.pdf\n[2] https://arxiv.org/pdf/1612.08810.pdf\n[3] https://arxiv.org/pdf/1707.03497.pdf", "The paper studies a toy problem: a random binary image is generated, and treated as a maze (1=wall, 0=freely moveable space). A random starting point is generated. The task is to learn whether the center pixel is reachable from the starting point.\n\nA deep architecture is proposed to solve the problem: see fig 1. A conv net on the image is combined with that on a state image, the state being interpreted as reachable pixels. This can work if each layer expands the reachable region (the state) by one pixel if the pixel is not blocked.\n\nTwo local minima are observed: 1) the network ignores structure and guesses if the task is solvable by aggregate statistics; 2) it works as described above but propagates the reachable region on a checkerboard only.\n\nThe paper is chiefly concerned with analysing these local minima by expanding the cost function about them. This analysis is hard to follow for non experts in graph theory. This is partly because many non-trivial results are mentioned with little or no explanation.\n\nThe paper is hard to evaluate. The actual setup seems somewhat arbitrary, but the method of analysing the failure modes is interesting. It may inspire more useful research in the future.\n\nIf we trust the authors, then the paper seems good because it is fairly unusual. But it is hard to determine whether the analysis is correct.", "AUTHORS: We would first like to thank the reviewer for their valuable comments and feedback. We were glad to hear that all reviewers agreed that our paper has novel and positive points. We strived to follow the reviewers’ comments to improve it, especially by increasing its clarity. The changes can be viewed in the revised version already submitted. Here we address all the concerns raised by the reviewers, pointing out the corresponding modifications made in the text (all references to sections, figures and equations are w.r.t. their numbers in the revised version of the paper):\n\nMAIN COMMENTS: \n1) This analysis is hard to follow for non experts in graph theory. This is partly because many non-trivial results are mentioned with little or no explanation. \n\nAUTHORS: Following the referee’s comment, we improved the text along the following three main lines:\n1. Further numerical results, in the form of snapshots of activation levels and movies tracking the development of activation levels as a function of the layers, were added to the work. This provides direct evidence supporting our analytical claims.\n2. The key results of our analytical analysis carried out in Section 6 
have been brought forward so that the main conclusions of the analytical derivation are clearer. \n3. The numerical verification of our key results has been brought from the appendix into the main text (Fig. 5). \nWe hope that this will increase the referee’s confidence in our claims. \n\n2) The paper is hard to evaluate. The actual setup seems somewhat arbitrary, but the method of analysing the failure modes is interesting. \n\nAUTHORS: The reason we have chosen the proposed toy model is two-fold: i) it allows the modelling of data structured as planar graphs, an important and active research topic in the field; \nii) it allows the use of various tools from theoretical physics to derive non-trivial analytical results from the data. We indeed believe that our analysis of failure modes will be relevant for other problems as well. \n\n3) If we trust the authors, then the paper seems good because it is fairly unusual. But it is hard to determine whether the analysis is correct. \n\nAUTHORS: To better support the correctness of the presented analysis, we have enhanced the main text with more experimental results and illustrations in the form of figures and movies. Some of them were previously placed in the Appendix (Fig. 5), while others were added to the text (Fig. 3 and 4). We also mention that in Fig. 5 our theoretical predictions regarding the error landscape are shown to compare well with numerical experiments. ", "AUTHORS: We would first like to thank the reviewer for their valuable comments and feedback. We were glad to hear that all reviewers agreed that our paper has novel and positive points. We strived to follow the reviewers’ comments to improve it, especially by increasing its clarity. The changes can be viewed in the revised version already submitted. Here we address all the concerns raised by the reviewers, pointing out the corresponding modifications made in the text (all references to sections, figures and equations are w.r.t. their numbers in the revised version of the paper):\n\nMAIN COMMENTS:\n\n1) The pictorial explanation for how the CNN can mimic BFS is interesting but I got a little lost in the 3 cases on page 4. For example, what is r? And what is the relation of the black/white and orange squares? I thought this could use a little more clarity.\n\nAUTHORS: First we'd like to comment that the revised version contains many new illustrations, numerical results, and movies which are meant to improve the clarity of our presentation. More specifically, regarding the above comment, we have improved the explanation in Section 4, correcting nomenclature (referring to x instead of r to denote a position in the maze or in the hot-spot images) and improved Figure 2, where the 3 cases in question are illustrated. Already in Figure 1 we added the explanation: “A maze-testing sample consists of a maze (I) and an initial hot-spot image (H0). 
The proposed architecture processes H0 by generating a series of hot-spot images (Hi>0) which are of the same dimension as H0; however, their pixels are not binary but rather take on values between 0 (Off, pale-orange) and 1 (On, red).” We have added a similar explanation in the legend of Figure 2.\n\n2) One complaint I had was inconsistent clarity: while a lot was well-motivated and straightforward to understand, I got lost in some of the details (as an example, the figure on page 4 did not initially make much sense to me).\n\nAUTHORS: As mentioned previously, we have improved the clarity of Section 4, along with other parts of the text.\n\n3) Also, in the experiments, the authors mention multiple attempts with the same settings -- are these experiments differentiated only by their initialization?\n\nAUTHORS: Yes. This has now been made clearer in the text. \n\n4) Finally, there were various typos throughout (one example is \"neglect minimua\" on page 2 should be \"neglect minima\").\n\nAUTHORS: We have corrected these and additional typos in the text.\n\n \nMAIN SUGGESTIONS: \n\nI would be excited to see similar analysis of other toy problems involving graphs. \n\nAUTHORS: We appreciate this suggestion, which was added as possible future work to our conclusion section.", "AUTHORS: We would first like to thank the reviewer for their valuable comments and feedback. We were glad to hear that all reviewers agreed that our paper has novel and positive points. We strived to follow the reviewers’ comments to improve it, especially by increasing its clarity. The changes can be viewed in the revised version already submitted. Here we address all the concerns raised by the reviewers, pointing out the corresponding modifications made in the text (all references to sections, figures and equations are w.r.t. their numbers in the revised version of the paper):\n\nMAIN COMMENTS: \n \n1) In particular, many parts are quite technical and may not be accessible to a broader machine learning audience. It would be good if the authors spent more time developing intuition.... Specifically: \n\n- I think Figure 3 in the appendix should be moved to the main text, to help understand the behaviour of the analytical solution.\n\nAUTHORS: We have merged former Figure 3 (Appendix) with former Figure 1 (main text), in what is now the current Figure 1 (main text). We believe this figure now better illustrates the whole architecture as well as how each sample of the toy dataset is composed. In the legend of the current Figure 1 we have added a lot more details about the dataset, the architecture and the breadth-first search optimum.\n\n- Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere; it could be in the Appendix.\n\nAUTHORS: We have added Figure 3, in which we included a visualization of the checkerboard BFS pattern. Additionally, we have included Figure 4, which illustrates the occurrence of bugs in the process. We also have included links to videos showing the layer-by-layer activation levels for these two phenomena. \n\n- Section 6: there is lots of math here, but the main results don't obviously stand out. I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof)..... Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful.\n\nAUTHORS: As suggested, we have made Section 6 more concise, highlighting and simplifying former equations 2, 4 and 5 (current equations 1, 3 and 4). 
These key results were moved to the beginning of their subsections and are then followed by their derivations. We hope this simplifies the reading as required. Regarding visualization of the loss function, this now appears in Fig. 5, which plots the logarithm of the error near the optimal BFS solution. \n\n2) I was initially confused by how the authors frame their work and where the paper was heading. They claim their contribution is in the analysis of loss surfaces (true) and neural nets applied to graph-structured inputs. \nThis second part was confusing - although the maze can be viewed as a graph, many other works apply ConvNets ...\nHere the assumptions of locality and stationarity underlying CNNs are sensible and I don't think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary. \nHowever, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms ...\n\nAUTHORS: We appreciated the above comments from R3. We included the cited references, from the realm of reinforcement learning and planning, into our introduction in the context of related work (fourth paragraph). We also simplified the first paragraph in Section 3, removing unnecessary justifications.\nConcerning the modelling of mazes as graphs, although we acknowledge that [1,2,3] do not directly or strongly relate mazes to graphs, this approach is commonly adopted in graph theory and applications. We also referred to additional works in the introduction to support our claim (fourth paragraph). Moreover, in doing so, we believe we were capable of: i) establishing a correspondence between our network and the well-known BFS algorithm; ii) pointing out real problems in which planar graphs can be used to model the data; iii) setting up the introduced analysis as the basis for an eventual extension of it to general graphs. \nFinally, we have also added references and explanations based on the insightful comment about the relation between our framework and “memory” networks. These points were added in the penultimate paragraph of the introduction.\n\n3) There are lots of small typos, please fix them....\n\nAUTHORS: We have corrected all the aforementioned typos and some additional ones.\n\nMAIN SUGGESTIONS: \n\nI am curious if a similar analysis could be applied to methods evaluated on the bAbI question-answering tasks (which can be represented as graphs, like the maze task) and possibly yield better initialization or optimization schemes that would remove the need for multiple random seeds. \n\nAUTHORS: We appreciate this suggestion, which was added as possible future work to our conclusion section.\n" ]
[ 7, 7, 6, -1, -1, -1 ]
[ 3, 3, 1, -1, -1, -1 ]
[ "iclr_2018_BJGWO9k0Z", "iclr_2018_BJGWO9k0Z", "iclr_2018_BJGWO9k0Z", "BkojC46xG", "HJ1MEAYxG", "HJraEkqlz" ]
iclr_2018_HkNGsseC-
On the Expressive Power of Overlapping Architectures of Deep Learning
Expressive efficiency refers to the relation between two architectures A and B, whereby any function realized by B could be replicated by A, but there exist functions realized by A which cannot be replicated by B unless its size grows significantly larger. For example, it is known that deep networks are exponentially efficient with respect to shallow networks, in the sense that a shallow network must grow exponentially large in order to approximate the functions represented by a deep network of polynomial size. In this work, we extend the study of expressive efficiency to the attribute of network connectivity and in particular to the effect of "overlaps" in the convolutional process, i.e., when the stride of the convolution is smaller than its filter size (receptive field). To theoretically analyze this aspect of a network's design, we focus on a well-established surrogate for ConvNets called Convolutional Arithmetic Circuits (ConvACs), and then demonstrate empirically that our results hold for standard ConvNets as well. Specifically, our analysis shows that having overlapping local receptive fields, and more broadly denser connectivity, results in an exponential increase in the expressive capacity of neural networks. Moreover, while denser connectivity can increase the expressive capacity, we show that the most common types of modern architectures already exhibit an exponential increase in expressivity, without relying on fully-connected layers.
accepted-poster-papers
The paper received scores of 8 (R1), 6 (R2), 6 (R3). R1's review is brief, and is also optimistic that these results demonstrated on ConvACs generalize to real convnets. R2 and R3 feel this might be a potential problem. R2 advocates weak accept, and given that R1 is keen on the paper, the AC feels it can be accepted.
train
[ "BJZ4zdslf", "HkopHcseG", "BypvOtGZz", "HyxMluZfz", "S1rPmFqZM", "SyfVBDMbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper studies the expressive power provided by \"overlap\" in convolution layers of DNNs. Instead of ReLU networks with average/max pooling (as is standard in practice), the authors consider linear activations with product pooling. Such networks, which have been known as convolutional arithmetic circuits, are easier to analyze (due to their connection to tensor decomposition), and provide insight into standard DNNs.\n\nFor these networks, the authors show that overlap results in the overall function having a significantly higher rank (exponentially larger) than a function obtained from a network with non-overlapping convolutions (where the stride >= filter width). The key part of the proof is showing a lower bound on the rank for networks with overlap. They do so by an argument well-known in this space: showing a lower bound for some particular tensor, and then inferring the bound for a \"generic\" tensor.\n\nThe results are interesting overall, but the paper has many caveats:\n1. the results are only for ConvACs, which are arguably quite different from ReLU networks (the non-linearity in successive non-pooling layers could be important).\n2. it's not clear if the importance of overlap is too surprising (or is a pressing question to understand, as in the case of depth).\n3. the rank of the tensor being high does not preclude approximation (to a very good accuracy) by tensors of much smaller rank.\n\nThat said, the results could be of interest to those thinking about minimizing the number of connections in ConvNets, as it gives some intuition about how much overlap might 'suffice'. \n\nI recommend weak accept.", "The paper analyzes the expressivity of convolutional arithmetic circuits (ConvACs), where neighboring neurons in a single layer have overlapping receptive fields. To compare the expressivity of overlapping networks with non-overlapping networks, the paper employs grid tensors computed from the output of the ConvACs. The grid tensors are matricized and the ranks of the resultant matrices are compared. The paper obtains a lower bound on the rank of the resultant grid tensors, and uses them to show that an exponentially large number of non-overlapping ConvACs are required to approximate the grid tensor of an overlapping ConvACs. Assuming that the result carries over to ConvNets, I find this result to be very interesting. While overlapped convolutional layers are almost universally used, there has been very little theoretical justification for the same. This paper shows that overlapped ConvACs are exponentially more powerful than their non-overlapping counterparts. ", "The paper studies convolutional neural networks where the stride is smaller than the convolutional filter size; the so called overlapping convolutional architectures. The main object of study is to quantify the benefits of overlap in convolutional architectures.\n\nThe main claim of the paper is Theorem 1, which is that overlapping convolutional architectures are efficient with respect to non-overlapping architectures, i.e., there exists functions in the overlapping architecture which require an exponential increase in size to be represented in the non-overlapping architecture; whereas overlapping architecture can capture within a linear size the functions represented by the non-overlapping architectures. 
The main workhorse behind the paper is the notion of rank of matricized grid tensors, following a paper of Cohen and Shashua, which captures the relationship between the inputs and the outputs, i.e., the function implemented by the neural network. \n\n(1) The results of the paper hold only for product pooling and a linear activation function, except for the representation layer, which allows general functions. It is unclear why the generalized convolutional networks are stated with such generality when the results apply only to this special case. That this is the case should be made clear in the title and abstract. The paper makes a point that generalized tensor decompositions can be potentially applied to solve the more general case, but since it is left as future work, the paper should make it clear throughout.\n\n(2) The experiment is minimal and even the given experiment is not described well. What data augmentation was used for the CIFAR-10 dataset? It is only mentioned that the data is augmented with translations and horizontal flips. What is the factor of augmentation? How much translation? These are important because there may be a much simpler explanation for the benefit of overlap: it is able to detect these translated patterns easily. Indeed, this simple intuition seems to be why the authors chose to set up the problem by introducing translations and flips. \n\n(3) It is unclear if the paper resolves the mystery that the authors set out to solve, which is a reconciliation of the following two observations: (a) why are non-overlapping architectures so common? (b) why is only slight overlap used in practice? The paper seems to claim that since overlapping architectures have higher expressivity, that answers (a). It appears that the paper does not answer (b) well: it points out that since there is exponential increase, there is no reason to increase it beyond a particular point. It seems the right resolution would be to show that after the overlap is set to a certain small value, there will be *only* a linear increase with increasing overlap; i.e., the paper should show that small overlap networks are efficient with respect to *large* overlap networks, a comparison that does not seem to be made in the paper. \n\n(4) Small typo: the dimensions seem to be wrong in the line below the equation on page 3. \n\nThe paper makes important progress on a highly relevant problem using a new methodology (borrowed from a previous paper). However, the writing is hurried and the high-level conclusions are not fully supported by theory and experiments. ", "We thank the reviewer for reading our paper and taking the time to review it. Our response follows:\n\n1. We present the concept of Generalized Convolutional Networks to link our overlapping extension of Convolutional Arithmetic Circuits (ConvACs) to standard ConvNets with overlaps — there is more than one way to extend ConvACs to overlapping architectures, and the GC framework we introduced shows why our specific extension is natural with respect to standard ConvNets. Given the reviewer’s suggestion, we have updated our abstract to emphasize that we examine the expressive efficiency of overlapping architectures by theoretically analyzing ConvACs as surrogates to standard ConvNets, while demonstrating empirically that our predictions hold for standard ConvNets as well.\n\n2. Regarding our experiments:\n\n * We have added the requested details on our experiments to a new revision of our submission. 
Specifically, we uniformly sampled translations with a maximal translation of 3 pixels in each direction, i.e. 10% of the dimensions of the CIFAR10 images. We also plan to release the source code for reproducing our results once the double blind phase of ICLR is over.\n\n * To resolve the concerns of the reviewer regarding an alternative explanation for our experiments that used translation augmentations, we have conducted an additional set of experiments. Instead of relying on spatial transformations, this time we use color augmentations, i.e. we randomly sample a constant shift to the hue, saturation and luminance of each pixel, as well as a randomly sampled multiplication factor just for the saturation and luminance (see new revision for details). This augmentation method is based on the one suggested by [1]. These new results exactly follow the behavior we have seen with spatial augmentations: (i) there is a very large gap between the non-overlapping and overlapping case, even when comparing against non-overlapping networks with many more channels, and (ii) when plotted with respect to the number of parameters, all overlapping architectures fall on the same curve. Thus, our results cannot be explained simply by some spatial-invariance type argument. Our full results are presented in the latest revision of our submission.\n\n3. In the paper we have proved a strong theoretical argument to address observation (a), i.e. why are non-overlapping architectures so common, and a weak theoretical argument to address observation (b), i.e. why only slight overlap is used in practice, that is then augmented by our experiments. Indeed the optimal theoretical argument for observation (b) would be to show an upper bound on the rank of the grid tensor with respect to the overlapping degree; however, at this point we were only able to show upper bounds for non-overlapping architectures — proving general upper bounds is left for future work. Nevertheless, given our proven lower bounds, we demonstrated (Proposition 2) that even architectures with very little overlapping are already separated from the non-overlapping case. This partially addresses observation (b) by showing that separation from the non-overlapping case does not require large local receptive fields. Additionally, in our experiments, we see that when we only modify the local receptive fields, the expressiveness of all overlapping architectures falls on exactly the same curve with respect to the number of parameters, which strengthens our argument that small receptive fields are sufficient, while leaving it as an open conjecture for future theoretical works to prove completely.\n\n4. We have fixed the typo in our new revision - thank you for pointing it out.\n\nReferences:\n[1] Dosovitskiy et al. “Discriminative Unsupervised Feature Learning with Convolutional Neural Networks” (NIPS 2014)", "We thank the reviewer for taking the time to review our paper and supporting it.", "We thank the reviewer for reading our paper and taking the time to review it. Our response follows:\n\n1. While ConvACs do not model all aspects of ReLU networks (as you noted), they do model what we believe are their most important properties: locality, sharing, and pooling, i.e. the architecture of the network. Additionally, it has already been shown in the past [1] that results on ConvACs could be transferred to ReLU networks with max pooling, which is a topic we wish to address in subsequent works.\n2. 
One of the contributions of our paper is to specifically highlight this area that has so far been mostly overlooked compared to the many studies (both empirical and theoretical) that have focused just on the depth property. Extending our understanding beyond depth is critical if we wish to understand why modern networks work better than the ones that came before, and how we can design better networks in the future. More specifically for the case of overlaps, in many respects the inclusion of overlapping convolutional filters in almost all neural networks used in practice is taken today for granted, while so far there weren’t any theoretical arguments for why that should be the case — in fact, there are actually some recent theoretical arguments for why there shouldn’t be any overlaps, e.g. universality [2], better convergence [3], and simply requiring less computation. On the other hand, practitioners have slowly moved to networks where the overlapping degree is considerably decreased compared to the past (where large convolutional kernels were very common, while today a mixture of 3x3 and 1x1 convolutions prevails). We explain both phenomena by proving that overlaps lead to exponential expressive efficiency, and that even with the degree of overlaps used in practice this exponential separation is already achieved.\n3. It is important to emphasize that though our analysis relies on grid tensors, our bounds are on the rank of different matricizations of these tensors, and hence our results could translate to approximation bounds for specific cases by examining the singular values of the matricized grid tensors. Particularly, the example we construct for our lower bound is very close to the identity matrix, and thus non-overlapping networks would not be able to approximate such a grid tensor to a good degree.\n\nReferences:\n[1] Cohen et al. “Convolutional Rectifier Networks as Generalized Tensor Decompositions” (ICML 2016)\n[2] Cohen et al. “On the Expressive Power of Deep Learning: A Tensor Analysis” (COLT 2016)\n[3] Brutzkus et al. “Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs” (ICML 2017)" ]
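A note on the quantity this thread keeps returning to: the separation arguments compare ranks of matricized grid tensors. A minimal numerical sketch of that quantity follows, assuming the grid tensor has already been obtained by evaluating the network on a grid of template inputs; the function name, mode-splitting convention, and the toy example are illustrative, not taken from the paper or its code.

```python
import numpy as np

def matricized_rank(grid_tensor, row_modes):
    """Rank of a matricization of a grid tensor: the modes listed in
    `row_modes` index the rows, all remaining modes index the columns."""
    all_modes = list(range(grid_tensor.ndim))
    col_modes = [m for m in all_modes if m not in row_modes]
    mat = np.transpose(grid_tensor, row_modes + col_modes)
    n_rows = int(np.prod([grid_tensor.shape[m] for m in row_modes]))
    return np.linalg.matrix_rank(mat.reshape(n_rows, -1))

# Sanity check: a rank-1 (outer-product) tensor has matricization rank 1
# under any row/column split of its modes.
t = np.einsum('i,j,k,l->ijkl', *(np.random.randn(3) for _ in range(4)))
assert matricized_rank(t, [0, 1]) == 1
```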
[ 6, 8, 6, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HkNGsseC-", "iclr_2018_HkNGsseC-", "iclr_2018_HkNGsseC-", "BypvOtGZz", "HkopHcseG", "BJZ4zdslf" ]
iclr_2018_HJ94fqApW
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers
Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computationally difficult and not-always-useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We evaluated our approach on several image learning benchmarks and demonstrate its interesting aspects and competitive performance.
accepted-poster-papers
The paper received scores either side of the borderline: 6 (R1), 5 (R2), 7 (R3). R1 and R3 felt the idea to be interesting, simple and effective. R2 raised a number of concerns which the rebuttal addressed satisfactorily. Therefore the AC feels the paper can be accepted.
val
[ "BJtJ3c_gG", "B1rak-5eG", "B1KcBUqlz", "r1lVkUJGz", "Skqs6Bh-G", "r1ud3r3WM", "HJklnrnbM", "ByUWjHhZf", "ByWHEgg-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "public" ]
[ "In this paper, the authors propose a data-dependent channel pruning approach to simplify CNNs with batch-normalizations. The authors view CNNs as a network flow of information and applies sparsity regularization on the batch-normalization scaling parameter \\gamma which is seen as a “gate” to the information flow. Specifically, the approach uses iterative soft-thresholding algorithm step to induce sparsity in \\gamma during the overall training phase of the CNN (with additional rescaling to improve efficiency. In the experiments section, the authors apply their pruning approach on a few representative problems and networks. \n\nThe concept of applying sparsity on \\gamma to prune channels is an interesting one, compared to the usual approaches of sparsity on weights. However, the ISTA, which is equivalent to L1 penalty on \\gamma is in spirit same as “smaller-norm-less-informative” assumption. Hence, the title seems a bit misleading. \n\nThe quality and clarity of the paper can be improved in some sections. Some specific comments by section:\n\n3. Rethinking Assumptions:\n-\tWhile both issues outlined here are true in general, the specific examples are either artificial or can be resolved fairly easily. For example: L-1 norm penalties only applied on alternate layers is artificial and applying the penalties on all Ws would fix the issue in this case. Also, the scaling issue of W can be resolved by setting the norm of W to 1, as shown in He et. al., 2017. Can the authors provide better examples here?\n-\tCan the authors add specific citations of the existing works which claim to use Lasso, group Lasso, thresholding to enforce parameter sparsity?\n\n4. Channel Pruning\n-\tThe notation can be improved by defining or replacing “sum_reduced”\n-\tISTA – is only an algorithm, the basic assumption is still L1 -> sparsity or smaller-norm-less-informative. Can the authors address the earlier comment about “a theoretical gap questioning existing sparsity inducing formulation and actual computational algorithms”?\n-\tCan the authors address the earlier comment on “how to set thresholds for weights across different layers”, by providing motivation for choice of penalty for each layer? \n-\tCan the authors address the earlier comment on how their approach provides “guarantees for preserving neural net functionality approximately”?\n\n5. Experiments\n-\tCIFAR-10: Since there is loss of accuracy with channel pruning, it would be useful to compare accuracy of a pruned model with other simpler models with similar param.size? (like pruned-resnet-101 vs. resnet-50 in ISLVRC subsection)\n-\tISLVRC: The comparisons between similar param-size models is exteremely useful in highlighting the contribution of this. However, resnet-34/50/101 top-1 error rates from Table 3/4 in (He et.al. 2016) seem to be lower than reported in table 3 here. Can the authors clarify?\n-\tFore/Background: Can the authors add citations for datasets, metrics for this problem?\n\n\nOverall, the channel pruning with sparse \\gammas is an interesting concept and the numerical results seem promising. The authors have started with right motivation and the initial section asks the right questions, however, some of those questions are left unanswered in the subsequent work as detailed above.", "This paper is well written and it was easy to follow. The authors propose prunning model technique by enforcing sparsity on the scaling parameter of batch normalization layers. 
This is achieved by forcing the outputs of some channels to be constant during training, via an adaptation of the ISTA algorithm to update the batch-norm parameters. \n\nThe authors evaluate the performance of the proposed approach on different classification and segmentation tasks. The method seems to be relatively straightforward to train and achieves good performance (in terms of performance/parameter reduction) compared to other methods on Imagenet.\n\nSome of the hyperparameters used (alpha and especially rho) seem to be set very ad-hoc. Could the authors explain their choices? How sensitive is the algorithm to these hyperparameters?\nIt would be nice to see empirically how much computation the proposed approach takes during training. How much longer does it take to train the model with the ISTA based constraint?\n\nOverall this is a good paper and I believe it should be accepted, given the authors are more clear on the details pointed out above.\n", "This paper proposes an interesting approach to prune a deep model from a computational point of view. The idea is quite simple: pruning using the connections in the batch norm layer. It is interesting to add the memory cost per channel into the optimization process. \n\nThe paper suggests normal pruning does not necessarily preserve the network function. I wonder if this is also applicable to the proposed method and how this can be evidenced. \n\nAs strong points, the paper is easy to follow and does a good review of existing methods. Then, the proposal is simple and easy to reproduce and leads to interesting results. It is clearly written (there are some typos / grammar errors). \n\nAs weak points:\n1) The paper claims the selection of \\alpha is critical but then, this is fixed empirically without proper sensitivity analysis. I would like to see proper discussion here. Why is \\alpha set to 1.0 in the first experiment while set to a different number elsewhere?\n\n2) How is the pruning (as post processing) performed for the base model (the so-called model A)?\n\nIn section 4, in the algorithmic steps. How does the 4th step compare to the statement in the initial part of the related work suggesting zeroed-out parameters can affect the functionality of the network?\n\n3) Results for CIFAR are nice although not really impressive as the main benefit comes from the fully connected layer as expected.", "Hi! Thanks for your response and the revision. With your clarifications, the difference is now made more clear, and we highly acknowledge your contributions.\n", "Thanks for your thoughtful review. We have given serious consideration to your concerns and revised our manuscript to accommodate your suggestions. Please see the details below. \n\n1. \"However, the ISTA, which is equivalent to an L1 penalty on \\gamma, is in spirit the same as the “smaller-norm-less-informative” assumption. Hence, the title seems a bit misleading. \"\n\nISTA for solving the L1-regularized problem has been theoretically justified only for strongly convex problems. It is however a heuristic when applied to neural networks. We generally treat it as a sparsity promoting method, rather than a solid optimization tool. \n\nOur paper does not challenge this assumption, but wants to emphasize when this assumption could be valid. Even for linear models, this assumption is questionable if predictor variables are not normalized ahead of time. We believe this is also the case for other regularized channel selection problems. \n\nWe have made our point clearer in the revision. 
(see Sec 3., the first paragraph)\n\n\n2. Q3.1\n\"… the specific examples are either artificial or can be resolved fairly easily.\"\n\nWe provide these two cases to exemplify how regularization could fail or be of limited use. There definitely exist ways to avoid the specific failures. In the first case, we show that it is not easy to have fine-grained control of the Ws’ norms across different layers. One has to either choose a uniform penalty for Ws in all layers or struggle with the reparameterization patterns. In the second case, we show that batch normalization is not compatible with W regularization. By constraining W to norm 1, one has to deal with a non-convex feasible set of parameters, causing extra difficulties in developing optimization for a data-dependent pruning method. (He et al., 2017) is a data-independent approach.\n \n3. Q3.2\n\"Can the authors add specific citations of the existing works which claim to use Lasso, group Lasso, thresholding to enforce parameter sparsity?\"\n\nWe have added related citations in the revision.\n\n\n\n4. Q4.1 \n\"The notation can be improved by defining or replacing “sum_reduced”\"\n\nWe added the definition of the relevant notation. \n\n\n\n5. Q4.2\n\"Can the authors address the earlier comment about “a theoretical gap questioning existing sparsity inducing formulation and actual computational algorithms”?\"\n\nOur current work still falls short of a strong theoretical guarantee. But we believe by working with normalized feature inputs and their regularized coefficients together, one is closer to a more robust and meaningful approach. Sparsity is not the goal. The goal is to find less important channels using a sparsity-inducing formulation. \n\n\n6. Q4.3\n\"Can the authors address the earlier comment on “how to set thresholds for weights across different layers”, by providing motivation for the choice of penalty for each layer?\"\n\nIn our method, we set the penalty for each layer to be proportional to the memory needed per channel. Thanks to the benefits of batch normalization, we could have this fine-grained control of penalties. (see Algorithm Step 1.)\n\n\n7. Q4.4\n\"Can the authors address the earlier comment on how their approach provides “guarantees for preserving neural net functionality approximately”?\"\n\nAs we mentioned, ISTA is used as a heuristic in training the network and selecting channels to be pruned. Once this step is done, we do have a guarantee that after pruning all channels whose gammas are zero, the functionality of the CNN is kept identical if no padding is used. \n\n\n\n8. Q5.1\n\"CIFAR-10: Since there is loss of accuracy with channel pruning, it would be useful to compare the accuracy of a pruned model with other simpler models with similar param. size?\"\n\nWe do have this comparison done, which was not included in the initial submission. We have included it in the revision. “We also train a reference ConvNet from scratch whose channel sizes are 32-64-64-128 with 224,008 parameters in total and a test accuracy of 86.3\\%. The referenced model is not as good as Model B, which has a smaller number of parameters and higher accuracy. ”\n\n\n9. Q5.2\n\"ILSVRC: … However, resnet-34/50/101 top-1 error rates from Table 3/4 in (He et al. 2016) seem to be lower than reported in table 3 here. Can the authors clarify?\"\n\nIn (He et al. 2016), the error is evaluated based on ensembling 10 random crops of a test image. Our reported number is based on a single crop, which has been used as a practice in other related work. \n\n\n\n10. 
Q5.3\n\"Fore/Background: Can the authors add citations for datasets, metrics for this problem?\"\n\nWe added in the revision. Note some labels are originally created/collected by researchers of the company, which have not yet been released. We use the mean IOU ( https://www.tensorflow.org/api_docs/python/tf/metrics/mean_iou ) to evaluate the model. ", "Thanks for your review. Please see below how we address your concerns in the revision. \n\n1. \"Some of the hyperparameters used (alpha and specially rho) seem to be used very ad-hoc. Could the authors explain their choices? How sensible is the algorithm to these hyperparameters?\"\n\nThere are several hyperparameters one has to carefully choose, and they have a mixed effect on the number of iterations needed and model performance. In the revision, we include a section (Sec 4.3) guiding one to properly tune those parameters. \n\nHere is what we have summarized:\n\n\\mu (learning rate): larger \\mu leads to fewer iterations for convergence and faster progress of sparsity, but if \\mu is too large, the SGD approach wouldn’t converge. \n\n\\rho (sparse penalty): larger \\rho leads to more sparse model at convergence. If trained with a very large \\rho, all channels will be eventually pruned.\n\n\\alpha (rescaling): we use \\alpha other than one only for pretrained models, we usually choose \\alpha from {0.001, 0.01, 0.1, 1} and larger \\alpha slows down the progress of sparsity. \n\n\n\n2. \"It would be nice to see empirically how much of computation the proposed approach takes during training. How much longer does it takes to train the model with the ISTA based constraint?\"\n\nPer-iteration computation is almost the same as regular SGD. For train-from-scratch setting, the number of iterations is roughly the same as the number one needs for SGD. For pre-trained ImageNet model (which is typically trained with a hundred epochs), our method takes about 5~10 epochs for ISTA-based training, and takes another few epochs for regular fine-tuning. We have included this in the revision. \n", "Thanks for your review. It helps us to prepare the revision. We want to address all your concerns in the revision as below. \n\n1. \"The paper suggests normal pruning does not necessarily preserve the network function. I wonder if this is also applicable to the proposed method and how can this be evidenced. \"\n\nOur approach enforces sparsity of scaling parameters in BN, sharing a similar spirit of regularized linear models. By using Lasso/ridge regression to select important predictors in linear models, one always has to first normalize each predictor variable. Otherwise, the result might not be explanatory anyway. For example, Ridge penalizes more the predictors which have low variance, and Lasso likely enforces sparsity of coefficients which are already small in OLS. [1]\n\nThose issues are widely considered by statisticians, and we believe it also happens in the neural networks. We were not trying to prove our method should be superior, but rather want to remind the community that small magnitude is not necessarily a good reason to prune. Combining with some additional assumptions (e.g. normalized predictors, or in our case, using BN) might be a plausible excuse. We are interested to see if there is any supporting theoretical guarantee in the future. \n\n2. \"The paper claims the selection of \\alpha is critical but then, this is fixed empirically without proper sensitivity analysis. I would like to see proper discussion here. 
Why is \\alpha set to 1.0 in the first experiment while set to a different number elsewhere?\"\n\nIn the first experiment (CIFAR-10), we train the network from scratch and allocate enough steps for both $\\gamma$ and $W$ to adjust their own scales. Thus, initialization with an improper scale of $\\gamma$-$W$ is not really an issue given we optimize with enough steps. \n\nBut for the pre-trained models which were originally optimized without any constraint on $\\gamma$, the $\\gamma$ scales are often unanticipated (compared to the learning rate and penalty). It actually takes as many steps as training from scratch for $\\gamma$ to warm up. By adopting the rescaling trick of setting $\\alpha$ to a smaller value, we are able to skip the warm-up stage and quickly start to sparsify the $\\gamma$s. For example, it might take more than a hundred epochs to train ResNet-101, but it only takes about 5-10 epochs to complete the pruning and a few more epochs to fine-tune. \nWe have added those details in the revision. \n\n\n3. \"How is the pruning (as post processing) performed for the base model (the so-called model A)? In section 4, in the algorithmic steps. How does the 4th step compare to the statement in the initial part of the related work suggesting zeroed-out parameters can affect the functionality of the network?\"\n\nBy the end of stochastic training, some gammas will stay at (exactly) zero by the nature of ISTA. We then work on removing the channels with those zero gammas from the base model (the so-called post-processing) using the method described in Sec 4.1. Previous work adopting regularization methods still needs to select a threshold for each layer’s parameters in order for them to be zeroed out. Zeroing out small-magnitude parameters changes the functionality of the CNN, but our approach keeps the functionality of the CNN identical after post-processing, if there is no padding in the base model. \n\n\n4. \"Results for CIFAR are nice although not really impressive as the main benefit comes from the fully connected layer as expected.\"\n\nThe two base models we experimented with for CIFAR-10 (CNN and ResNet-20) are convolutional networks with all layers being convolutions (except the last one). We have reported the number of channels in each convolution layer that have been pruned in the paper (See Table 1 & 2). The result suggests our method is able to significantly reduce the channel size of each convolution layer without sacrificing much performance. For example, comparing Model A with Model B/C, the pruning mainly happened at convolution layers. For the ResNet-20 experiment, we only attempt to prune convolutions in the residual modules; the last layer is an average pooling (with no parameters). \n\n\n\n[1] Hastie, T. et al. The Elements of Statistical Learning, Springer. \n\n", "Hi Zhuang and your collaborators, \n\nThanks for your comment! \n\nIn the revision, we have acknowledged that [1] is a prior work, but it only became available on arXiv after we had fully developed our method in the summer. We noticed [1] during its presentation at ICCV 2017 (which occurred in late October). We felt obliged to cite the work in our submission, as we all agreed it is a very relevant paper. We were sincerely not aware of the arXiv version posted in late August.\n\nAlthough the basic idea of “enforcing sparsity on BN’s gamma” was first proposed in the work of [1], there are several differences in techniques and experiments that we want to highlight here in order to justify that our work makes a significant contribution to this new concept.\n\n1. 
As mentioned in your comments, the major difference is that we use ISTA rather than regularization to enforce the sparsity of BN’s gamma. The use of ISTA enables us to explicitly monitor the sparsification progress of the gammas. As we explained in the revised version, this helps one to effectively tune the hyper-parameters. \n\nISTA, when applied to linear models, is a standard technique for handling regularization. But there has been no theoretical guarantee for its use in non-convex problems. In fact, we use ISTA in our work as a sparsity promoting heuristic. \n\n\n2. Based on our understanding, the mentioned \"sparsify BN's \\gamma and prune channels accordingly\" technique we use is not exactly the same as the one used by the authors of [1]. \n\nAccording to [1], their method “... prune[s] channels with near-zero scaling factors, by removing all their incoming and outgoing connections and corresponding weights.” and “Pruning may temporarily lead to some accuracy loss, when the pruning ratio is high. But this can be largely compensated by the followed fine-tuning process on the pruned network.” \n\nIn our ISTA-based method, some gammas will stay exactly at zero (rather than near zero) after training. We prune channels with zero scaling factors. In addition to removing connections as they did, we also adjust the bias term in the follow-up layers to compensate for the effects of removing BN’s beta. By following these procedures, the network functionality is exactly preserved if there is no padding in the convolutions. (See Sec. 4.1)\n\n\n3. The gamma-W rescaling effects are not explicitly addressed in [1]. We feel it is an important issue to be addressed in order to use this new technique for large pre-trained models. \n\nIn our experiments, we discovered that if gamma is not properly scaled before training, it takes a lot more steps for it to become sparse. Such cases could happen because the original pre-trained model was not trained with any constraints on the gammas. Without explicitly addressing this issue, it could be impractical to apply the method to many large pre-trained models, as we did in our ImageNet experiments. In the ImageNet benchmark of [1], they experimented with their method on VGG-A. It is unclear how much computation [1] could take to prune larger models (e.g. ResNets or Inceptions on ImageNet).\n\n\n\n4. We essentially hold a quite different set of rationales and motivations for proposing the concept from those argued in [1]. We feel encouraged that our thoughts have been recognized by you as well as three other reviewers. Besides the techniques and empirical evidence developed in our paper, we believe those discussions will also be good additions to the field. \n\nWe have accommodated the above discussions in our revision. \n", "Hi authors, nice paper, well done! In particular, I think the rationale for using Batch Normalization's \\gamma scaling factors as the pruning criterion is explained in a very clear fashion. \n\nHowever, given this \"sparsify BN's \\gamma and prune channels accordingly\" technique is already proposed by the ICCV paper [1], I think it would be great to include an experimental comparison. 
In my understanding, the essential difference is that you use ISTA to sparsify \\gamma instead of L1 regularization as in [1].\n\nMoreover, I think the \"parallel work\" argument might not be fully valid, as [1] was published and presented before ICLR submission.\n\nNevertheless, I believe the paper is still a good contribution to ICLR if [1] is cited as prior work (rather than parallel work), and a comparison is conducted.\n\n[1] Liu et al. ICCV 2017. Learning Efficient Convolutional Networks through Network Slimming. https://arxiv.org/abs/1708.06519" ]
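Since much of this thread turns on how the ISTA update differs from plain L1 regularization, a minimal sketch of the step being discussed may help: a gradient step on the data loss followed by soft-thresholding of the BN scaling vector, which drives some entries to exactly zero rather than merely near zero. The per-layer penalty rho is, per the responses above, set proportional to the memory cost per channel. All names are illustrative; this is not the authors' code.

```python
import numpy as np

def ista_step(gamma, grad_gamma, mu, rho):
    """One ISTA step on a BN scaling vector: (S)GD step on the data loss,
    then the proximal operator of mu * rho * ||gamma||_1 (soft-threshold).
    Channels whose gamma lands exactly at zero become pruning candidates,
    with BN's beta folded into the next layer's bias as post-processing."""
    z = gamma - mu * grad_gamma
    return np.sign(z) * np.maximum(np.abs(z) - mu * rho, 0.0)

gamma = np.array([0.80, 0.02, -0.50, 0.01])
# With a zero data gradient, small entries hit exactly 0 after one step.
print(ista_step(gamma, np.zeros_like(gamma), mu=0.1, rho=0.3))
```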
[ 5, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJ94fqApW", "iclr_2018_HJ94fqApW", "iclr_2018_HJ94fqApW", "ByUWjHhZf", "BJtJ3c_gG", "B1rak-5eG", "B1KcBUqlz", "ByWHEgg-z", "iclr_2018_HJ94fqApW" ]
iclr_2018_SJiHXGWAZ
Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting
Spatiotemporal forecasting has various applications in the neuroscience, climate and transportation domains. Traffic forecasting is one canonical example of such a learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines.
accepted-poster-papers
The paper received highly diverging scores: 5 (R1), 9 (R2), 4 (R3). Both R1 and R3 complained about the comparisons to related methods. R3 suggested some kNN and GP baselines, while R1 mentioned concurrent work using deepnets for traffic prediction. R3 is a real expert on the field. R2 and R1, not so. R2's review very positive, but vacuous. Rebuttal seems to counter R1 and R3 well. It's a close call but the AC is inclined to accept since it's an interesting application of (graph-based) deepnets.
train
[ "r1zoeeFgf", "r1pn22FeG", "H1AlgBcxf", "S1W9DvMmM", "ryjAIwfQG", "Hy2t8DfQz", "B1IMLDfmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper proposes to build a graph where the edge weight is defined using the road network distance which is shown to be more realistic than the Euclidean distance. The defined diffusion convolution operation is essentially conducting random walks over the road segment graph. To avoid the expensive matrix operation for the random walk, it empirically shows that K = 3 hops of the random walk can give a good performance. The outputs of the graph convolutionary operation are then fed into the sequence to sequence architecture with the GRU cell to model the temporal dependency. Experiments show that the proposed architecture can achieve good performance compared to classic time series baselines and several simplified variants of the proposed model. \n\nAlthough the paper argues that several existing deep-learning based approaches may not be directly applied in the current setting either due to using Euclidean distance or undirected graph structure, the comparisons are not persuasive. For example, the approach in the paper \"DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting\" also consider directed graph and a diffusion effect from 2 or 3 hops away in the neighboring subgraph of a target road segment. \n\nFurthermore, the paper proposes to use two convolution components in Equation 2, each of which corresponds to out-degree and in-degree direction, respectively. This effectively increase the number of model parameters to learn. Compared to the existing spectral graph convolution approach, it is still not clear how its performance will be by using the same number of parameters. The experiments will be improved if it can compare with \"Spatio-temporal graph convolutional neural network: A deep learning framework for traffic forecasting\" using roughly the same number of parameters.", "The paper adresses an important task to build a data-driven model traffic forecasting model. The paper takes into consideration the spatio-temporal autocorralation and tackles this with a diffusion process for convolutional recurrent neural networks. The paper lacks a comparison to other models that aim to include spatio-temporal dependencies used for this problem - namely Gaussian Processes, spatial k-NN.\nThe paper motivates the goal to obtain smooth traffic predictions, but traffic is not a smooth process, e.g. traffic lights and intersections cause non smooth effects. therefore it is difficult to follow this argumentation. Statements as 'is usually better at predicting start and end of peak hours' (caption of Figure 6) should be supported by statistical test that stress significance of the statement.\nThe method performs well on the presented data - in comparison to the models that do not consider autocorrelation. This might be because tests were not performed with commonly used traffic models or the traffic data was in favour for this model - it remains unclear, whether the proposed method really contributes better predictions or higher scalibility or faster computation to the mentioned problem. How does the proposed model behave in case of a shift in the traffic distribution? How do sudden changes (accident, road closure, etc.) 
affect the performance?", "Summary of the reviews:\nPros:\n•\tA novel Diffusion Convolutional Recurrent Neural Network framework for traffic forecasting\n•\tApplies bidirectional random walks with nice theoretical analysis to capture the spatial dependency\n•\tNovel applications of the sequence-to-sequence architecture and the scheduled sampling technique to modeling the temporal dependency in the traffic domain\n•\tThis work is very well written and easy to follow\nCons:\n•\tNeeds some minor re-organization of contents between the main sections and appendix\n\nDetailed comments:\nD1: Some minor re-organization of contents between the main sections and the appendix will help the reader reduce cross-section references. Some examples:\n1.\tLemma 2.1 can be put into the appendix since it is not proposed by this work, while the new theoretical analysis of Appendix 5.2 (or at least a summary) can be moved to the main sections\n2.\tFigure 9 can be moved earlier to the main section since it well supports one of the contributions of the proposed method (using the DCRNN to capture the spatial correlation)\nD2: Some discussion regarding the comparison of this work to some state-of-the-art graph embedding techniques using different deep neural network architectures would be a plus", "We sincerely thank the reviewer for the detailed and constructive comments. We will follow the suggestions to reorganize the content and add more discussion about the relationship between the proposed approach and state-of-the-art deep neural network based graph embedding techniques.", "We sincerely thank the reviewer for the detailed and helpful comments. First, we would like to clarify the main contribution of our work: the proposed model, DCRNN, achieves significant performance gains with theoretical justifications - we derive diffusion convolution from the property of random walks [1], capturing the diffusion nature of traffic. And we show in theory that the popular spectral convolution, i.e., ChebNet [2], is a special case of our method (Proposition 2.2). We further incorporate diffusion convolution with RNN in a non-trivial way and improve long-term forecasting performance with scheduled sampling. Compared with existing work, our model is one of the first with significant performance improvement in traffic prediction and insightful theoretical justifications. 
Besides, we derive DCRNN from the property of random walk and show that the popular spectral convolution ChebNet is a special case of our method.\n\nWe will reference and differentiate those work in the next version.\n\nReference:\n[1] S. Teng et al. Scalable algorithms for data and network analysis. Foundations and Trends\nR in Theoretical Computer Science.\n[2] M.l Defferrard et al. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.\n[3] X. Cheng et al. DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting. http://arxiv.org/abs/1709.09585", ">> \"How does the proposed model behave in case of a shift in the traffic distribution? How do sudden changes (accident, road closure, etc.) affect the performance?\"\nWe have not explicitly studied the model's behavior in case of the shift in the traffic distribution or accidents. Generally, RNNs can capture nonlinear dynamics better than traditional linear models [9] and the diffusion convolution can model the dependency among nearby sensors which should be helpful. Finally, studying the effect of sudden changes is an interesting problem and can be a potential future work.\n\nReference: \n[1] S. Teng et al. Scalable algorithms for data and network analysis. Foundations and Trends\nR in Theoretical Computer Science.\n[2] M.l Defferrard et al. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering\n[3] Y. Xie et al. Gaussian Processes for Short-Term Traffic Volume Forecasting.\n[4] P. Cai et al. A spatiotemporal correlative k-nearest neighbor model for short-term traffic multi-step forecasting\n[5] G. Fusco. Short-term speed predictions exploiting big data on large urban road networks.\n[6] S. R. Chandra and H. Al-Deek. Predictions of Freeway Traffic Speeds and Volumes Using Vector Autoregressive Models.\n[7] M Lippi et al. Short-Term Traffic Flow Forecasting: An Experimental Comparison of Time-Series Analysis and Supervised Learning.\n[8] B. L. Smith et al. Comparison of parametric and nonparametric models for traffic flow forecasting.\n[9] J. T. Connor et al. Recurrent Neural Networks and Robust Time Series Prediction. \n", "Response to ICLR 2018 Conference Paper804 AnonReviewer3\nWe sincerely thank the reviewer for the detailed comments. However, we cannot agree with the reviewer’s assessment regarding the significance of this paper. The paper proposes a novel Diffusion Convolutional Recurrent Neural Network which captures both the spatial and temporal dependencies among traffic time series with theoretical justifications. We derive diffusion convolution from the property of random walk [1] capturing the diffusion nature of traffic, and we show in theory that the popular spectral convolution, i.e., ChebNet [2], is a special case of our method (Proposition 2.2). We further incorporate diffusion convolution with RNN in a non-trivial way and improve long-term forecasting performance with scheduled sampling. Compared with existing work, our model is one of the first with significant performance improvement in traffic prediction and insightful theoretical justifications. 
\n\nBelow we address specific questions/concerns:\n\n>> \"The paper lacks a comparison to other models that aim to include spatiotemporal dependencies used for this problem - namely Gaussian Processes, spatial k-NN.\"\n\nWe do compare models that consider spatial and temporal dependencies, e.g., VAR and FC-LSTM, which take as input time series from all sensors, and model both the dependency among different sensors and the dependency among different time steps. \n\nComparison with Gaussian Processes (GPs) [3] and Spatial KNN [4]:\n- GPs are hard to scale to the large dataset and are generally not suitable for relatively long-term traffic prediction like 1 hour (i.e.,12 steps ahead), as the variance can be accumulated and becomes extremely large.\n- Though spatial KNN [4] considers both the spatial and the temporal dependencies, it has the following drawbacks:\n 1) As shown in [5], Spatial KNN performs independent forecasting for each individual road. The prediction of a road is a weighted combination of its own historical traffic speeds. This makes it hard for Spatial KNN to fully utilize information from neighbors.\n 2) Spatial KNN is a non-parametric approach and each road is modeled and calculated separately [5], which may make it hard to generalize to unseen situations and to scale to large datasets. \n 3) In Spatial KNN, all the similarities are calculated using hand-designed metrics with few learnable parameters which may limit its representation power. \n\nWe will add more discussion about these methods in the next version.\n\n>> \"The paper motivates the goal to obtain smooth traffic predictions...\" \nOur motivation is to capture the temporal and spatial dependency among traffic time series rather than to generate smooth predictions. The reviewer may misunderstand the description of Fig. 6. Though in Fig. 6, we show that the model's predictions are usually smooth, this is not our motivation. The smoothness can be explained by the following facts:\n- The objective is to predict the traffic speed averaged over 5 mins which is usually smooth even with traffic lights and intersections.\n- The traffic speed time series are collected from the highway where traffic lights and intersections are less common.\n\nBesides, for clarification, by saying \"traffic signal\" we mean the graph signal of traffic as described in Section 2.1 rather than \"traffic light\".\n\n>> \"The method performs well on the presented data - in comparison to the models that do not consider autocorrelation\"\nIn fact, most baselines that our method beat do consider autocorrelation. For example, VAR, FC-LSTM, which take as input multiple time series, are able to model the autocorrelation among different time series and among different time steps. Specifically, we visualize the spatial-temporal dependencies captured by the VAR model in Fig. 9. \n\n>> \"This might be because tests were not performed with commonly used traffic models or the traffic data was in favour for this model\"\nWe do not agree with this comment. We validate our methods on two real-world datasets and have observed consistent improvements. We compared the proposed model with ARIMA, historical average, SVR, VAR, FNN and LSTM based method, which are widely used for traffic forecasting [6, 7, 8].\nBesides, the datasets we used come from Los Angeles and the Bay Area/San Francisco, with different traffic patterns. These datasets are fair representatives for traffic prediction. 
In addition, we want to emphasize that the traffic sensors were not cherry-picked in our experiments. We simply chose an area and performed experiments on all the working sensors in that area. " ]
[ 5, 4, 9, -1, -1, -1, -1 ]
[ 3, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_SJiHXGWAZ", "iclr_2018_SJiHXGWAZ", "iclr_2018_SJiHXGWAZ", "H1AlgBcxf", "r1zoeeFgf", "B1IMLDfmf", "r1pn22FeG" ]
iclr_2018_SkHDoG-Cb
Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings
Collecting a large dataset with high quality annotations is expensive and time-consuming. Recently, Shrivastava et al. (2017) propose Simulated+Unsupervised (S+U) learning: It first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data to the ones that resemble real data, and then trains a learning model on the translated data. Bousmalis et al. (2017) propose a similar framework that jointly trains a translation mapping and a learning model. While these algorithms are shown to achieve the state-of-the-art performances on various tasks, they may have room for improvement, as they do not fully leverage the flexibility of the data simulation process and consider only the forward (synthetic-to-real) mapping. Inspired by this limitation, we propose a new S+U learning algorithm, which fully leverages the flexibility of data simulators and bidirectional mappings between synthetic data and real data. We show that our approach achieves improved performance on the gaze estimation task, outperforming Shrivastava et al. (2017).
accepted-poster-papers
Split opinions on paper: 6 (R1), 3 (R2), 6 (R3). Much of the debate centered on the novelty of the algorithm. R2 felt that the paper was a straight-forward combination of CycleGAN with S+U, while R3 felt it made a significant contribution. The AC has looked at the paper and the reviews and discussion. The topic is very interesting and topical. The experiments are ok, but would be helped a lot by including the real/synth car data currently in appendix B: seeing the method work on natural images is much more compelling. The approach still seems a bit incremental: yes, it's not a straight combination but the extra stuff isn't so profound. The AC is inclined to accept, just because this is an interesting problem.
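Much of the discussion below concerns how the cycle-consistency and feature-consistency losses interact (the reviews report lambda_cyc = 10 and lambda_feature set to 0 or 0.5). As a reading aid, here is a minimal PyTorch-style sketch of a combined generator objective of that shape, assuming sigmoid-output discriminators and a fixed feature extractor phi; the exact form of the feature term and all names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def su_generator_loss(x_sim, x_real, G_fwd, G_bwd, D_real, D_sim, phi,
                      lam_cyc=10.0, lam_feat=0.5):
    """Generator-side objective: two GAN terms (non-saturating form, one of
    several valid choices), CycleGAN-style cycle consistency in both
    directions, and a feature-consistency term meant to keep label
    information (e.g. gaze direction) intact under translation."""
    fake_real, fake_sim = G_fwd(x_sim), G_bwd(x_real)
    eps = 1e-8  # numerical guard inside the logs
    l_gan = -(torch.log(D_real(fake_real) + eps).mean()
              + torch.log(D_sim(fake_sim) + eps).mean())
    l_cyc = F.l1_loss(G_bwd(fake_real), x_sim) + F.l1_loss(G_fwd(fake_sim), x_real)
    l_feat = F.l1_loss(phi(fake_real), phi(x_sim))
    return l_gan + lam_cyc * l_cyc + lam_feat * l_feat
```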
train
[ "S1uLIj8lG", "BJ__hY9lz", "BJ7oBjolf", "HyXoBh3zM", "Bk_imn2fz", "ByisG2nMG", "Hk-y9aJZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "* sec.2.2 is about label-preserving translation and many notations are introduced. However, it is not clear what label here refers to, and it does not shown in the notation so far at all. Only until the end of sec.2.2, the function F(.) is introduced and its revelation - Google Search as label function is discussed only at Fig.4 and sec.2.3.\n* pp.5 first paragraph: when assuming D_X and D_Y being perfect, why L_GAN_forward = L_GAN_backward = 0? To trace back, in fact it is helpful to have at least a simple intro/def. to the functions D(.) and G(.) of Eq.(1). \n* Somehow there is a feeling that the notations in sec.2.1 and sec.2.2 are not well aligned. It is helpful to start providing the math notations as early as sec.2.1, so labels, pseudo labels, the algorithm illustrated in Fig.2 etc. can be consistently integrated with the rest notations. \n* F() is firstly shown in Fig.2 the beginning of pp.3, and is mentioned in the main text as late as of pp.5.\n* Table 2: The CNN baseline gives an error rate of 7.80 while the proposed variants are 7.73 and 7.60 respectively. The difference of 0.07/0.20 are not so significant. Any explanation for that?\nMinor issues:\n* The uppercase X in the sentence before Eq.(2) should be calligraphic X", "Review, ICLR 2018, Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings\n\nSummary:\n\nThe paper presents several extensions to the method presented in SimGAN (Shirvastava et al. 2017). \nFirst, it adds a procedure to make the distribution of parameters of the simulation closer to the one in real world images. A predictor is trained on simulated images created with a manually initialized distribution. This predictor is used to estimate pseudo labels for the unlabeled real-world data. The distribution of the estimated pseudo labels is used produce a new set of simulated images. This process is iterated. \nSecond, it adds the idea of cycle consistency (e.g., from CycleGAN) in order to counter mode collapse and label shift. \nThird, since cycle consistency does not fully solve the label shift problem, a feature consistency loss is added.\nFinally, in contrast to ll related methods, the final system used for inference is not a predictor trained on a mix of real and “fake” images from the real-world target domain. Instead the predictor is trained purely on synthetic data and it is fed real world examples by using the back/real-to-sim-generator (trained in the conjunction with the forward mapping cycle) to map the real inputs to “fake” synthetic ones.\n\nThe paper is well written. The novelty is incremental in most parts, but the overall system can be seen as novel. \n\nIn particular, I am not aware of any published work that uses of the (backwards) real-to-sim generator plus sim-only trained predictor for inference (although I personally know several people who had the same idea and have been working on it). I like this part because it perfectly makes sense not to let the generator hallucinate real-world effects on rather clean simulated data, but the other way around, remove all kinds of variations to produce a clean image from which the prediction should be easier.\n\nThe paper should include Bousmalis et al., “Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks”, CVPR 2017 in its discussion, since it is very closely related to Shirvastava et al. 
2017.\n\nWith respect to the feature consistency loss, the paper should also discuss related work defining losses over feature activations for very similar reasons, such as in image stylization (e.g. L. A. Gatys et al. “Image Style Transfer Using Convolutional Neural Networks” CVPR 2016, L. A. Gatys et al. “Controlling Perceptual Factors in Neural Style Transfer” CVPR 2017), or the recently presented “Photographic Image Synthesis with Cascaded Refinement Networks”, ICCV 2017.\nBousmalis et al., “Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping”, arXiv:1709.07857, even use the same technique in the context of training GANs.\n\nAdaptive Data Generation:\nI do not fully see the point in matching the distribution of parameters of the real world samples with the simulated data. For the few, easily interpretable parameters in the given task it should be relatively easy to specify reasonable ranges. If the simulation in some edge cases produces samples that are beyond the range of what occurs in the real world, that is maybe not very efficient, but I would be surprised if it ultimately hurt the performance of the predictor. \nI do see the advantage when training the GAN though, since a good discriminator would learn to pick out those samples as generated. Again, though, I am not very sure whether that would hurt the performance of the overall system in practice.\n\nLimiting the parameters to those values of the real world data also seems rather restricting. If the real world data does not cover certain ranges, not because those values are infeasible or infrequent, but just because it so happens that this range was not covered in the data acquisition, the simulation could be used to fill in those ranges. \nAdditionally, the whole procedure of training on sim data and then pseudo-labeling the real data with it is based on the assumption that a predictor trained on simulated data only already works quite well on real data. It might be possible in the case of the task at hand, but for more challenging domain adaptation problems it might not be feasible. \nThere is also no guarantee for the convergence of the cycle, which is also evident from the experiments (Table 1: after three iterations the angle error increases again). (The use of the Hellinger distance is unclear to me since it, as explained in the text, does not correspond with what is being optimized). In the experiments the cycle was stopped after two iterations. However, how would you know when to stop if you didn’t have ground truth labels for the real world data?\n\nComparisons:\nThe experiments should include a comparison to using the forward generator trained in this framework to train a predictor on “fake” real data and test it on real data (i.e. a line “ours | RS | R | ?” in Table 2, and a more direct comparison to Shrivastava). This would be necessary to prove the benefit of using the back-generator + sim trained predictor.\n\n\nDetailed comments:\n* Figure 1 seems not to be referenced in the text.\n* I don’t understand the choice for reduction of the sim parameters. Why were, e.g., the yaw and pitch parameters of the eyeball set equal to those of the camera? Also, I guess there is a typo in the last equality (pitch and yaw of the camera?).\n* The Bibliography needs to be checked. 
Names of journals and conferences are inconsistent, long and short forms mixed, year several times, “Proceedings” multiple times, ...\n", "General comment:\n\nThis paper proposes a GAN-based method which learns bidirectional mappings between real data and simulated data. The proposed method builds upon the CycleGAN and the Simulated+Unsupervised (S+U) learning frameworks. The authors show that the proposed method is able to fully leverage the flexibility of simulators by presenting an improved performance on the gaze estimation task.\n\nDetailed comments:\n\n1. The proposed method seems to be a direct combination of the CycleGAN and the S+U learning. Firstly, the CycleGAN proposes to learn a bidirectional GAN model for image translation. Here the authors apply it by \"translating\" the simulated data to real data. Moreover, once the mapping from simulated data to real data is learned, the S+U learning framework proposes to train a model on the simulated data.\n\nHence, this paper seems to directly apply S+U learning to CycleGAN. The properties of the proposed method come immediately from CycleGAN and S+U learning. Without deeper insights into the proposed method, the novelty of this paper is not sufficient.\n\n2. When discussing CycleGAN, the authors claim that CycleGAN is not good at preserving the labels. However, it is not clear what the meaning of preserving labels is. It would be nice if the authors clearly define this notion and rigorously discuss why CycleGAN is insufficient to reach such a goal and why combining with S+U learning would help.\n\n3. This work seems closely related to S+U learning. It would be nice if the authors also summarize S+U learning in Section 2, in a similar way to how they summarize CycleGAN in Section 2.2.\n\n4. In Section 2.2, the authors claim that the cycle-consistency loss in CycleGAN is not sufficient for label preservation. To improve, they propose to use the feature consistency loss. However, the final loss function also contains this cycle-consistency loss. Moreover, in the experiments, the authors indeed use the cycle-consistency loss by setting \\lambda_{cyc} = 10. But the feature consistency loss may effectively not be used, as \\lambda_{feature} is set to 0 or 0.5. From Table 2, it appears that using the feature-consistency loss does not have a significant effect on the performance.\n\nIt would be nice to conduct more experiments to show the effect of adding the feature-consistency loss. Say, setting \\lambda_{cyc} = 0 and trying different values of \\lambda_{feature}. Otherwise it is unclear whether the feature-consistency loss is necessary.\n", "Dear reviewers,\n\nFirst of all, we really appreciate your constructive review and detailed comments on our work. Please find our detailed responses below. We also uploaded our revised manuscript where the reviewers' comments and feedback are reflected. Some of the major changes are colored in blue to help the reviewer locate them. \n\nPlease let us know if the reviewers have any further questions or comments. \n\nThanks,\nAuthors", "Dear Reviewer1, \n\nFirst of all, we really appreciate your constructive review and detailed comments on our work. We would like to share our responses to the concerns raised by the reviewer.\n\n1. pp.5 first paragraph: when assuming D_X and D_Y to be perfect, why L_GAN_forward = L_GAN_backward = 0? To trace back, in fact it is helpful to have at least a simple intro/def. to the functions D(.) and G(.) of Eq.(1). 
\n=> Assume a perfect discriminator that always outputs 1 if the image is from the target (real) domain and 0 if it is from the source (fake) domain, regardless of how precise the generators are. Then, the standard GAN loss becomes 0 since L_{GAN}(X,Y) = E_{Y}[log(1)] + E_{X}[log(1-0)] = 0. To make the descriptions clearer, we added the definition of G() to Sec 2.1 along with other notations, and the definition of D() before equation (1). \n\n2. Somehow there is a feeling that the notations in sec.2.1 and sec.2.2 are not well aligned. It is helpful to start providing the math notations as early as sec.2.1, so labels, pseudo labels, the algorithm illustrated in Fig.2 etc. can be consistently integrated with the rest of the notations. \n=> We appreciate your great suggestion. As mentioned above, we included a separate subsection on notations and definitions. Further, we modified Sec. 2 to align the notations used in the two different sections. \n\n3. F() is first shown in Fig.2 at the beginning of pp.3, and is mentioned in the main text as late as pp.5. \n=> We fixed this. \n\n4. Minor issues: * The uppercase X in the sentence before Eq.(2) should be calligraphic X\n=> We fixed this. \n\n5. Table 2: The CNN baseline gives an error rate of 7.80 while the proposed variants are 7.73 and 7.60 respectively. The differences of 0.07/0.20 are not so significant. Any explanation for that? \n=> We would like to first clarify that the CNN baseline alone gives an error rate of 11.2. The scheme that achieves the error rate of 7.80 is not a simple CNN baseline but the SimGAN (CVPR’17) approach (forward mapping + training a predictor on the forward-translated images), which is the previous state-of-the-art. With regard to the significance of the improvements, we believe that the error rate achieved by our algorithm is very close to the best error rate that one can hope for. Hence, in this regime, improving the state-of-the-art performance even by a small margin requires considerable effort. \n\nThanks,\nAuthors", "Dear Reviewer3, \n\nFirst of all, we really appreciate your constructive review and detailed comments on our work. We would like to share our responses to the concerns raised by the reviewer.\n\n1. In particular, I am not aware of any published work that makes use of the (backwards) real-to-sim generator plus sim-only trained predictor for inference… \n=> We thank the reviewer for the constructive feedback. We agree that our backward approach has some advantages over the forward approach exactly for the reasons the reviewer mentioned.\n\n2-1. The paper should include Bousmalis et al…\n2-2. With respect to the feature consistency loss the paper should also discuss related work …\n=> We thank the reviewer for sharing with us the key reference on S+U learning and the works that proposed feature consistency. We added them in our revision.\n\n3. Adaptive Data Generation: I do not fully see the point in matching the distribution of parameters of the real world samples with the simulated data..\n=> We agree that having samples that are beyond the range of the real samples should not hurt the prediction performance. As the reviewer pointed out, the adaptive data generation part is for generating synthetic samples in a “sample-efficient” way. To see this, consider the following genie-aided experiments. The first label distribution of the simulator is set such that the mean and variance match those of the true label distribution, measured from the test set. 
The second one is set such that the mean is the same but the variance is twice as large as that of the true one. (Note that this also generates all possible labels.) We ran our algorithms with these label distributions, and observed that the first one achieves an error rate of “7.74” and the second one achieves “8.88”. As the reviewer expected, we believe that the gap will diminish as the dataset size grows.\n\n4. Limiting the parameters to those values of the real world data also seems rather restricting…\n=> If the real data provided in the training set is not representative enough, our approach may not be able to generalize well to unseen data. One may address this limitation by incorporating prior knowledge about the label distributions or by manually tuning the parameters. A thorough study is needed to understand how one could obtain diverse synthetic images via such methods in a systematic way. We added a remark on this in the revised draft.\n\n5. Additionally … the assumption that a predictor trained on simulated data only already works quite well on real data …\n=> Indeed, our adaptive data generation method assumes that the predictor trained on simulated data works quite well on real data. Hence, if the predictor trained solely on simulated data provides completely wrong pseudo-labels, matching the synthetic label distribution with the pseudo-label distribution may not be helpful at all. For instance, when we pseudo-labeled the images in the SVHN dataset using a digit classifier that is trained on the MNIST dataset, our first stage failed to refine the synthetic label distribution. It is an interesting open question whether or not one can devise a similar adaptive data generation method for such cases. We added a remark on this limitation in the revised draft.\n\n6. There is also no guarantee for the convergence of the cycle … However, how would you know when to stop …? \n=> Yes, the convergence is not guaranteed, and hence one needs to choose the right result to proceed with. Actually, we mentioned in the first manuscript that we made use of a small validation set (1% of the test data set) consisting of real data with labels to do so. For clarity, we also added the validation errors to the table. \n\n7. The use of the Hellinger distance is unclear to me since, as explained in the text, it does not correspond to what is being optimized.\n=> We used the Hellinger distance for quantifying the progress of the adaptive data generation algorithm. To clarify this, we added the sequence of means and variances to Table 1 instead of the Hellinger distance.\n\n8. The experiments should include a comparison to using the forward generator trained in this framework to train a predictor on “fake” real data and test it on real data (i.e. a line “ours | RS | R | ?” in Table 2, and a more direct comparison to Shrivastava)…. \n=> We added the performance corresponding to “ours | RS | R” to Table 2: It achieves a performance that is slightly better than that of SimGAN but worse than that of our back-generator.\n\n9. I don’t understand the choice for reduction of the sim parameters… \n=> Our internal experiments (not reported in the earlier draft) revealed that the choice of parameter reduction barely affects the performance of the algorithm. For clarification, in the revision (see appendix), we added experimental results comparing two different reduction methods. \n\n10. 
Figure 1 not referenced & A typo in the last equality & The Bibliography needs to be checked\n=> We fixed them.\n\nThanks,\nAuthors\n", "Dear Reviewer2, \n\nFirst of all, we really appreciate your constructive review and detailed comments on our work. We would like to share our responses to the concerns raised by the reviewer.\n\n1. The proposed method seems to be a direct combination of the CycleGAN and the S+U learning…. Without deeper insights into the proposed method, the novelty of this paper is not sufficient. \n\n=> While we fully agree that some components of our framework are largely inspired by the CycleGAN and the S+U learning, our framework has its own novelty as follows.\n1) It starts with a novel adaptive data generation process, which we observed to be necessary to achieve the state-of-the-art performances in our own experiments. To see this, we added the test errors to Table 1 in the revision. One can observe that one cannot achieve the state-of-the-art performances without having a “good” synthetic label distribution, which can be obtained by our adaptive data generation process.\n2) Our approach is different from the traditional S+U learning framework: In the original S+U learning framework, the synthetic data is mapped to the real data, and then a predictor is trained on the translated data. In our work, we do not train our predictors after we learn the bidirectional mapping: Instead, we simply map test images to the synthetic domain, and directly apply predictors, which are trained solely with the synthetic data set. \nOur approach has a two-fold advantage over the traditional framework. First, we observe that our backward approach can achieve an improved prediction performance. To see this, we also added the test error measured with the traditional forward mapping approach, which is worse than our backward mapping approach. Further, our approach also has a significant saving in terms of computation over the traditional S+U learning since one does not have to retrain predictors for each target domain. (Having one good predictor trained on the synthetic domain suffices!) \n\n2. When discussing CycleGAN, the authors claim that CycleGAN is not good at preserving the labels. However, it is not clear what the meaning of preserving labels is. It would be nice if the authors clearly define this notion and rigorously discuss why CycleGAN is insufficient to reach such a goal and why combining with S+U learning would help. \n=> According to the reviewer’s comment, we made the following changes in how we describe the label-loss problem and our approach to mitigate the problem. We added a subsection (Sec 2.1) where we define notations and introduce new terms. We also added references to several prior works, which proposed a similar concept called ‘content representation’ or ‘feature matching’ for other related tasks. Further, we would like to clarify that we are proposing the use of the “feature consistency loss” to address this challenge. \n\n3. This work seems closely related to S+U learning. It would be nice if the authors also summarize S+U learning in Section 2, in a similar way to how they summarize CycleGAN in Section 2.2. \n=> We have now added a more detailed description of the original S+U learning papers to Sec 2.4 in the revision.\n\n4. In Section 2.2, the authors claim that the Cycle-consistency loss in CycleGAN is not sufficient for label preservation. To improve, they propose to use the feature consistency loss. However, the final loss function also contains this cycle-consistency loss. 
\n=> First of all, we would like to clarify the following. In this work, our claim was that the cycle-consistency loss “alone” may not preserve labels well, and hence we propose to use it together with the “feature-consistency loss”. That is, we are proposing to use both of them. For clarification, we revised the relevant descriptions. \n\n5. Moreover, in the experiments, the authors indeed use the cycle-consistency loss by setting \\lambda_{cyc} = 10. But the feature consistency loss may not be used by setting \\lambda_{feature} = 0 or 0.5. From Table 2, it appears that using the feature-consistency loss does not have a significant effect on the performance. It would be nice to conduct more experiments to show the effect of adding the feature-consistency loss. Say, setting \\lambda_{cyc} = 0 and trying different values of \\lambda_{feature}. Otherwise it is unclear whether the feature-consistency loss is necessary. \n=> According to the reviewer’s comment, we included additional experimental results to see the roles of \\lambda_{feature} and \\lambda_{cycle}. More specifically, we ran our algorithm with different combinations of \\lambda_{feature} \\in {0, 0.1, 0.5, 1.0} and \\lambda_{cycle} \\in {0,1,5,10,50}. As a result, we observe that setting “\\lambda_{feature} = 0.5, \\lambda_{cycle} = 10” achieved the best performance among the tested cases, proving the necessity of using both consistency terms. For details, see the experimental results in the appendix of the revision.\n\nThanks,\nAuthors" ]
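To make the objective debated in this thread concrete, here is a minimal sketch of a combined CycleGAN-style loss with both cycle-consistency and feature-consistency terms, using the weights the authors report as best (lambda_cyc = 10, lambda_feature = 0.5). The module names (G_f, G_b, D_real, D_sim, F) and the exact distance functions are illustrative assumptions, not the authors' code; the comment in gan_loss spells out the perfect-discriminator identity from the reply to Reviewer 1.

```python
import torch

def gan_loss(d_real_out, d_fake_out, eps=1e-8):
    # Standard GAN objective E[log D(y)] + E[log(1 - D(G(x)))]. With a perfect
    # discriminator (d_real_out = 1, d_fake_out = 0) this evaluates to
    # log(1) + log(1 - 0) = 0, the identity used in the authors' reply.
    return torch.mean(torch.log(d_real_out + eps)) \
         + torch.mean(torch.log(1.0 - d_fake_out + eps))

def total_loss(x_sim, y_real, G_f, G_b, D_real, D_sim, F,
               lam_cyc=10.0, lam_feat=0.5):
    fake_real, fake_sim = G_f(x_sim), G_b(y_real)
    # One adversarial term per mapping direction (sim->real and real->sim).
    l_gan = gan_loss(D_real(y_real), D_real(fake_real)) \
          + gan_loss(D_sim(x_sim), D_sim(fake_sim))
    # Cycle consistency: translating forth and back should recover the input.
    l_cyc = torch.mean(torch.abs(G_b(fake_real) - x_sim)) \
          + torch.mean(torch.abs(G_f(fake_sim) - y_real))
    # Feature consistency: translation should preserve the activations of a
    # fixed feature extractor F, the label-preservation mechanism discussed.
    l_feat = torch.mean((F(fake_real) - F(x_sim)) ** 2) \
           + torch.mean((F(fake_sim) - F(y_real)) ** 2)
    return l_gan + lam_cyc * l_cyc + lam_feat * l_feat
```

Setting lam_feat = 0 recovers a plain CycleGAN objective, while setting lam_cyc = 0 and varying lam_feat is exactly the ablation the reviewer asks for.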
[ 6, 6, 3, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SkHDoG-Cb", "iclr_2018_SkHDoG-Cb", "iclr_2018_SkHDoG-Cb", "iclr_2018_SkHDoG-Cb", "S1uLIj8lG", "BJ__hY9lz", "BJ7oBjolf" ]
iclr_2018_ryH20GbRW
Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions
Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions, this causal knowledge must be learned without access to supervised data. To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object pairs and learn efficiently. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.
accepted-poster-papers
All three reviewers recommend acceptance. The authors did a good job in the rebuttal, which swayed the first reviewer to increase the final rating. This is a clear accept.
train
[ "S1OZaPI4z", "BkApAXalG", "ByDGdV9ef", "B1USI22xz", "Skl0rrTmz", "HyR2QHKzM", "B18VfrYMM", "HkrHQHFMf", "ryE0frKzM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The rebuttal and revision addressed enough of my concerns for me to increase the score to 8. \nGood work on the additional experiments and the discussion of limitations in the conclusion!", "Summary:\nThe manuscript extends the Neural Expectation Maximization framework by integrating an interaction function that allows asymmetric pairwise effects between objects. The network is demonstrated to learn compositional object representations which group together pixels, optimizing a predictive coding objective. The effectiveness of the approach is demonstrated on bouncing balls sequences and gameplay videos from Space Invaders. The proposed R-NEM model generalizes\n\nReview:\nVery interesting work and the proposed approach is well explained. The experimental section could be improved.\nI have a few questions/comments:\n1) Some limitations could have been discussed, e.g. how would the model perform on sequences involving more complicated deformations of objects than in the Space Invaders experiment? As you always take the first frame of the 4-frame stacks in the data set, do the objects deform at all?\n2) It would have been interesting to vary K, e.g. study the behaviour for K in {1,5,10,25,50}. In Space Invaders the model would probably really group together separate objects. What happens if you train with K=8 on sequences of 4 balls and then run on 8-ball sequences instead of providing (approximately) the right number of components both at training and test time (in the extrapolation experiment).\n3) One work that should be mentioned in the related work section is Michalski et al. (2014), which also uses noise and predictive coding to model sequences of bouncing balls and NORBvideos. Their model uses a factorization that also discovers relations between components of the frames, but in contrast to R-NEM the components overlap.\n4) A quantitative evaluation of the bouncing balls with curtain and Space Invaders experiments would be useful for comparison.\n5) I think the hyperparameters of the RNN and LSTM are missing from the manuscript. Did you perform any hyperparameter optimization on these models?\n6) Stronger baselines would improve the experimental section, maybe Seo et al (2016). Alternatively, you could train the model on Moving MNIST (Srivastava et al., 2015) and compare with other published results.\n\nI would consider increasing the score, if at least some of the above points are sufficiently addressed.\n\nReferences:\nMichalski, Vincent, Roland Memisevic, and Kishore Konda. \"Modeling deep temporal dependencies with recurrent grammar cells\"\".\" In Advances in neural information processing systems, pp. 1925-1933. 2014.\nSeo, Youngjoo, Michaël Defferrard, Pierre Vandergheynst, and Xavier Bresson. \"Structured sequence modeling with graph convolutional recurrent networks.\" arXiv preprint arXiv:1612.07659 (2016).\nSrivastava, Nitish, Elman Mansimov, and Ruslan Salakhudinov. \"Unsupervised learning of video representations using lstms.\" In International Conference on Machine Learning, pp. 843-852. 2015.", "Summary\n---\nThis work applies a representaion learning technique that segments entities to learn simple 2d intuitive physics without per-entity supervision. It adds a relational mechanism to Neural Expectation Maximization and shows that this mechanism provides a better simulation of bouncing balls in a synthetic environment.\n\nNeural Expectation Maximization (NEM) decomposes an image into K latent variables (vectors of reals) theta_k. 
A decoder network reconstructs K images from each of these latent variables and these K images are combined into a single reconstruction using pixel-wise mixture components that place more weight on pixels that match the ground truth. An encoder network f_enc() then updates the latent variables to better explain the reconstructions they produced.\nThe neural nets are learned so that the latent variables reconstruct the image well when used by the mixture model and match a prior otherwise. Previously NEM has been shown to learn variables which represent individual objects (simple shapes) in a compositional manner, using one variable per object.\n\nOther recent neural models can learn to simulate simple 2d physics environments (balls bouncing around in a 2d plane). That work supervises the representation for each entity (ball) explicitly using states (e.g. position and velocity of balls) which are known from the physics simulator used to generate the training data. The key feature of these models is the use of a pairwise embedding of an object and its neighbors (message passing) to predict the object's next state in the simulation.\n\nThis paper combines the two methods to create Relational Neural Expectation Maximization (R-NEM), allowing direct interaction at inference time between the latent variables that encode a scene. The encoder network from NEM can be seen as a recurrent network which takes one latent variable theta_k at time t and some input x to produce the next latent variable theta_k at time t+1. R-NEM adds a relational module which computes an embedding used as a third input to the recurrent encoder. Like previous relational models, this one uses a pairwise embedding of the object being updated (object k) and its neighbors. Unlike previous neural physics models, R-NEM uses a soft attention mechanism to determine which objects are neighbors and which are not. Also unlike previous neural models, this method does not require per-object supervision.\n\nExperiments show that R-NEM learns compositional representations that support intuitive physics more effectively than ablative baselines. These experiments show:\n1) R-NEM reconstructs images more accurately than baselines (RNN/LSTM) and NEM (without object interaction).\n2) R-NEM is trained with 4 objects per image. It does a bit worse at reconstructing images with 6-8 objects per image, but still performs better than baselines.\n3) A version of R-NEM without neighborhood attention in the relation module matches the performance of R-NEM using 4 objects and performs worse than R-NEM at 6-8 objects.\n4) R-NEM learns representations which factorize into one latent variable per object as measured by the Adjusted Rand Index, which compares NEM's pixel clustering to a ground truth clustering with one cluster per object.\n5) Qualitative and quantitative results show that R-NEM can simulate 2d ball physics for many time steps more effectively than an RNN, while only suffering gradual divergence from the ground truth simulation.\n\nQualitative results show that the attentional mechanism attends to objects which are close to the context object, acting like the heuristic neighborhood mechanism from previous work.\n\nFollow-up experiments extend the basic setup significantly. One experiment shows that R-NEM demonstrates object permanence by correctly tracking a collision when one of the objects is completely occluded. 
Another experiment applies the method to the Space Invaders Atari game, showing that it treats columns of aliens as entities. This representation aligns with the game's goal.\n\n\nStrengths\n---\n\nThe paper presents a clear, convincing, and well-illustrated story.\n\nWeaknesses\n---\n\n* RNN-EM BCE results are missing from the simulation plot (right of figure 4).\n\nMinor comments/concerns:\n\n* 2nd paragraph in section 4: Are parameters shared between these 3 MLPs (enc, emb, eff)? I guess not, but this is ambiguous.\n\n* When R-NEM is tested against 6-8 balls is K set to the number of balls plus 1? How does performance vary with the number of objects?\n\n* Previous methods report performance across simulations of a variety of physical phenomena (e.g., see \"Visual Interaction Networks\"). It seems that supervision isn't needed for bouncing ball physics, but I wonder if this is the case for other kinds of phenomena (e.g., springs in the VIN paper). Can this method eliminate the need for per-entity supervision in this domain?\n\n* A follow-up to the previous comment: Could a supervised baseline that uses per-entity state supervision and neural message passing (like the NPE from Chang et al.) be included?\n\n* It's a bit hard to qualitatively judge the quality of the simulations without videos to look at. Could videos of simulations be uploaded (e.g., via an anonymous Google Drive folder as in \"Visual Interaction Networks\")?\n\n* This uses a neural message passing mechanism like those of Chang et al. and Battaglia et al. It would be nice to see a citation to neural message passing outside of the physics simulation domain (e.g. to \"Neural Message Passing for Quantum Chemistry\" by Gilmer et al. in ICML17).\n\n* Some work uses neighborhood attention coefficients for neural message passing. It would be nice to see a citation included.\n * See \"Neighborhood Attention\" in \"One-Shot Imitation Learning\" by Duan et al. in NIPS17\n * Also see \"Programmable Agents\" by Denil et al.\n\n\nFinal Evaluation\n---\n\nThis paper clearly advances the body of work on neural intuitive physics by incorporating the NEM entity representation to allow for less supervision. Alternatively, it adds a message passing mechanism to the NEM entity representation technique. These are moderately novel contributions and there are only minor weaknesses, so this is a clear accept.", "This was a pretty interesting read. Thanks. A few comments: \n\nOne sentence that got me confused is this: “Unlike other work (Battaglia et al., 2016) this function is not commutative and we opt for a clear separation between the focus object k and the context object i as in previous work (Chang et al., 2016)”. What exactly is not commutative? The formulation seems completely aligned with the work of Battaglia et al., with the difference that one additionally has an attention on which edges should be considered (attention on effects). What is the difference to Battaglia et al. that this should highlight? \n\nI don’t think it is very explicit what K is in the experiments with bouncing balls. Is it 5 in all of them? When running with 6-8 balls, how are balls grouped together to form just 5 objects? \n\nIs there any chance of releasing the code/data used in these experiments? 
", "We have updated the draft of the paper to reflect the comments of the reviewers and incorporated the following changes:\n\n- Results for R-NEM with K=8 on the bouncing balls task\n- Loss comparison of R-NEM to RNN and LSTM on the curtain task\n- Link to videos of all balls experiments: https://sites.google.com/view/r-nem-gifs\n- Clarification of experimental set-up\n- Simulation results of RNN-EM to figure 4\n- Discussion on the limitations of our approach\n- Suggested references\n- General changes to improve readability\n\nWe were unable to incorporate a stronger unsupervised baseline (PredNet) as of yet. This is still something that we are looking into.", "We updated the title of our paper to better reflect the nature of our approach", "Thank you for the careful consideration of our paper and for the useful feedback. Regarding your comments:\n\n1) We agree that adding a general discussion on the limitations of our approach is valuable (and currently lacking) and we intend to include this in the new draft. In response to your specific examples: we expect the performance of R-NEM to be similar in regard to N-EM when confronted with high variability within an object class. In N-EM when trained on sequences of moving MNIST digits (that highly vary, since each MNIST digit is unique) it is more difficult to group pixels into coherent objects (digits in this case) as all variation needs to be captured. However, once pixels have been clustered to belong to an object, and its deformation is consistent/predictable R-NEM should be able to accurately capture it (as is for example the case for the occlusion experiment). If it is not consistent/predictable as is for example the case in Atari due to randomness and deformation due to down-sampling, then R-NEM will continuously try to adjust its predictions as a function of feedback from the environment (eg. by means of the masked difference between prediction and reality that is fed into the system) .\n\n2) We have tried a range of values for K={4,5,6,7,8} on Atari, before settling onto K=4. Since the Aliens move together they are mostly grouped together into a single component, which leaves only bullets / shields and the space-ship as remaining objects. The bullets from the Aliens can not be predicted and so usually one of the alien columns takes care of it. Bullets from the Spaceship can be predicted (as we feed in the action) and for this a remaining group does help. It should be noted that (similar to N-EM) very large values of K > 9-10 cause instabilities during training as there may be too many components competing for the same pixel in the E-step. We have therefore not explored extreme cases of K=25/50 further. We are in the process of training R-NEM with K=8 on the balls 4 dataset to provide further insight into its behavior of 4-balls and 678-balls.\n\n3) Thank you, we’ve missed this relevant work and will include it in the next draft.\n\n4) We are in the process of training RNN / LSTM models on the bouncing balls with curtain task to provide a quantitative evaluation. With regard to a quantitative evaluation on Atari we feel that it would take an unjustified/disproportionate amount of effort and computational time to provide a comparison that carries any value. In particular, since we have not studied the performance of RNN / LSTM on this domain previously, we would need to perform a general search to ensure that our baseline isn’t just underfitted. 
\n\n5) We use the same encoder / decoder architecture for the LSTM / RNN variations, which also receive the difference between the prediction from the previous timestep and the current frame as input. We also use the same layer size (250 units). This is reported in the Appendix. Some experiments that we ran but did not report involve adding more units (500) for the LSTM / RNN, for which we did not observe significant differences in performance. Moreover, we experimented with an RHN, but were unable to improve upon the LSTM and therefore left it out of the final comparison. We have tried several deeper/wider variations of the interaction function and settled on the smallest architecture that was able to achieve the reported performance. This last observation is mentioned in the Appendix.\n\n6) We are happy to incorporate stronger baselines in our quantitative evaluation of R-NEM. However, we are unsure whether Seo et al. (2016) / Srivastava et al. (2015) are suitable. Both approaches are encoder / decoder architectures that first encode a sequence of time-steps in an LSTM, to then use one (or multiple) decoder LSTMs initialized with the encoded state to reconstruct the input sequence in reverse, or predict future time-steps. Since our model is trained with next-step prediction, this reduces the decoder LSTM of such an approach to a simple feedforward decoder. Moreover, we are only computing gradients from next-step prediction, eliminating the reverse decoder LSTM from such an approach, such that the model is essentially reduced to a standard LSTM, which we do compare to. In the case of Seo et al. (2016) the only addition would be to use a graph-convolutional LSTM instead. Neither of these seems much stronger than our LSTM baseline on the balls environments. Instead we would like to propose to use PredNet (Lotter et al., https://arxiv.org/pdf/1605.08104.pdf) as an alternative to your suggestions. In response to your suggestion to evaluate on the moving MNIST dataset we would like to point out that this dataset does not involve any interactions between the digits and is therefore unsuitable for evaluating the impact of our interaction function. ", "Thank you for the careful consideration of our paper and for the useful feedback. Regarding your comments:\n\n- We will incorporate the RNN-EM BCE results in the simulation plot (right of figure 4).\n- Videos of the performance of R-NEM (compared to the RNN) on all balls tasks are available at https://sites.google.com/view/r-nem-gifs/home \n- We will incorporate related work on neural message passing as you suggested\n\nIndeed, the parameters between the 3 MLPs (enc, emb, eff) are not shared and we will clarify this in the text.\n\nWhen R-NEM is tested against 6-8 balls we set K to 8. We have tried K=9 also and obtained similar results. In general we observe (in line with the findings in the Neural Expectation Maximization paper) that increasing K benefits performance (independent of the number of balls) at training and at test time. Very large values of K > 9-10 cause instabilities during training as there may be too many components competing for the same pixel in the E-step. Choosing K < # objects hinders performance, as expected. We are in the process of training R-NEM with K=8 on the balls 4 dataset to provide further insight into this. \n\nWe expect R-NEM to be able to fully eliminate the need for per-entity state supervision in various other domains that revolve around interactions between entities. 
The interaction function incorporated in N-EM is not restricted to local interaction and therefore there is no apparent reason for it not to be able to handle springs. \n\nAlthough we are happy to include a supervised baseline, we have not yet been able to come up with a fair comparison measure. None of the supervised methods reconstructs in pixel-space and therefore comparing in terms of BCE would put R-NEM at a significant disadvantage. Being unable to disentangle error due to poorly modeling physical dynamics (including interactions) from error due to poor visual reconstruction only allows for comparing to approaches that reconstruct in pixel-space. We are currently looking into incorporating a stronger unsupervised baseline in the form of PredNet (https://arxiv.org/pdf/1605.08104.pdf). ", "Thank you for the careful consideration of our paper and for the useful feedback. Regarding your comments:\n\nWe were mistaken: we thought that the interaction function in Battaglia et al. (2016) was commutative, in that it did not distinguish a sender and a receiver in computing interactions between objects. However, upon reviewing their work again, it turns out that that segment (https://arxiv.org/pdf/1612.00222.pdf, page 3 - bottom) only referred to the ordering of the arguments in the a-function. We removed this part in the new draft.\n\nThe number of components is K=5 for the 4 balls experiment, K=8 for the 678 balls experiment, and K=5 for the occlusion experiment. In general we observe (in line with the findings in the Neural Expectation Maximization paper) that increasing K benefits performance (independent of the number of balls) at training and at test time. Very large values of K > 9-10 cause instabilities during training as there may be too many components competing for the same pixel in the E-step. Choosing K < # objects hinders performance, as expected. We are in the process of training R-NEM with K=8 on the balls 4 dataset to provide further insight into this. \n\nOur current codebase is an adaptation of the code provided by the authors of the Neural Expectation Maximization paper (found here: https://github.com/sjoerdvansteenkiste/Neural-EM). We will release our adaptation of this code, including all datasets, upon publication. " ]
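For readers following the R-NEM exchange above, here is a minimal sketch of the kind of ordered pairwise interaction function with attention that the reviews describe: the focus object k and each context object i are embedded as an ordered pair, and attention-weighted effects are summed per focus object. The layer sizes and the use of plain MLPs are illustrative assumptions and are not claimed to match the authors' released code.

```python
import torch
import torch.nn as nn

class RelationalInteraction(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp_enc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())      # per-object encoding
        self.mlp_emb = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # ordered pair embedding
        self.mlp_eff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())      # effect of i on k
        self.mlp_att = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())     # attention coefficient

    def forward(self, theta):            # theta: (K, dim), one latent per component
        enc = self.mlp_enc(theta)
        K = enc.shape[0]
        effects = []
        for k in range(K):
            total = torch.zeros_like(enc[k])
            for i in range(K):
                if i == k:
                    continue
                # The pair is ordered (focus object k first, context object i
                # second), so the function need not be commutative.
                pair = self.mlp_emb(torch.cat([enc[k], enc[i]]))
                total = total + self.mlp_att(pair) * self.mlp_eff(pair)
            effects.append(total)
        # One aggregated effect per focus object, which would serve as the
        # extra input to the recurrent encoder described in the review.
        return torch.stack(effects)
```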
[ -1, 8, 7, 7, -1, -1, -1, -1, -1 ]
[ -1, 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "B18VfrYMM", "iclr_2018_ryH20GbRW", "iclr_2018_ryH20GbRW", "iclr_2018_ryH20GbRW", "iclr_2018_ryH20GbRW", "iclr_2018_ryH20GbRW", "BkApAXalG", "ByDGdV9ef", "B1USI22xz" ]
iclr_2018_HkCsm6lRb
Generative Models of Visually Grounded Imagination
It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way. We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality (the 3 C’s). Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods (the JMVAE method of Suzuki et al., 2017 and the BiVCCA method of Wang et al., 2016) by applying them to two datasets: the MNIST-with-attributes dataset (which we introduce here), and the CelebA dataset (Liu et al., 2015).
accepted-poster-papers
All three reviewers recommend acceptance. Good work, accept
train
[ "BJDxbMvez", "S1UuHvwgf", "rky3x-5lG", "HyzaUOvGz", "HygQU_DMM", "B1jeeYPMG", "HkXT0_Pzf", "HkvKuGGfG", "r1W-W7fMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public" ]
[ "The authors propose a generative method that can produce images along a hierarchy of specificity, i.e. both when all relevant attributes are specified, and when some are left undefined, creating a more abstract generation task. \n\nPros:\n+ The results demonstrating the method's ability to generate results for (1) abstract and (2) novel/unseen attribute descriptions, are generally convincing. Both quantitative and qualitative results are provided. \n+ The paper is fairly clear.\n\nCons:\n- It is unclear how to judge diversity qualitatively, e.g. in Fig. 4(b).\n- Fig. 5 could be more convincing; \"bushy eyebrows\" is a difficult attribute to judge, and in the abstract generation when that is the only attribute specified, it is not clear how good the results are.\n", "This paper presented a multi-modal extension of variational autoencoder (VAE) for the task \"visually grounded imagination.\" In this task, the model learns a joint embedding of the images and the attributes. The proposed model is novel but incremental comparing to existing frameworks. The author also introduced new evaluation metrics to evaluate the model performance concerning correctness, coverage, and compositionality. \n\nPros:\n1. The paper is well-written, and the contribution (both the model and the evaluation metric) potentially can to be very useful in the community. \n2. The discussion comparing the related work/baseline methods is insightful. \n3. The proposed model addresses many important problems, such as attribute learning, disentanged representation learning, learning with missing values, and proper evaluation methods. \n\nCons/questions:\n1. The motivation of the model choice of q is not clear.\nComparing to BiVCCA, apart from the differences that the author discussed, a big difference is the choice of q. BiVCCA uses two inference networks q(z|x) and q(z|y), while the proposed method uses three. q(z|x), q(z|y), and q(z|x,y). How does such model choice affect the final performance? \n\n2. Baselines are not necessarily sufficient. \nThe paper compared the vanilla version of BiVCCA but not the one with factorized representation version. In the original VAECCA paper, the extension of using factorized representation (private and shared) improved the performance]. The author should also compare this extension of VAECCA.\n\n3. Some details are not clear. \na) How to set/learn the scaling parameter \\lambda_y and \\beta_y? If it is set as hyper-parameter, how does the performance change concerning them? \nb) Discussion of the experimental results is not sufficient. For example, why JMVAE performs much better than the proposed model when all attributes are given. What is the conclusion from Figure 4(b)? The JMVAE seems to generate more diverse (better coverage) results which are not consistent with the claims in the related work. The same applies to figure 5. \n", "The paper proposes a method for generating images from attributes. The core idea is to learn a shared latent space for images and attributes with variational auto-encoder using paired samples, and additionally learn individual inference networks from images or attributes to the latent space using unpaired samples. During training the auto-encoder is trained on paired data (image, attribute) whereas during testing one uses the unpaired data to generate an image corresponding to an attribute or vice versa. 
The authors propose handling missing data using a product of experts where the product is taken over the available attributes, and it sharpens the prior distribution. The authors evaluate their method using correctness, i.e. whether the generated images have the desired attributes, coverage, i.e. whether the generated images sample unspecified attributes well, and compositionality, i.e. whether images can be generated from unseen attributes. Although the proposed method performs slightly worse compared to JMVAE in terms of correctness when all attributes are provided, it outperforms JMVAE when some of the attributes are missing (Figure 4a). It also outperforms existing methods in terms of coverage and compositionality.\n\nMajor comments:\n\nThe paper is well written, and summarizes its contribution succinctly.\n\nI did not fully understand the 'retrofitting' idea. If I understood correctly, the authors first train \\theta and \\phi and then fix \\theta to train \\phi_x and \\phi_y. If that is true, then is \\mathcal{L}(\\theta, \\phi, \\phi_x, \\phi_y) the right cost function, since one does not maximize all three ELBO terms when optimizing \\theta? Please clarify.\n\nMinor comments:\n\n- 'in order of increasing abstraction', does the order of gender -> smiling or not -> hair color matter? Or, is male, *, blackhair a valid option?\n\n- What are the image sizes for the CelebA dataset?\n\n- page 5: double the\n\n- Which multi-label classifier is used to classify images into attributes?", "We thank the reviewer for the encouraging feedback on the paper. Below we provide clarifications to specific concerns.\n\n1. Clarifications about “retrofitting”:\nAs explained on Page. 3, last paragraph, and by the reviewer, the retrofitting idea is that we first train \\theta and \\phi and then fix \\theta to train \\phi_x and \\phi_y. As explained in the paper, we just optimize the first term in the cost function with respect to \\theta and fix \\theta_x and \\theta_y when training the last two ELBO terms respectively. This is what we refer to as ‘retrofitting’. To clarify things further, we write the cost in terms of L(\\theta_x, \\theta_y, \\phi_x, \\phi_y, \\phi) to clarify which parameters get used in each of the ELBO terms. However, conceptually, we still retain the notation specifying the overall cost using \\theta for succinctness, and for the possibility of performing semi-supervised learning with the objective (footnote, Page. 4), which is beyond the scope of the current paper.\n\n2. Order of attributes:\nThanks a lot for raising this point! We have updated the paper with clarifications (Page. 1) explaining that the order of attributes does not matter.\n\n3. Image sizes for CelebA:\nThe updated version of the paper contains the image sizes used for both MNIST-A and CelebA. The sizes used are the same across both datasets, namely 64x64.\n\n4. Details of the multi-label classifier:\nPage. 15 provides more details of the multi-label classifier used for evaluating imagination on the MNIST-A dataset. The multi-label classifier is a 2-layer convolutional neural network with four heads, where each head is a 1-hidden-layer MLP trained end-to-end to optimize the log-likelihood of the correct attribute label for an image.", "We thank the reviewers for the thoughtful feedback and are encouraged that the reviewers thought the paper was well written and clear (R1, R2, R3), addresses many important problems (R1), and makes useful contributions (R1).\n\nWe also thank the community for providing public comments on the reproducibility of our work. 
We are glad that our conclusions were found to be reproducible and well substantiated, and will incorporate specific feedback for improving reproducibility.\n\nWe address the major points raised by the reviewers in an updated version of the paper. The key text to look at is highlighted in blue. Below we provide a summary of the major changes.\n\n1. (R1, R2) Better explanation of qualitative results for both MNIST-A and CelebA:\nThe new version provides a clearer caption to Fig. 4 b) explaining the qualitative results for MNIST-A and adds new text on Page. 9 explaining the qualitative results on CelebA. In addition, we show more qualitative results in Appendix A.9 (Page. 19, Figure 13) comparing TELBO and JMVAE in terms of diversity.\n\n2. (R1) Explanation of hyperparameter choices:\nThanks for raising this point. The new version adds a discussion of how the hyperparameter choices for different objectives change performance in Appendix A.6.\n\n3. (R1) Clarification on why JMVAE generates more correct samples:\nPage. 6 explains the tradeoff between correctness and coverage implicit in the choices of the JMVAE vs. TELBO objectives. In particular, while the earlier version explained why TELBO has better coverage, this version also adds the explanation for why JMVAE has better correctness for seen concepts.\n\n4. (R3) Clarification on the order of attributes being important vs. not:\nPage. 1 adds text to clarify that the order in which the attributes are dropped is not important and that the tree shown in Fig. 1 is just a subset of the overall DAG.\n\n5. (R3) Details of the multi-label classifier used for evaluation:\nPage. 15 provides more details of the multi-label classifier used for evaluating imagination on the MNIST-A dataset.\n\n6. We add more clarifications explaining how we pick hyperparameters (Page. 4, Page. 7), typical training times for our models (Page. 7), and provide the size of the images we generate (Page. 7) (R3).\n\nAs mentioned earlier, all the changes above can be conveniently located by following text highlighted in blue across the paper. We elaborate further on these points and address specific reviewer concerns in individual replies.", "We are glad the reviewer thought that our paper was clear and has convincing results. Below we provide some clarifications to address the cons raised by the reviewer:\n\n1. How to judge diversity in Fig. 4 b):\nThe new version provides a clearer caption to Fig. 4 b) explaining the qualitative results for MNIST-A. Hopefully this should make judging diversity easier.\n\n2. Explanation for Fig. 5:\nWe have updated the text on Page. 9 to better explain the qualitative results for CelebA, in addition to further examples in Appendix A.9.\n\n Further, we would like to point out a subtle distinction between attributes which are ‘absent’ and attributes which are ‘unspecified’. ‘Absence’ means that the value of an attribute is set to 0, while unspecified means that we don’t know whether the attribute should be set (1) or unset (0). Thus, for Fig. 5, there are a total of 16 attributes specified (15 unset and 1 set) (for the most abstract generation example, male=*, smiling=*), as opposed to just ‘bushy eyebrows’ being specified (which is one of the premises of the reviewer’s concern).\n", "We are glad the reviewer thought that our contribution would be useful to the community and that our work addresses important problems. Below we address specific cons/questions raised by the reviewer:\n\n1. 
The motivation for the model choice of q:\nIn general, when there is information asymmetry in the two modalities (i.e. one instance in modality y corresponds to many instances in modality x), we believe it is beneficial to learn with a joint inference network and retrofit (like TELBO does) as opposed to trying to explain both modalities using the unimodal inference network (say for q(z| y)) where I(y) < I(x), since it is difficult for the latent variable in this case to explain 'all' the instances of the second (information-rich) modality.\n\n2. Comparison to private-VCCA:\nA direct comparison to private-VCCA (from Wang et al. (VAECCA)) cannot be made since the private-VCCA model does not have a mechanism to do ‘imagination’. Concretely, the private-VCCA model cannot be applied for multimodal translation, i.e. going from x to y or vice versa, since in this model, p(x| h_x, z) needs access to h_x and z, while the inference networks that condition on 'y' are just q(h_y| y) and q(z| y), meaning there is no path to go from ‘y’ to ‘x’, which is needed for conditional generation.\n We implemented a modified version of private-VCCA and /extended/ it to the BiVCCA setting (i.e. learning two inference networks) to obtain a version that /can/ do conditional generation. We call this model imagination-private-BiVCCA. Concretely, we learn three inference networks per modality: q(h_x| y), q(h_y| y) and q(z| y). Now, we see that given ‘y’, we can sample h_x and z, and thus feed them through p(x| h_x, z) to generate images. Similar to Wang et al., we regularize the latent space in this model using dropout, searching for values in the range {0, 0.2, 0.4}. As with the BiVCCA model we present in this paper, we also search for \\mu in the range {0.3, 0.5, 0.7}. We find that this approach does about as well as the best BiVCCA method we already report in the paper and is still substantially worse than JMVAE or TELBO.\n\n3. Details of the hyperparameters, \\lambda_y and \\beta_y:\nFirstly, we clarified notation, replacing \\beta_y with \\gamma in the revised version. We have added details of how performance changes based on different hyperparameters for all the objective functions in appendix A.6. Concretely, the important parameter to tweak for TELBO is \\lambda_y, with \\gamma being less important. For JMVAE, both \\lambda_y and \\alpha turn out to be fairly important, while for BiVCCA, \\lambda_y is important while the choice of \\mu is marginally important.\n\n4. Why is JMVAE better at correctness when all attributes are specified? \nPage. 6 explains the tradeoff between correctness and coverage implicit in the choices of the JMVAE vs. TELBO objectives. Briefly, our initial submission showed in appendix A.1 that JMVAE fits an inference network optimizing the KL divergence between the aggregate posterior (for the set of images given a label y_i) and the unimodal inference network q(z| y_i), to ‘cover’ the aggregate posterior. This ensures that the samples from q(z| y_i) are better tuned to work with p(x| z), since the elbo(x, y) term has a likelihood function p_{\\theta_x}(x| z) which is fed samples from q(z| x, y), whose aggregate posterior q(z| y_i) tries to match. In contrast, TELBO regularizes q(z| y_i) to be ‘close’ to p(z), which leads to better coverage, but does not lead naturally to the expectation that we would achieve better correctness than a model like JMVAE, which in some sense has a tighter coupling between the likelihood and the q(z| y) terms. 
If the aggregate posterior matches the prior better, the gap in correctness between JMVAE and TELBO would shrink. Making this happen is an active research area in variational inference (see [A], [B]).\n\n5. Clarifications about qualitative results:\nThe new version provides a clearer caption to Fig. 4 b) explaining the qualitative results for MNIST-A and adds new text on Page. 9 explaining the qualitative results on CelebA. In addition, we show more qualitative results in Appendix A.9 (Page. 18, Figure 13) discussing results comparing TELBO and JMVAE in terms of diversity.\n\nReferences:\n[A]: Tomczak, Jakub M., and Max Welling. 2017. “VAE with a VampPrior.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1705.07120.\n[B]: Hoffman, Matthew D., and Matthew J. Johnson. 2016. “Elbo Surgery: Yet Another Way to Carve up the Variational Evidence Lower Bound.” In Workshop in Advances in Approximate Bayesian Inference, NIPS. http://approximateinference.org/accepted/HoffmanJohnson2016.pdf.", "The challenge that this paper is addressing is to generate images from novel abstract concepts and compositionally novel concrete concepts, input as a set of attribute descriptions. The authors propose a product-of-experts inference network using a joint variational autoencoder model, with a new objective called the triple evidence lower bound, or TELBO. It is claimed in the paper that this objective performs on par with the same architecture with the JMVAE objective and significantly outperforms one with the BiVCCA objective in correctness. Also, it is shown in the paper that TELBO outperforms both baselines when the given attributes in an image description are not fully specified (coverage and compositionality).\nIn an effort to reproduce the results of this paper and confirm the claims, our team worked on a project aiming to replicate the methodology of the authors. In this process, we found the paper well-written and clear. The source code is publicly available on Github, which facilitates the reproduction of the models. While the MNIST-A dataset used in the paper is proprietary to Google, the provided datasets are very similar but not identical to the ones that the authors used. Furthermore, the details of the infrastructure and computational setup were not mentioned in the paper.\nIn our project, we decided to narrow the scope by focusing on reproducing the results of the paper for the i.i.d. MNIST-A dataset, using the source code provided by the authors. We ran our experiments on a Google Compute Engine, using one Nvidia Tesla P100 accelerator. The hyperparameter set that gives the best performance is missing. Our test results bear a significant resemblance to the results in the paper. The average percentage difference between our and their results is 1.18%. This is a small difference, therefore confirming the methodology and results presented in this paper. \nThe code implements the architecture and the objective functions described in the paper. Also, our quantitative results match nicely with those in the paper. However, we noticed a misalignment in the qualitative results, namely between the reproduced images and the images in the paper. Our reproduced images are not found in the paper, creating some confusion. Therefore, we were not able to obtain a full understanding of the process that transforms the raw images we have into the findings presented in the paper. 
We believe that with more clarification, we could do more qualitative analysis on the output images.\nA key difference between our experiments and those of the authors is the number of steps that we used. We ran TELBO and JMVAE for 250,000 steps, and BiVCCA for 50,000 steps, while the authors trained their models for 500,000 steps. The similarity in coverage and correctness implies that 250,000 steps is enough to train a well-performing model. The additional steps in the authors' experiments did not lead to overfitting, but they did not make a significant improvement either. Moreover, the authors noted in their paper that the models do not tend to overfit in their experiments. Indeed, our reproduced results verify this claim.\nOverall, the process of reproducing this paper's results is straightforward. Feasibility of the experiments largely depends on the time and hardware setup. Given more time and resources, we believe that we could reproduce additional experiments such as examining different hyperparameter settings and running the models on the CelebA dataset.\n\nYou can find our paper at this address: https://goo.gl/kscwAA", "The paper focuses on the ability to create diverse images corresponding to novel concrete or partially specified (abstract) concepts, on both familiar and unseen concepts. The authors define a set of simple evaluation metrics for assessing a thorough understanding of concepts by the generative model. The evaluation metrics introduced by the authors present a new way of evaluating the deep understanding of concepts for generative models in the case of abstract or unseen queries. The learning is carried out through a joint embedding of images and attributes. The proposed method presents a new objective function that takes into account two additional unpaired inference networks. The authors have also proposed a methodology to handle missing data, through a product of experts. The results show that the proposed method performs better than all other methods on abstract queries and the baseline JMVAE method performs better than other methods on unseen queries. The experiments are conducted on different joint multimodal variational auto-encoder models differentiated by their objective functions, namely BiVCCA, JMVAE and TELBO.\n\nHaving access to the entirety of the code, we have reproduced one of the baseline methods presented in the paper, the JMVAE method proposed by Suzuki et al. (2017), against which the proposed method is compared. This method has been reproduced against a fairly similar version of the MNIST-A dataset used in the paper, on both an iid and a compositional split. The error on the compositional split was ~1%. The error on the coverage ranges from ~0.5 to 4%. The error on the correctness ranges from ~2 to 11%. However, the model has been trained on far fewer steps than those trained by the authors (20,000 against 183,564 for the compositional split and 62,862 against 266,054 for the iid split), as access to time, expertise and computational power was limited. The time limit was 3 weeks, and the computational power available was a simulated environment on a Google Compute Engine with an NVIDIA Tesla K80 GPU. We did not have access to the Slurm Workload Manager used by the authors of the paper. The generated samples presented in the paper were in general different from the images resulting from our experiments, which was expected, as the authors mention the examples being cherry-picked. 
The paper has presented a detailed structure of the model, which closely aligns with the source code. All the results are clearly presented for the three different models.\n\nIn conclusion, the requirements for reproducing this paper were clear and concise throughout the process. As the models are expensive and complex, the main limitations were time and computational power. We are confident that the entirety of the paper could have been reproduced had there been more time and a more powerful hardware setup." ]
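As a companion to the product-of-experts discussion in this record, the following is a minimal sketch of closed-form product-of-Gaussians inference over partially specified attributes: each specified attribute contributes one Gaussian expert, the N(0, I) prior is always included, and unspecified attributes contribute nothing, which is how the product sharpens the prior only where evidence exists. The expert outputs below are placeholder values, not the paper's trained networks.

```python
import numpy as np

def product_of_gaussians(mus, log_vars):
    # mus, log_vars: lists of (d,) arrays, one per expert (prior included).
    # For a product of diagonal Gaussians, precisions add, and the mean is
    # the precision-weighted average of the expert means.
    precisions = [np.exp(-lv) for lv in log_vars]
    prec_sum = np.sum(precisions, axis=0)
    mu = np.sum([p * m for p, m in zip(precisions, mus)], axis=0) / prec_sum
    return mu, -np.log(prec_sum)          # posterior mean and log-variance

# Usage: start from the N(0, I) prior and multiply in only the experts whose
# attributes are specified in the query.
d = 4
prior_mu, prior_log_var = np.zeros(d), np.zeros(d)
expert_mu, expert_log_var = np.ones(d), np.full(d, -1.0)  # one observed attribute
mu, log_var = product_of_gaussians([prior_mu, expert_mu],
                                   [prior_log_var, expert_log_var])
```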
[ 7, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkCsm6lRb", "iclr_2018_HkCsm6lRb", "iclr_2018_HkCsm6lRb", "rky3x-5lG", "iclr_2018_HkCsm6lRb", "BJDxbMvez", "S1UuHvwgf", "iclr_2018_HkCsm6lRb", "iclr_2018_HkCsm6lRb" ]
iclr_2018_r1wEFyWCW
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learn across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the-art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
accepted-poster-papers
This paper incorporates attention in the PixelCNN model and shows how to use MAML to enable few-shot density estimation. The paper received mixed reviews (7, 6, 4). After rebuttal, the first reviewer updated the score to accept. The AC shares the first reviewer's concern about novelty. However, it is also not trivial to incorporate attention and MAML in PixelCNN, thus the AC decided to accept the paper.
train
[ "rkHhxN2lG", "HyKWS3KxM", "rJ3vXv5xf", "ryfVtml7f", "ByMcMEGJG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper focuses on the density estimation when the amount of data available for training is low. The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learn to adapt a basic model on few new samples. The paper presents two independent method.\n\nThe first method is effectively a PixelCNN combined with an attention module. Specifically, the support set is convolved to generate two sets of feature maps, the so called \"key\" and the \"value\" feature maps. The key feature map is used from the model to compute the attention in particular regions in the support images to generate the pixels for the new \"target\" image. The value feature maps are used to copmpute the local encoding, which is used to generate the respective pixels for the new target image, taking into account also the attention values. The second method is simpler, and very similar to fine-tuning the basis network on the few new samples provided during training. Despite some interesting elements, the paper has problems.\n\nFirst, the novelty is rather limited. The first method seems to be slightly more novel, although it is unclear whether the contribution by combining different models is significant. The second method is too similar to fine-tuning: although the authors claim that \\mathcal{L}_inner can be any function that minimizes the total loss \\mathcal{L}, in the end it is clear that the log-likelihood is used. How is this approach (much) different from standard fine-tuning, since the quantity P(x; \\theta') is anyways unknown and cannot be \"trained\" to be maximized.\n\nBesides the limited novelty, the submission leaves several parts unclear. First, why are the convolutional features of the support set in the first methods divided into \"key\" and \"value\" feature maps as in p_key=p[:, 0:P], p_value=p[:, P:2*P]? Is this division arbitrary, or is there a more basic reason? Also, is there any different between key and value? Why not use the same feature map for computing the attention and computing eq (7)?\n\nAlso, in the first model it is suggested that an additional feature can be having a 1-of-K channel for the supporting image label: the reason is that you might have multiple views of objects, and knowing which view contributes to the attention can help learning the density. However, this assumes that the views are ordered, namely that the recording stage has a very particular format. Isn't this a bit unrealistic, given the proposed setup anyways?\n\nRegarding the second method, it is not clear why leaving this room for flexibility (by allowing L_inner to be any function) to the model is a good idea. Isn't this effectively opening the doors to massive overfitting? Besides, isn't the statement that the function \\mathcal{L}_inner void? At the end of the day one can also claim the same for gradient descent: you don't need to have the true gradients of the true loss, as long as the objective function obtains gradually lower and lower values?\n\nLast, it is unclear what is the connection between the first and the second model. Are these two independent models that solve the same problem? Or are they connected?\n\nRegarding the evaluation of the models, the nature of the task makes the evaluation hard: for real data like images one cannot know the true distribution of particular support examples. Surrogate tasks are explored, first image flipping, then likelihood estimation of Omniglot characters, then image generation. 
Image flipping does not seem a very relevant task for density estimation, given that the task is deterministic. Perhaps what would make more sense would be to generate a new image given that the support set has images of a particular orientation, meaning that the model must learn how to learn densities from arbitrary rotations. Regarding Omniglot character generation, the surrogate task of computing the likelihood of known samples gives somewhat better results; however, this is to be expected when augmenting a model without attention with an attention module.\n\nAll in all, the paper has some interesting ideas. I encourage the authors to work more on their submission, think of a better evaluation, and resubmit.\n\n", "This paper considers the problem of one/few-shot density estimation, using metalearning techniques that have been applied to one/few-shot supervised learning. The application is an obvious target for research and some relevant citations are missing, e.g. \"Towards a Neural Statistician\" (Edwards et al., ICLR 2017). Nonetheless, I think the current paper seems interesting enough to merit publication.\n\nThe paper is well-produced, i.e. the overall writing, visuals, and narrative flow are good. It was easy to read the paper straight through while understanding both the technical details and more intuitive motivations.\n\nI have some concerns about the architectures and experiments presented in the paper. For architectures: the attention-based model seems powerful but difficult to scale to problems with more inputs for conditioning, and the meta PixelCNN model is a standard PixelCNN trained with the MAML approach by Finn et al. For experiments: the ImageNet flipping task is clearly tailored to the strengths of the attention-based model, and the presentation of the general Omniglot results could be improved. The image flipping experiment is neat, but the attention-based model's strong performance is unsurprising. I think the results in Tables 1/2 should be merged into a single table. It would make it clear that the MAML-based and attention-based models achieve similar performance on this task.\n\nOverall, I think the paper makes a nice contribution. The paper could be improved significantly, e.g., by showing how to scale the attention-based architecture to problems with more data or by designing an architecture specifically for use with MAML-based inference.", "This paper focuses on few-shot learning with autoregressive density estimation. Specifically, the paper improves PixelCNN with 1) neural attention and 2) meta learning techniques, and shows that the model achieves state-of-the-art few-shot density estimation on the Omniglot dataset and demonstrates few-shot image generation on the Stanford Online Products dataset. \n\nThe model is interesting; however, several details are not clear, which makes it harder to reproduce the model and the experimental results. For example, what is the reason for using the (key, value) pair to encode the support images, what does the \"key\" mean, and what is the difference between \"keys\" and \"values\"? In the experiments, the authors did not explain the meaning of \"nats/dim\" and how to compute it. Another question is about the reproducibility of the experimental results. We know that PixelCNN is already a quite complicated model, so it would be even harder to implement the proposed model. I wonder whether the authors will release the official code to the public to help the community.", "We sincerely thank all reviewers for their thoughtful feedback. 
We note that AR2 and AR3 both recommend accepting the paper. AR1 recommends reject, although we believe there are a few critical factual errors in that review that, once corrected, should be reflected in a higher score.\n\nBelow we respond to each review:\n\nAR1:\n\nWe think there are a few important factual errors in this review regarding our method, which should have a substantial effect on the review score. We address these below, and will attempt to improve our writing in the paper to make these points clearer.\n\nFine-tuning: Meta PixelCNN inference is in fact different from standard fine-tuning by gradient descent. With traditional fine-tuning, the procedure is ad-hoc (e.g. how many fine-tuning gradient steps, what learning rate, what batch size) and needs to be carefully designed to avoid under- or over-fitting. With Meta PixelCNN (and model-agnostic meta learning approaches in general), the critical difference is that the fine-tuning process itself is learned. The key reference that will further clarify this point is https://arxiv.org/abs/1703.03400. https://arxiv.org/abs/1710.11622 provides further theoretical justification.\n\nInner loss: In fact L_{inner} is learned; we do not use likelihood as the inner loss. So we indeed learn to maximize likelihood without computing likelihoods at test time, as claimed in the paper.\n\nBelow we respond to the rest of the review feedback.\n\nClarity regarding contribution of different model aspects: For the first method (Attention PixelCNN), we demonstrate a clear quantitative benefit of adding attention to the baseline PixelCNN. Although (Attention + Autoregressive Image Model) is a natural idea, we show that it does indeed work and provide a simple and effective implementation, which will be valuable to the research community.\n\nWhy use separate key and value? As you suggest, it is possible to use the same vector as both key and value. However, separating them may give the network greater flexibility. An ablation here where key and value are the same could be a good experiment, which we are happy to add to the paper.\n\nAssumption of ordered support set: The order can be randomly chosen (and in fact is in our experiments), so the use of a channel for the support image identifier should not limit the generality of the method.\n\nWhy flexibility of L_{inner} is useful: There are several reasons that we might want L_{inner} to be flexible. For example, a learned L_{inner} may be more efficient to compute than alternatives, as in this paper, or L_{inner} may require less supervision; for example, see https://arxiv.org/abs/1709.04905.\n\nConnection between first and second model: The only connection is that they are autoregressive models based on PixelCNN. They are independent models.\n\nAR2:\n\nPresentation: Thank you for the suggestions on how to improve the presentation; indeed, combining the tables seems like a good idea.\n\nScalability of attention: Indeed, this is one of the major challenges in scaling to high-resolution images. Potentially the memory would need to become hierarchical, or we would need to delve more into multiscale variations of the few-shot learning model, which is an interesting area of future research.\n\nAR3:\n\nMeaning of keys/values: The pairs of (query, key) vectors are used to compute the attention scores. 
Then, the “read” output of memory is the sum of all value vectors in memory, each weighted by its normalized attention score.\n\nLog-likelihood result units: “bits/dim” results are interpretable as the number of bits that a compression scheme based on the PixelCNN model would need to compress each RGB color value (see e.g. https://arxiv.org/pdf/1601.06759.pdf page 6 for discussion). Nats/dim is the same but multiplied by ln(2). Concretely, in TensorFlow we can compute this value using tf.nn.softmax_cross_entropy_with_logits or tf.nn.sigmoid_cross_entropy_with_logits and then dividing by the total number of dimensions in the image.\n\nPublic PixelCNN replication: A great resource for this is https://github.com/openai/pixel-cnn, which is state of the art and straightforward to modify. Furthermore, we are happy to help guide researchers in replicating our experiments, especially on Omniglot, which is now a common benchmark.", "Hi, I think it would be worth adding https://arxiv.org/abs/1612.02192 to your related work." ]
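To make the nats/dim and bits/dim discussion in the rebuttal above concrete, here is a small sketch of the unit conversion. The image shape and the negative log-likelihood value are made-up illustrative numbers, not results from the paper.

```python
import numpy as np

def nats_and_bits_per_dim(total_nll_nats, image_shape):
    """Convert a total negative log-likelihood (in nats) for one image
    into the nats/dim and bits/dim figures used in density-estimation papers."""
    num_dims = np.prod(image_shape)          # e.g. 28 * 28 * 1 for Omniglot
    nats_per_dim = total_nll_nats / num_dims
    bits_per_dim = nats_per_dim / np.log(2)  # nats/dim = bits/dim * ln(2)
    return nats_per_dim, bits_per_dim

# Example: a model assigning a total NLL of 650 nats to a 28x28 grayscale image.
print(nats_and_bits_per_dim(650.0, (28, 28, 1)))
```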
[ 6, 7, 6, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_r1wEFyWCW", "iclr_2018_r1wEFyWCW", "iclr_2018_r1wEFyWCW", "iclr_2018_r1wEFyWCW", "iclr_2018_r1wEFyWCW" ]
iclr_2018_rknt2Be0-
Compositional Obverter Communication Learning from Raw Visual Input
One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand-engineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment.
accepted-poster-papers
This paper investigates the emergence of language from raw pixels in a two-agent setting. The paper received divergent reviews (3, 6, 9). Two ACs discussed this paper, due to strong opinions from both the positive and negative reviewers. The ACs agree that the score "9" is too high: the notion of compositionality is used in many places in the paper (and even in the title), but never explicitly defined. Furthermore, the zero-shot evaluation is somewhat disappointing. If the grammar extracted by the authors in sec. 3.2 did indeed indicate the compositional nature of the emergent communication, the authors should have shown that they could in fact build a message themselves, give it to the listener with an image and ask it to answer. On the other hand, "3" is also too low of a score. In this renaissance of emergent communication protocols with multi-agent deep learning systems, one missing piece has been an effort toward seriously analyzing the actual properties of the emergent communication protocol. This is one of the few papers that have tackled this aspect more carefully. The ACs decided to accept the paper. However, the authors should take the reviews and comments seriously when revising the paper for the camera-ready.
val
[ "H1T0ZOBEG", "rkKuB71xM", "BJsMQVvxM", "B11XUD_gM", "B1TvQWpmz", "rkV7ieaXf", "BkS1LAiQf", "B101HAiQG", "SkTBrAjXz", "rJcRXRjQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Dear authors,\n\nThank you very much for the detailed response. I've spent a while thinking about this, and my score stays the same. 3 points:\n\n1. It is claimed that \"the messages in the gray boxes of Figure 3 do actually follow the patterns nicely\". I disagree. The central problem is that the paper provides no standard by which to adjudicate our disagreement. If the intention of this paper \"was not to develop a rigorous, quantifiable definition of compositionality and its evaluation strategy\", then it should not have made any claims that require such a definition; at present these claims are the main attraction.\n\n2. Re: Table 4. You're right about the 88 -> 97.5% (I was lumping the held-out objects together). My 94% was based on imagining that agents predict as normal for objects seen at training time, and only guess based on one attribute for held-out objects. For the identically-1 possibility, the current version of the paper doesn't rule out the possibility that this is the model's strategy on some subset of the held-out objects. Again, I think these are serious issues for the core claim in the paper.\n\n3. Re: citations. It appears the complaint that this paper doesn't cite any pre-deep-learning work has been addressed by acknowledging that earlier work exists, and continuing not to cite it. You can do better.", "This paper proposes to apply the obverter technique of Batali (1998) to a multi-agent communication game. The main novelty with respect to Batali's orginal work is that the agents in this paper have to communicate about images represented at the raw pixel level. The paper presents an extensive analysis of the patterns learnt by the agents, in particular in terms of how compositional they are.\n\nI greatly enjoyed reading the paper: while the obverter idea is not new, it's nice to see it applied in the context of modern deep learning architectures, and the analysis of the results is very interesting.\n\nThe writeup is somewhat confusing, and in particular the reader has to constantly refer to the supplementary materials to make sense of the models and experiments. At least the model architecture could be discussed in more detail in the main text.\n\nIt's a pity that there is no direct quantitative comparison between the obverter technique and a RL architecture.\n\nSome more detailed comments:\n\nIt would be good if a native speaker could edit the paper for language and style (I only annotated English problems in the abstract, since there were just too many of them).\n\nAbstract:\n\ndistinguishing natures -> aspects\n\ndescribe the complex environment -> describe complex environments\n\nwith the accuracy -> with accuracy\n\nIntroduction:\n\nhave shown great progress -> has...\n\nThe claim that language learning in humans is entirely based on communication needs hedging. 
See, e.g., cultures where children must acquire quite a bit of their language skills from passive listening to adult conversations (Schieffelin & Ochs 1983; Shore 1997).\n\nAlso, while it's certainly the case that humans do not learn from the neatly hand-crafted features favored in early language emergence studies, their input is most definitely not pixels (and, arguably, it is something more abstract than raw visual stimuli, see, e.g., the work on \"object biases\" in language acquisition).\n\nMethod:\n\nThe description of the experiment sometimes refers to the listener having to determine whether it is seeing the same image as the speaker, whereas the game consists in the listener telling whether it sees the same object as the speaker. This is important, since, crucially, the images can present the same object in different locations.\n\nSection 2.2 is very confusing, since the obverter technique has not been introduced, yet, so I kept wondering why you were only describing the listener architecture. Perhaps current section 2.3 should be moved before 2.2.\n\nAlso in 2.2: what is the \"meaning vector\" that Batali's RNN hidden vector has to be close to?\n\nIt's confusing to refer to the binary 0 and 1 vectors as outputs of the sigmoid: they are rather the binary categories you want the agents to approximate with the sigmoid function.\n\nThe obverter technique should be described in more detail in the main text. In particular, with max message length 20 and a vocabulary of 5 symbols, you'll have an astronomical number of possible inputs to evaluate at each step: is this really what you're doing?\n\nExperiments:\n\ndistinctness: better call it distinctiveness\n\nIs training accuracy in Fig 4 averaged across both agents? And what were the values the Jaccard similarity was computed over?\n\nIt's nice that the agents discovered some non-strictly agglutinative morphology, consisting in removing symbols from the prefix. Also, it looks like they are developing some notion of \"inflectional class\" (Aronoff 1994) (the color groups), which is also intriguing. However, I did not understand rules such as: \"remove two as and add one a\"... isn't that the same as: \"add one a\"?\n\nDiscussion and future work:\n\nI didn't follow the discussion about partial truth and using negations.\n\nThere is another ICLR submission that proposes a quantitative measure of compositionality you might find interesting: https://openreview.net/pdf?id=HJGv1Z-AW\n",
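On the reviewer's question about the "astronomical number of possible inputs": exhaustively scoring every message of length 20 over a 5-symbol vocabulary would require about 9.5e13 evaluations, whereas greedy per-timestep selection (which the authors later confirm they use) scores only vocabulary-size candidates per step. A back-of-the-envelope sketch, using only the numbers quoted in the review:

```python
vocab_size, max_len = 5, 20

exhaustive = vocab_size ** max_len   # score every complete message
greedy = vocab_size * max_len        # score |V| continuations per timestep

print(f"exhaustive search: {exhaustive:.2e} candidate messages")  # ~9.54e+13
print(f"greedy decoding:   {greedy} forward passes at most")      # 100
```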
There isn't an engineering contribution---despite the motivation at the beginning, there's no attempt to demonstrate that this technique could be used either to help comprehension of natural language or to improve over the numerous existing techniques for automatically learning communication protocols. But this also isn't science: \"generalization\" is not the same thing as compositionality, and there's no testable hypothesis articulated about what it would mean for a language to be compositional---just the post-hoc analysis offered in Tables 2 & 3. I also have some concerns about the experiment in Section 3.3 and the overall positioning of the paper.\n\nI want to emphasize that these results are cool, and something interesting might be going on here! But the paper is not ready to be published. I apologize in advance for the length of this review; I hope it provides some useful feedback about how future versions of this work might be made more rigorous.\n\nWHAT IS COMPOSITIONALITY?\n\nThe title, introduction, and first three sections of this paper emphasize heavily the extent to which this work focuses on discovering \"compositional\" language. However, the paper doesn't even attempt to define what is meant by compositionality until the penultimate page, where it asserts that the ability to \"accurately describe an object [...] not seen before\" is \"one of the marks of compositional language\". Various qualitative claims are made that model outputs \"seem to be\" compositional or \"have the strong flavor of\" compositionality. Finally, the conclusion notes that \"the exact definition of compositional language is somewhat debatable, and, to the best of our knowledge, there was no reliable way to check for the compositionality of an arbitrary language.\"\n\nThis is very bad.\n\nIt is true that there is not a universally agreed-upon definition of compositionality. In my experience, however, most people who study these issues do not (contra the citation-less 5th sentence of section 4) think it is simply an unstructured capacity for generalization. And the implication that nobody else has ever attempted to provide a falsifiable criterion, or that this paper is exempt from itself articulating such a criterion, is totally unacceptable. (You cannot put \"giving the reader the tools to evaluate your current claims\" in future work!)\n\nIf this paper wishes to make any claims about compositionality, it must _at a minimum_:\n\n1. Describe a test for compositionality.\n\n2. Describe in detail the relationship between the proposed test and other definitions of compositionality that exist in the literature.\n\n3. If this compositionality is claimed to be \"language-like\", extend and evaluate the definition of compositionality to more complex concepts than conjunctions of two predicates.\n\nSome thoughts to get you started: when talking about string-valued things, compositionality almost certainly needs to say something about _syntax_. Any definition you choose will be maximally convincing if it can predict _without running the model_ what strings will appear in the gray boxes in Figure 3. Similarly if it can consistently generate analyses across multiple restarts of the training run. The fact that analysis relies on seemingly arbitrary decisions to ignore certain tokens is a warning sign. 
The phenomenon where every color has 2--3 different names depending on the shape it's paired with would generally be called \"non-compositional\" if it appeared in a natural language.\n\nThis SEP article has a nice overview and bibliography: https://plato.stanford.edu/entries/compositionality/. But seriously, please, talk to a linguist.\n\nMODEL\n\nThe fact that the interpretation model greedily chooses symbols until it reaches a certain confidence threshold would seem to strongly bias the model towards learning a specific communication strategy. At the same time, it's not actually possible to guarantee that there is a greedily-discoverable sequence that ever reaches the threshold! This fact doesn't seem to be addressed.\n\nThis approach also completely rules out normal natural language phenomena (consider \"I know Mary\" vs \"I know Mary will be happy to see you\"). It is at least worth discussing these limitations, and would be even more helpful to show results for other architectures (e.g. fixed-length codes or loss functions with an insertion penalty) as well.\n\nThere's some language in the appendix (\"We noticed that a larger vocabulary and a longer message length helped the agents achieve a high communication accuracy more easily. But the resulting messages were challenging to analyze for compositional patterns.\") that suggests that even the vague structure observed is hard to elicit, and that the high-level claim made in this paper is less robust than the body suggests. It's _really_ not OK to bury this kind of information in the supplementary material, since it bears directly on your core claim that compositional structure does arise in practice. If the emergence of compositionality is sensitive to vocab size & message length, experiments demonstrating this sensitivity belong front-and-center in the paper.\n\nEVALUATION\n\nThe obvious null hypothesis here is that unseen concepts are associated with an arbitrary (i.e. non-compositional) description, and that to succeed here it's enough to recognize this description as _different_ without understanding anything about its structure. So while this evaluation is obviously necessary, I don't think it's powerful enough to answer the question that you've posed. It would be helpful to provide some baselines for reference: if I understand correctly, guessing \"0\" identically gives 88% accuracy for the first two columns of Table 4, and guessing based on only one attribute gives 94%, which makes some of the numbers a little less impressive. \n\nPerhaps more problematically, these experiments don't rule out the possibility that the model always guesses \"1\" for unseen objects. It would be most informative to hold out multiple attributes for each held-out color (& vice-versa), and evaluate only with speakers / listeners shown different objects from the held-out set.\n\nPOSITIONING AND MOTIVATION\n\nThe first sentence of this paper asserts that artificial general intelligence requires the ability to communicate with humans using natural language. This paper has nothing to do with AGI, humans, or human language; to be blunt, this kind of positioning is at best inappropriate and at worst irresponsible. It must be removed. For the assertion that \"natural language processing has shown great progress\", the paper provides a list of citations employing neural networks exclusively and beginning in 2014 (!). 
I would gently remind the authors that NLP research did not begin with deep learning, and that there might be slightly earlier evidence for their claim.\n\nThe attempt to cite relevant work in philosophy and psychology is commendable! However, many of these citations are problematic, and some psycho-/historico-linguistic claims are still missing citations. A few examples: Ludwig Wittgenstein died in 1951, so it is somewhat surprising to see him cited for a 2010 publication (PI appeared in 1953); similarly Zipf (2016). The application of this Zipf citation is dubious; the sentence preceded by footnote 7 is false and has nothing to do with the processes underlying homophony in natural languages. I would encourage you to consult with colleagues in the relevant fields.", "Pros:\n1. Extends the input from disentangled features to raw image pixels\n2. Employs the “obverter” technique, showing that it can be an alternative approach compared to RL \n3. The authors provided various experiments to showcase their approach\n\nCons:\n1. Compared to previous work (Mordatch & Abbeel, 2018), the task is relatively simple, only requiring the agent to perform binary prediction.\n2. Sharing the RNN for speaking and consuming, by picking the token that maximizes the probability, might decrease diversity.\n3. This paper lacks original technical contribution of its own. \n\nThe paper is well-written, clearly illustrating the goal of this work and the corresponding approach. The “obverter” technique is quite interesting since it incorporates a concept from theory of mind, which is similar to human alignment or AGI approaches. The authors provided complete experiments to prove their concept; however, the task is relatively easy compared to Mordatch & Abbeel (2018). Readers would be curious to see how this approach scales to a more complex problem.\n\nSince the authors share the RNN for speaking and consuming, and pick the token that maximizes the probability, the model loses its diversity, since it discards the sampling process. The RL approach in previous works can efficiently explore the sentence space by sampling from the policy distribution. However, I cannot see how the authors tackle this issue. One possible reason is that the task is relatively easy; therefore, the agent does not need explicit exploration to tackle this task. Otherwise, some simple technique like “beam search”, or performing an MC rollout at each step, could further improve the performance.\n\nIn conclusion, this paper does not have a major flaw. The “obverter” approach is interesting; however, it was originally proposed in Batali (1998). Generating language based only on raw image pixels is not difficult. The only thing you need to do is replace FC layers with CNN layers. Though this paper employs an interesting method, it lacks original technical contribution of its own. \n\n\n[1] Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. AAAI 2018.\n\n[2] John Batali. Computational simulations of the emergence of grammar. Approaches to the evolution of language: Social and cognitive bases, 405:426, 1998.", "We thank you for your response.\nBased on AnonReviewer1's comment, we also decided that providing a more explicit definition of compositionality would improve the paper. As we wrote in our second item to AnonReviewer1's review, we revised the paper to incorporate this point. 
Specifically, as follows:\n- In the second paragraph of the Introduction in the revised manuscript, we provide an explicit definition of compositionality.\n- In the last paragraph of page 7 of the revised manuscript, we link the definition of compositionality to the experiment results.", "Thanks for your clarifications. I generally disagree with AnonReviewer1's negative opinion; however, it would help if you defined more clearly what you mean by compositionality.", "We thank you for your review and constructive comments. We address your concerns as follows.\n \n1. The claim that language learning in humans is entirely based on communication needs hedging:\nWe appreciate the suggested reading. We revised our paper to hedge these claims more appropriately.\n \n2. Human language acquisition is based on something between hand-engineered features and pixels:\nYour point is true, and it led us to re-check the contribution of our paper. The agents in our work learn to develop both linguistic ability and visual cognition simultaneously. The CNN and the RNN of the agents are initialized with random parameters. Through training, the CNN is updated to disentangle shapes and colors, while the RNN is updated to describe the shapes and colors. Therefore, similar to your point, after the CNN is updated to a certain level, the RNN no longer deals with completely raw signals. However, in the early training rounds, the RNN does deal with seemingly random values (since the CNN has not learned much yet), and maybe that is why it takes several thousand training rounds for the agents to show some performance gain. We appreciate this insightful comment, and we revised the paper to incorporate it.\n \n3. The description of the game confusingly uses the words “image” and “object”:\nWe revised the paper to be more consistent in using those two words.\n \n4. Maybe section 2.3 should come before section 2.2:\nIn the revised paper, we mention in section 2.2 that the obverter strategy will be described in detail in section 2.3.\n \n5. What is the meaning vector in Batali’s work?\nWe revised the paper so that there is a brief description of the meaning vector, and we also direct the readers to the supplementary material for detailed information.\n \n6. The outputs of the agents are not vectors but sigmoid values:\nWe revised the paper to remove the confusion.\n \n7. The obverter technique should be described in more detail:\nWe revised the paper to provide a more detailed description of the obverter strategy. Also, we explain that the characters are chosen greedily at each timestep. We also discuss the deterministic nature of the obverter strategy in the revised paper (please see our second explanation for Reviewer 3 for more detail).\n \n8. Is training accuracy in Fig 4 averaged across both agents?\nIn a single training round, only one agent (i.e. the learner) makes the prediction. We aggregated those predictions to calculate the accuracy. Therefore, in round 0, agent 0’s prediction accuracy is taken. In round 1, agent 1’s prediction accuracy is taken. And we repeat this process. This is described in line 146 by saying “Training accuracy was calculated by rounding the learner’s sigmoid output by 0.5”. \n \n9. What were the values the Jaccard similarity was computed over?\nIf agent0 generates “aaa”, “aba”, “abb” to describe the red box, and agent1 generates “aaa”, “aba”, “abc” to describe the red box, then we calculate the Jaccard similarity as the number of shared messages divided by the number of unique messages across both sets (i.e. |intersection|/|union|), which is 2/4 = 0.5. 
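A minimal sketch of this Jaccard computation on the example messages above; the function name is an illustrative choice, not the authors' code.

```python
def jaccard(messages_a, messages_b):
    """Jaccard similarity: |intersection| / |union| of two message sets."""
    a, b = set(messages_a), set(messages_b)
    return len(a & b) / len(a | b)

# The worked example above: two shared messages out of four unique ones.
print(jaccard(["aaa", "aba", "abb"], ["aaa", "aba", "abc"]))  # 0.5
```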
We do this for all object types (total 40) and average the Jaccard similarity values to obtain the final value.\n \n10. However, I did not understand rules such as: "remove two as and add one a"... isn't that the same as: "add one a"?\nThe suffix used to describe the color gray is “a_hat, a_hat, a”. The two ‘a_hat’s indicate removing two ‘a’s from the prefix (which is responsible for describing shapes). For example, the gray box uses “aaaa” as the prefix and “a_hat a_hat a” as the suffix. Therefore, putting them together gives us “aaa”, which is the message that describes the gray box, as shown at the bottom of Table 1. This is different from adding a single ‘a’ to the prefix “aaaa”, which gives us “aaaaa”.\nIn the revised paper, we made it clear that a_hat is for deleting symbols from the prefix.\n \n11. I didn't follow the discussion about partial truth and using negations:\nLet’s assume I am aware of red circles, blue squares and green triangles. If I came upon a blue circle for the first time and had to describe it to the listener, I could say “blue circle”. But I could also say “blue not_square not_triangle”. If the listener had similar knowledge to mine, then we would have a successful communication. However, it is debatable whether saying “blue not_square not_triangle” is as compositional as “blue circle”.\nWe provide this explanation in the supplementary material in the revised paper.\n \n12. Another ICLR submission proposes a quantitative measure of compositionality:\nThank you for the pointer; we will look into this.", "\n- Greedily choosing the characters has limits:\nPlease see our second explanation for Reviewer 3.\n \n- Hyperparameters (vocabulary size, maximum length of the message) should not be in the supplementary material:\nThank you for pointing this out. We agree that they are more appropriately discussed in section 2.4, where we discuss the environmental pressure. This is reflected in the revised paper.\n \n- Table 4 is not sufficient to confirm compositionality:\nWe described this in line 244 (section 4). We agree that currently many works in this field rely on evaluations that check the necessary conditions of compositional language, and we lack ones that check the sufficient conditions of compositional language. We are not sure, though, whether such measures exist. It is another line of future work for all of us.\n \n- Table 4. Always guessing 0 gives 88% accuracy:\nTo be precise, always guessing 0 gives 97.5% accuracy, because out of 40 object pairs, you would only be wrong once (when the same object type is paired). However, as the numbers in Table 4 show, the agents do not always guess 0.\n \n- Table 4. Guessing based on only one attribute gives 94% accuracy:\nIf the agents only focus on colors (which is the dominant attribute, since there are 8 colors and 5 shapes), then out of 40 object pairs, they would be wrong 4 times (when the object pair consists of the same color but different shapes). So that gives us 90% accuracy. However, Table 4 shows that the agents are doing better than 90% most of the time, telling us that they are not just focusing on one attribute.\n \n- Maybe the agents always guess 1 for unseen objects:\nIf they did, then the last column of Table 4 should be all 1.0.\n \n- Saying artificial intelligence requires the ability to communicate with humans is problematic:\nIt’s hard to understand why that statement is problematic. We already have simple AI agents around us that try to communicate with us, such as Google Home, Siri, and Alexa. 
It is hard to imagine that, in the future, we will not verbally communicate with AI.\n \n- NLP has a longer history than deep learning:\nYes, this is true. We agree that our citations were biased due to the venue we are submitting this work to, ICLR, which was born recently, after the renaissance of neural networks. We incorporated this point in the revised paper.\n \n- The sentence preceded by footnote 7 is false and has nothing to do with the processes underlying homophony in natural languages:\nWe can generally observe that homonyms are seldom used in the same context (e.g. mean1 (definition: being rude) and mean2 (definition: average)), as that would create confusion, leading to inefficient communication. The same could be observed in the emergent language, as described in the second paragraph of section G.\nHowever, we do agree that our logic for connecting the principle of least effort with homonyms might be a bit of a stretch. We removed the sentence in the revised paper.", "We thank you for your detailed review and helpful comments. We address your concerns as follows.\n \n- Lack of engineering contribution:\nPlease see our third explanation for Reviewer 3.\n \n- The paper doesn't even attempt to define what is meant by compositionality until the penultimate page:\nAlthough we provided the definition of compositionality in lines 33-34, we do agree that the definition could be more explicit. In the revised paper, we explicitly provide the definition of compositionality (or at least what people generally agree to be the definition), and connect that definition to the experiment section to claim that our resulting language does meet the requirements of compositional language qualitatively.\nAnd it was never our intention to equate “compositionality” with “being able to pass a zero-shot test”, since we did define compositionality as “we express our intentions using words as building blocks to compose a complex meaning”. In the revised paper, we changed any sentence that might lead readers to think that we equate “compositionality” with “being able to pass a zero-shot test”, because we certainly do not.\n \n- Citation-less fifth sentence of section 4:\nThank you for pointing this out. We added proper citations for this.\n \n- If this paper wishes to make any claims about compositionality, it must at a minimum:\nWe would like to point out that ICLR is not a linguistics conference (nor is it a language-oriented one such as ACL) and our intention was not to develop a rigorous, quantifiable definition of compositionality and its evaluation strategy, for which even linguists have not been successful yet. The recent papers demonstrating the emergence of grounded/compositional communication using neural agents use a relatively relaxed definition of compositionality. But that does not make their work less interesting or meaningful. In the context of learning representations (which is the main theme of ICLR), we care about certain aspects more (e.g. pushing the limits of neural agents) and other aspects less (e.g. coming up with a definition of compositionality that is both linguistically and computationally meaningful). 
In this spirit, we believe our work has shown sufficient contributions in several aspects as described in the introduction (section 1).\nLastly, the researchers in this field (neural agents and communication) have just begun to discuss the need for, and strategy toward, a quantitative measure of how much the artificial language resembles human language (in aspects such as compositionality), and we believe our discussion in the last section raises relevant issues in accordance with this movement. Over time, such discussion will result in a practical, quantifiable measure of compositionality that most researchers can agree upon.\n \n- The claim for compositionality would be convincing if we could predict the strings that will appear in the gray boxes in Figure 3:\nWe would like to point out that the messages in the gray boxes of Figure 3 do actually follow the patterns nicely, except for the yellow ellipsoid. After seeing all blue objects and all box objects except the blue box, we can infer that the blue box will very likely be described by “eeeeee ee”. Granted, the compositional patterns in Table 3 are weaker than those in Table 2, but it is undeniable that there are patterns.", "We thank you for your thoughtful review. We address each of your concerns as follows.\n \n1. The task is relatively simple:\nTo the best of our knowledge, this is the first attempt to observe the emergence of compositional communication based on pure pixels. Our aim was to test the possibility of the emergence of compositionality based on a straightforward task so that we could perform extensive analysis on the outcome confidently, based on full control of the experiment. In particular, since our experiment maps a sequence of symbols to each factor (color and shape), instead of mapping one symbol to each factor (such as Mordatch & Abbeel 2017 [1] or Kottur 2017 [2]), rashly taking on complex tasks would have made the qualitative analysis of the emergent language exponentially difficult. As mentioned in our paper, a systematic method to evaluate the compositionality of a given language is an on-going effort in the emergent communication community, and as the evaluation strategy advances we will be more prepared to take on interesting, complex tasks to encourage the emergence of human-like language. We clarified our motivation for choosing a relatively simple task in the revised paper.\n \n2. Using a single RNN for both speaking and listening hinders the agents’ chance to explore:\nWhile it is true that our deterministic approach in the paper might not match the typical exploratory behavior of RL, there are many ways to encourage the agents to explore the message space while using the obverter strategy. In fact, since only the listener’s parameters are updated and the speaker’s parameters are fixed during training, we can be quite creative with the message generation process and still be able to use gradient descent. One of the straightforward ways to increase the message diversity is to sample the character from a multinomial distribution at each timestep, instead of deterministically selecting the one that maximizes the output (y_hat). We can go a little further and adjust the temperature of the softmax function, so that the agents explore the message space a little more actively during the early training rounds and gradually converge to the deterministic behavior. As you suggested, this might help the agents discover a more optimal language when facing complex tasks. We reflected this comment in our revised paper.\n \n3. 
This paper lacks original technical contribution:\nThis may seem so based on our relatively light-weight model structure, and the fact that we employed the obverter strategy. The aim of the paper was to guide the agents to develop compositional communication, and we have shown strong evidence that it could be done with our approach, which combines recent, advanced neural network techniques with a philosophy well motivated by psychology and linguistics. We believe proposing a method that solves an interesting problem (although, in our case, how successfully we solved the emergence of compositionality is hard to quantify) is itself a technical contribution, although the amount of contribution is subject to opinion. Also, the obverter strategy is not a specific algorithm like ADAM, but more of a general philosophy, such as GAN, which has been used in various works besides Batali 1998 [3]. And we believe that introducing a successful philosophy from a different, yet relevant field to the machine learning community is a valuable contribution. We added a small description in the revised paper regarding how the obverter strategy originated in Hurford 1989 [4] and has been used in many works including Kirby and Hurford 2002 [5].\n \n[1] Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. AAAI 2018.\n[2] Satwik Kottur, Jose MF Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge ’naturally’ in multi-agent dialog. EMNLP 2017.\n[3] John Batali. Computational simulations of the emergence of grammar. Approaches to the evolution of language: Social and cognitive bases, 405:426, 1998.\n[4] James R Hurford. Biological evolution of the Saussurean sign as a component of the language acquisition device. Lingua, 77(2):187–222, 1989.\n[5] Simon Kirby and James R Hurford. The emergence of linguistic structure: An overview of the iterated learning model. In Simulating the evolution of language, pp. 121–147. Springer, 2002." ]
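The temperature-controlled sampling the authors propose in point 2 of the response above can be sketched as follows. Everything here (the per-character scores, shapes, and names) is an illustrative assumption rather than code from the paper.

```python
import numpy as np

def sample_character(char_scores, temperature=1.0, rng=np.random):
    """Sample one character index from per-character scores.

    char_scores: (V,) score for appending each vocabulary symbol, e.g. the
    speaker's own predicted probability of a correct guess (obverter style).
    temperature -> 0 recovers the deterministic argmax used in the paper;
    larger temperatures explore the message space more actively.
    """
    logits = np.asarray(char_scores, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over the vocabulary
    return rng.choice(len(probs), p=probs)

# Early training rounds might use temperature ~1.0, annealing toward ~0.1.
print(sample_character([0.1, 0.7, 0.2, 0.4, 0.05], temperature=0.5))
```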
[ -1, 9, 3, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "B101HAiQG", "iclr_2018_rknt2Be0-", "iclr_2018_rknt2Be0-", "iclr_2018_rknt2Be0-", "rkV7ieaXf", "BkS1LAiQf", "rkKuB71xM", "BJsMQVvxM", "BJsMQVvxM", "B11XUD_gM" ]
iclr_2018_rkN2Il-RZ
SCAN: Learning Hierarchical Compositional Visual Concepts
The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state-of-the-art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.
accepted-poster-papers
This paper initially received borderline reviews. The main concern raised by all reviewers was a limited experimental evaluation (synthetic only). In rebuttal, the authors provided new results on the CelebA dataset, which turned the first reviewer positive. The AC agrees there is merit to this approach, and generally appreciates the idea of compositional concept learning.
train
[ "Bkyw7hwrf", "ByLBsIIBM", "BkeoFCjgG", "rkzoyZW-M", "H1vrGEM-G", "S1hTSkuXz", "SkPdrTcMG", "Hk-qNp5fz", "r1m0QT9zz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Dear Reviewer,\n\nThank you for taking the time to comment on the updated version of our paper. You suggest that you do not find our additional experiments convincing enough because we do not train recombination operators on the celebA dataset. However, in our understanding your original review did not ask for these experiments. It suggested that we do a fair comparison with the JMVAE and TrELBO baselines on a real dataset, followed by a remark that the baselines were not explicitly designed for recombination operators. In our understanding it implied that the only fair comparison was to compare the abstract concept learning step across the original models. Furthermore, it is unfortunate that your request for the additional experiments with the recombination operators has arrived at this stage. While we cannot update our manuscript before the decision deadline, we would be happy to run the additional experiments for the camera ready version of the paper. \n\nYour original review had reservations about the technical novelty of our approach, which you stated in itself was not a problem as long as we could demonstrate that our approach outperforms the current state of the art methods on realistic datasets. We believe that our new experiments on CelebA demonstrate exactly that. \n\nIn your current comment you suggest that our additional CelebA experiments only demonstrate that beta-VAE can learn smooth interpolations and extrapolations of certain attributes. However, we believe that our additional experiments demonstrate that SCAN can learn new meaningful abstractions that are grounded in the basic visual factors discovered by beta-VAE, but which beta-VAE alone could not have, and in fact did not discover. \n\nIn addition, please note that unlike the CelebA experiments in the TrELBO paper, we did not remove mislabeled attributes from the training set, which consequently made the training task significantly harder for all models. The fact that SCAN was able to work well in such a setting is a further demonstration of the robustness of our approach. \n\nIn summary, we believe that we have demonstrated the usefulness and the power of our approach over the recent state of the art baseline methods on an important problem of learning hierarchical compositional visual concepts. Our approach may seem -- at first glance -- like a “straightforward” modification to existing VAE variants, but it is the only one that is currently able to discover meaningful compositional visual abstractions on realistic datasets. \n", "I appreciate the authors' effort along the direction. The additional experiments strengthened the paper, but I feel it still needs more work. \n\nThe technical innovation of the paper is to learn 'recombination operators'. As I said in the original review, methodologically the innovation is quite straightforward, but it can make a good paper if well evaluated. The additional experiments on celebA, however, are not evaluating the 'recombination operators'. It is basically suggesting beta-VAE can learn smooth interpolations (or extrapolations) given a certain attribute. This is nice, but connects better to the original beta-VAE paper than to this paper.\n\nIn general, this paper has great potentials but will benefit from another cycle. Would that be possible to really learn recombination operators on real images? 
If SCAN can learn concepts like pale-skin (and) big lips, or attractive (ignore) arched eyebrows, the paper will be much stronger.", "I appreciate the authors' effort along this direction. The additional experiments strengthened the paper, but I feel it still needs more work. \n\nThe technical innovation of the paper is to learn 'recombination operators'. As I said in the original review, methodologically the innovation is quite straightforward, but it can make a good paper if well evaluated. The additional experiments on CelebA, however, are not evaluating the 'recombination operators'. They are basically suggesting that beta-VAE can learn smooth interpolations (or extrapolations) given a certain attribute. This is nice, but connects better to the original beta-VAE paper than to this paper.\n\nIn general, this paper has great potential but will benefit from another cycle. Would it be possible to really learn recombination operators on real images? 
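For reference, the purely symbolic behaviour of the recombination operators under discussion (AND, IN COMMON, IGNORE) can be written as set operations on k-hot concept vectors. A toy sketch with an assumed attribute vocabulary follows; note that SCAN itself learns these operators over latent Gaussian parameters, so this only illustrates the ground-truth symbolic semantics, not the model.

```python
import numpy as np

vocab = ["pale_skin", "big_lips", "attractive", "arched_eyebrows"]  # assumed vocabulary

def k_hot(*attrs):
    """Encode a set of attributes as a k-hot vector over the vocabulary."""
    v = np.zeros(len(vocab), dtype=int)
    for a in attrs:
        v[vocab.index(a)] = 1
    return v

def AND(a, b):        # conjunction of concepts = union of specified attributes
    return np.maximum(a, b)

def IN_COMMON(a, b):  # intersection of specified attributes
    return np.minimum(a, b)

def IGNORE(a, b):     # drop b's attributes from a
    return np.maximum(a - b, 0)

print(AND(k_hot("pale_skin"), k_hot("big_lips")))                                 # [1 1 0 0]
print(IGNORE(k_hot("attractive", "arched_eyebrows"), k_hot("arched_eyebrows")))   # [0 0 1 0]
```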
It would also be helpful to see a more extensive evaluation of the model's ability to learn logical recombination operators, since this is their main contribution.\n\n- The approach relies on a pretrained visual VAE model, but it is unclear how robust this is. Should we expect visual VAEs to learn features that map closely to the visual concepts that appear in the text? What happens if the visual model doesn't learn such a representation? This again could be addressed with experiments on more challenging datasets.\n\n- The paper should explain the differences and trade-offs between other multimodal VAE models (such as their baselines, JMVAE and TrELBO) more clearly. It should also clarify differences between the SCAN_U baseline and SCAN in the main text.\n\n- The paper suggests that using the forward KL-divergence is important, but this does not seem to be tested with experiments.\n\n- The three operators (AND, IN COMMON, and IGNORE) can easily be implemented as simple transformations of a (binary) bag-of-words representation. What about more complex operations, such as OR, which seemingly cannot be encoded this way?\n\nOverall, I am borderline on this paper, due to the limited experimental evaluation, but lean slightly towards acceptance.\n", "Summary\n---\nThis paper proposes a new model called SCAN (Symbol-Concept Association Network) for hierarchical concept learning. It trains one VAE on images, then another one on symbols, and aligns their latent spaces. This allows for symbol2image and image2symbol inference. But it also allows for generalization to new concepts composed from existing concepts using logical operators. Experiments show that SCAN generates images which correspond to provided concept labels and span the space of concepts which match these labels.\n\nThe model starts with a beta-VAE trained on images (x) from the relevant domain (in this case, simple scenes generated from DeepMind Lab which vary across a few known dimensions). This is complemented by the SCAN model, which is a beta-VAE trained to reconstruct symbols (y; k-hot encoded concepts like {red, suitcase}) with a slightly modified objective. SCAN optimizes the ELBO plus a KL term which pushes the latent distribution of the y VAE toward the latent distribution of the x (image) VAE. This aligns the latent representations so now a symbol can be encoded into a latent distribution z and decoded as an image.\n\nOne nice property of the learned latent representation is that more specific concepts have more specific latent representations. Consider latent distributions z1 and z2 for a more general symbol {red} and a more specific symbol {red, suitcase}. Fewer dimensions of z2 have high variance than dimensions of z1. For example, the latent space could encode red and suitcase in two dimensions (as binary attributes). z1 would have high variance on all dimensions but the one which encodes red, and z2 would have high variance on all dimensions but red and suitcase. In the reported experiments some of the dimensions do seem to be interpretable attributes (figure 5 right).\n\nSCAN also pays particular attention to hierarchical concepts. Another very simple model (a 1d convolution layer) is learned to mimic logical operators. Normally a SCAN encoder takes {red} as input and the decoder reconstructs {red}. Now another model is trained that takes "{red} AND {suitcase}" as input and reconstructs {red, suitcase}. 
The two input concepts {red} and {suitcase} are each encoded by a pre-trained SCAN encoder and then those two distributions are combined into one by a simple 1d convolution module trained to implement the AND operator (or IGNORE/IN COMMON). This allows images of concepts like {small, red, suitcase} to be generated even if small red suitcases are not in the training data.\n\nExperiments provide some basic verification and analysis of the method:\n1) Qualitatively, concept samples are correct and diverse, generating images with all configurations of attributes not specified by the input concept.\n2) As SCAN sees more diverse examples of a concept (e.g. suitcases of all colors instead of just red ones) it starts to generate more diverse image samples of that concept.\n3) SCAN samples/representations are more accurate (generate images of the right concept) and more diverse (far from a uniform prior in a KL sense) than JMVAE and TELBO baselines.\n4) SCAN is also compared to SCAN_U, which uses an image beta-VAE that learned an entangled (Unstructured) representation. SCAN_U performed worse than SCAN and the baselines.\n5) Concepts expressed as logical combinations of other concepts generalize well for both the SCAN representation and the baseline representations.\n\n\nStrengths\n---\n\nThe idea of concept learning considered here is novel and satisfying. It imposes logical, hierarchical structure on latent representations in a general way. This suggests opportunities for inserting prior information and adds interpretability to the latent space.\n\n\nWeaknesses\n---\n\nI think this paper is missing some important evaluation.\n\nRole/Nature of Disentangled Features not Clear (major):\n\n* Disentangled features seem to be very important for SCAN to work well (SCAN vs SCAN_U). It seems that the only difference between the unstructured (entangled) and the structured (disentangled) visual VAE is the color space of the input (RGB vs HSV). If so, this should be stated more clearly in the main paper. What role did beta-VAE (tuning beta) as opposed to plain VAE play in learning disentangled features?\n\n* What color space was used for the JMVAE and TELBO baselines? Training these with HSV seems especially important for establishing a good comparison, but it would be good to report results for HSV and RGB for all models.\n\n* How specific is the HSV trick to this domain? Would it matter for natural images?\n\n* How would a latent representation learned via supervision perform? (Maybe explicitly align dimensions of z to red/suitcase/small with supervision through some mechanism. Cf. \"Discovering Hidden Factors of Variation in Deep Networks\" by Cheung et al.)\n\nEvaluation of sample complexity (major):\n\n* One of the main benefits of SCAN is that it works with less training data. There should be a more systematic evaluation of this claim. In particular, I would like to see a Number of Examples vs Performance (Accuracy/Diversity) plot for both SCAN and the baselines.\n\nMinor questions/comments/concerns:\n\n* What do the logical operators learn that the hand-specified versions do not?\n\n* Does training SCAN with the structure provided by the logical operators lead to improved performance?\n\n* There seems to be a mistake in figure 5 unless I interpreted it incorrectly. The right side doesn't match the left side. During the middle stage of training object hues vary on the left, but floor color becomes less specific on the right. 
Shouldn't object color become less specific?\n\n\nPreliminary Evaluation\n---\n\nThis clear and well-written paper describes an interesting and novel way of learning a model of hierarchical concepts. It's missing some evaluation that would help establish the sample complexity benefit more precisely (a claimed contribution) and add important details about unsupervised disentangled representations. I would be happy to increase my rating if these are addressed.", "Thanks for the response! It nicely addressed my concerns, so I increased my rating.", "Dear Reviewer,\n\nThank you for your feedback. Please find the responses to your points below:\n\nRole/Nature of Disentangled Features not Clear (major):\n\n* Disentangled features seem to be very important for SCAN to work well (SCAN vs SCAN_U). It seems that the only difference between the unstructured (entangled) and the structured (disentangled) visual VAE is the color space of the input (RGB vs HSV). If so, this should be stated more clearly in the main paper. What role did beta-VAE (tuning beta) as opposed to plain VAE play in learning disentangled features?\n\nThe statement about the “only difference” is not quite right. While an HSV colour space helps beta-VAE disentangle the particular DeepMind Lab dataset we used, the conversion from RGB to HSV is not sufficient for disentangling. As shown in our additional SCAN_U experiments in Table 1, it is still important to use a carefully tuned beta-VAE rather than a plain VAE to get good enough disentanglement for SCAN to work. Furthermore, we have added additional experiments with CelebA where we learn disentangled visual representations with a beta-VAE in RGB space. A plain VAE is unable to learn such disentangled representations, as was shown in Higgins et al, 2017.\n\n\n\n\n\n* What color space was used for the JMVAE and TELBO baselines? Training these with HSV seems especially important for establishing a good comparison, but it would be good to report results for HSV and RGB for all models.\n\nAll baselines are trained in HSV space when using the DeepMind Lab dataset in our paper. We have now added additional experiments on CelebA, where all models are now trained using the RGB colour space.\n\n\n\n\n\n* How specific is the HSV trick to this domain? Would it matter for natural images?\n\nThe HSV trick was useful for the DeepMind Lab dataset, but it is not necessary for all datasets as demonstrated in the new CelebA experiments.\n\n\n\n\n\n* How would a latent representation learned via supervision perform? (Maybe explicitly align dimensions of z to red/suitcase/small with supervision through some mechanism. Cf. \"Discovering Hidden Factors of Variation in Deep Networks\" by Cheung et al.)\n\nA latent representation learnt via supervision would also work, as long as the latent distribution is from the location/scale distributional family. Hence, the work by Cheung et al or DC-IGN by Kulkarni et al would both be suitable for grounding SCAN. We concentrated on the unsupervised beta-VAE, since we wanted to minimise human intervention and bias.\n\n\n\n\n\nEvaluation of sample complexity (major):\n\n* One of the main benefits of SCAN is that it works with less training data. There should be a more systematic evaluation of this claim. 
In particular, I would like to see a Number of Examples vs Performance (Accuracy/Diversity) plot for both SCAN and the baselines.\n\nWe have added a plot with this information in the supplementary materials.\n\n\n\n\n\nMinor questions/comments/concerns:\n\n* What do the logical operators learn that the hand-specified versions do not?\n\nIn general we find that the learnt operators have better accuracy and diversity, achieving 0.79 (learnt) vs 0.54 (hand crafted) accuracy (higher is better) and 1.05 (learnt) vs 2.03 (hand crafted) diversity (lower is better) scores. We have added a corresponding comment in the paper.\n\n\n\n\n\n* Does training SCAN with the structure provided by the logical operators lead to improved performance?\n\nWe find that the logical operators do improve the diversity of samples since the training of the logical operators relies on the visual grounding that is exactly the same as SCAN uses. For example, we can recover the diversity of SCAN_R samples by training its recombination operators with a forward KL. We have added a note about this to the paper.\n\n\n\n\n* There seems to be a mistake in figure 5 unless I interpreted it incorrectly. The right side doesn't match the left side. During the middle stage of training object hues vary on the left, but floor color becomes less specific on the right. Shouldn't object color become less specific?\n\nThank you for pointing it out. We have fixed it.\n\n\nHappy holidays!", "Dear Reviewer,\n\nThank you for your feedback. Please find the responses to your points below:\n\n\n- The experimental evaluation is limited. They test their model only on a simple, artificial dataset. It would also be helpful to see a more extensive evaluation of the model's ability to learn logical recombination operators, since this is their main contribution.\n\nWe have now added an additional section demonstrating that SCAN significantly outperforms both JMVAE and TrELBO on CelebA - a significantly more challenging and realistic dataset.\n\n\n\n\n- The approach relies on first learning a pretrained visual VAE model, but it is unclear how robust this is. Should we expect visual VAEs to learn features that map closely to the visual concepts that appear in the text? What happens if the visual model doesn't learn such a representation? This again could be addressed with experiments on more challenging datasets.\n\nSCAN does indeed rely on learning disentangled visual representations as defined in Bengio (2013) and Higgins et al (2017). The performance of SCAN drops as the quality of disentanglement drops, as demonstrated by the additional SCAN_U baselines we have added to Table 1. It has, however, been shown that beta-VAE is able to learn disentangled representation on more challenging datasets (Higgins et al, 2017a, b), and we have shown that SCAN can significantly outperform both JMVAE and TrELBO on CelebA in the additional section we have added at the end of the paper. When training SCAN on CelebA, we show that SCAN is able to ignore symbolic (text) attributes that do not refer to anything meaningful in the image space, and ground the remaining attributes in whatever dictionary of visual primitives it has access to (not all of which map directly to the symbolic attributes). For example, the “attractiveness” attribute is subjective and has no direct mapping to a particular visual primitive, yet SCAN learns that in the CelebA dataset it tends to refer to young females. 
\n\n\n\n\n\n- The paper should explain the differences and trade offs between other multimodal VAE models (such as their baselines, JMVAE and TrELBO) more clearly. It should also clarify differences between the SCAN_U baseline and SCAN in the main text.\n\nWe have added the explanations in text. In summary, TrELBO tends to learn a flat and unstructured conceptual latent space, that results in very poor diversity of their samples. JMVAE, on the other hand, comes close to our approach in the limit where the text labels provide enough supervision to help disentangle the joint latent space q(z|x,y). In that case, the joint posterior q(z|x,y) and the symbolic posterior q(z|y) of JMVAE become equivalent to the visual posterior q(z|x) and symbolic posterior q(z|y) of SCAN, since both use forward KL to ground q(z|y). Hence, the biggest differences between our approach and JMVAE are: 1) we are able to learn disentangled visual primitives in an unsupervised manner while JMVAE relies on good structured labels to supervise this process; 2) we use a staged optimisation process, where we first learn vision, then concepts, while JMVAE performs joint optimisation. In practice we find that JMVAE training is more sensitive to architectural and hyperparameter choices and hence most of the time performs worse than SCAN.\n\nSCAN_U is a version of SCAN that grounds concepts in an unstructured visual latent space. We have now added extra experiments to show how the performance of SCAN drops as the level of visual disentanglement in SCAN_U is decreased. \n\n\n\n\n- The paper suggests that using the forward KL-divergence is important, but this does not seem to be tested with experiments.\n\nWe have added the additional baseline with reverse KL (SCAN_R) to Table 1 and showed that it has really bad diversity as predicted by our reasoning.\n\n\n\n\n- The three operators (AND, IN COMMON, and IGNORE) can easily be implemented as simple transformations of a (binary) bag-of-words representation. What about more complex operations, such as OR, which seemingly cannot be encoded this way?\n\nIn this work, we focus on operators that can be used to traverse the implicit hierarchy of concepts, and since OR is not one of such operators, it is outside the scope of the current paper. We agree that it is interesting to implement and study additional, more complex operations, which we leave for future work.\n\nHappy holidays!", "Dear Reviewer,\n\nThank you for your feedback. We have added an additional section describing the comparison of our approach to JMVAE and TrELBO on CelebA. Unlike the similar TrELBO experiments, we did minimal pre-processing of the dataset (only cropping to 64x64) and trained the models on the noisy attribute labels out of the box. As you may be aware, CelebA attributes are notoriously unreliable - many are subjective, refer to aspects of the images that get cropped away or are plain wrong. Our experiments demonstrate that SCAN significantly outperforms both baselines (but TrELBO in particular) and discovers a subset of attributes that refer to something meaningful based on the visual examples present in the dataset, while ignoring the uninformative attributes. SCAN is then able to traverse the individual directions of variation it has discovered and imagine both positive and negative examples of the attribute. This is unlike the baselines, which can only imagine positive examples after being trained on positive examples. 
\n\nWe hope that our experiments address your concerns about the technical innovation of our approach, since we demonstrate that currently SCAN is the only model that is able to learn compositional hierarchical visual concepts on real visual datasets.\n\nHappy holidays!\n" ]
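The SCAN objective discussed in the reviews and rebuttals above — a beta-VAE over symbols whose latent posterior is pulled toward the posterior of a pre-trained visual beta-VAE, with the direction of the KL (forward vs. reverse, cf. the SCAN_R baseline) controlling sample diversity — can be sketched in a few lines. The PyTorch snippet below shows only the loss assembly for diagonal Gaussian posteriors; the weights beta and lam and the exact KL direction are assumptions inferred from the discussion, not the authors' implementation.
```python
import torch

def diag_gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ),
    # summed over latent dimensions, averaged over the batch.
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0)
    return kl.sum(dim=1).mean()

def scan_loss(recon_loss, mu_y, logvar_y, mu_x, logvar_x, beta=1.0, lam=10.0):
    # Standard beta-VAE KL to the unit Gaussian prior on the symbol stream ...
    prior_kl = diag_gaussian_kl(mu_y, logvar_y,
                                torch.zeros_like(mu_y),
                                torch.zeros_like(logvar_y))
    # ... plus the alignment term: a forward KL pushes q(z|y) to cover q(z|x),
    # which the rebuttals credit for the diversity of SCAN's samples.
    align_kl = diag_gaussian_kl(mu_x, logvar_x, mu_y, logvar_y)
    return recon_loss + beta * prior_kl + lam * align_kl
```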
[ -1, -1, 5, 6, 7, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1 ]
[ "ByLBsIIBM", "r1m0QT9zz", "iclr_2018_rkN2Il-RZ", "iclr_2018_rkN2Il-RZ", "iclr_2018_rkN2Il-RZ", "SkPdrTcMG", "H1vrGEM-G", "rkzoyZW-M", "BkeoFCjgG" ]
iclr_2018_HJCXZQbAZ
Hierarchical Density Order Embeddings
By representing words with probability densities rather than point vectors, probabilistic word embeddings can capture rich and interpretable semantic information and uncertainty (Vilnis & McCallum, 2014; Athiwaratkun & Wilson, 2017). The uncertainty information can be particularly meaningful in capturing entailment relationships – whereby general words such as “entity” correspond to broad distributions that encompass more specific words such as “animal” or “instrument”. We introduce density order embeddings, which learn hierarchical representations through encapsulation of probability distributions. In particular, we propose simple yet effective loss functions and distance metrics, as well as graph-based schemes to select negative samples to better learn hierarchical probabilistic representations. Our approach provides state-of-the-art performance on the WordNet hypernym relationship prediction task and the challenging HyperLex lexical entailment dataset – while retaining a rich and interpretable probabilistic representation.
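The entailment criterion described in this abstract (and debated in the reviews below) reduces to a closed-form computation for diagonal Gaussians. A minimal NumPy sketch, assuming the thresholded-KL rule "D(f||g) < gamma implies f entails g"; the value gamma = 1900 is the validation-selected threshold the rebuttals mention, and the variable names are illustrative:
```python
import numpy as np

def kl_diag_gaussians(mu_f, var_f, mu_g, var_g):
    # Closed-form KL( N(mu_f, diag(var_f)) || N(mu_g, diag(var_g)) ).
    return 0.5 * np.sum(np.log(var_g / var_f)
                        + (var_f + (mu_f - mu_g) ** 2) / var_g
                        - 1.0)

def entails(mu_f, var_f, mu_g, var_g, gamma=1900.0):
    # Asymmetric by construction: entails(f, g) and entails(g, f) can differ,
    # e.g. the rebuttal's KL('object' | 'physical_entity') = 152 vs. 6618.
    return kl_diag_gaussians(mu_f, var_f, mu_g, var_g) < gamma
```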
accepted-poster-papers
This paper marries the idea of Gaussian word embeddings and order embeddings, by imposing order among probabilistic word embeddings. Two reviewers vote for acceptance, and one finds the novelty of the paper incremental. The reviewer stuck to this view even after the rebuttal, but acknowledges the improvement in results. The AC read the paper and agrees that the novelty is somewhat limited; however, the idea is still quite interesting, and the results are promising. The AC would have liked to see more experiments on other tasks originally presented by Vendrov et al. Overall, this paper is slightly over the bar.
val
[ "HyGYccUxz", "SyRynl9eM", "rk63KZixz", "BJMKZCoXM", "BJ73RqEXf", "ryk5RqN7G", "SJIsJCXQG", "ryeRnpmmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper presents a method for hierarchical object embedding by Gaussian densities for lexical entailment tasks.Each word is represented by a diagonal Gaussian and the KL divergence is used as a directional distance measure. if D(f||g) < gamma then the concept represented by f entails the concept represented by g.\n\nThe main technical difference of the present work compared from the main prior work (Vendrov, 2015) is that in addition to mean vector representation they use here also a variance component. The main modeling challenge here to to define a good directional measure that can be suitable for lexical entailment. in Vendrov work they defined a partial ordering. Here, the KL is not symmetric but its directional aspect is not significant.\nFor example if we set all the variances to be a unit matrix than the KL is collapsed to be a simple symmetrical Euclidean distance. We can also see from Table 1 that if we replace KL by its symmetrical variant we get similar results. Hence, I was not convinced that the propose KL+Gaussian modeling is suitable for directional relations.\n\nThe paper also presents several methods for negative samplings and according to table 4 there is a lot of performance variability based on the method that is used for selecting negative sampling. I find this component of the proposed algorithm very heuristic.\n\nTo summarize, I don't think there is enough interesting novelty in this paper. If the focus of the paper is on obtaining good entailment results, maybe an NLP conference can be a more suitable venue.\n", "The paper presents a study on the use of density embedding for modeling hierarchical semantic relations, and in particular on the hypernym one. The goal is to capture hypernyms of some synsets, even if their occurrence is scarce on the training data.\n+++pros: 1) potentially a good idea, capable of filling an ontology of relations scarcely present in a given repository 2) solid theoretical background, even if no methodological novelty has been introduced (this is also a cons!)\n---cons: 1) Badly presented: the writing of the paper fails in let the reader aware of what the paper actually serves\n\nCOMMENTS:\nThe introduction puzzled me: the authors, once they stated the problem (the scarceness of the hypernyms' occurrences in the texts w.r.t. their hyponyms), proposed a solution which seems not to directly solve this problem. So I suggest the authors to better explain the connection between the told problem and their proposed solution, and how this can solve the problem.\n\nThis aspect is also present in the experiments section, since it is not possible to understand how much the problem (the scarceness of the hypernyms) is present in the HYPERLEX dataset.\n\nHow the 4000 hypernyms have been selected? Why a diagonal covariance has been estimated, and not a full covariance one? \n\nn Figure 4 middle, it is not clear whether the location and city concepts are intersecting the other synsets. It shouldn't be, but the authors should spend a little on this.\n\nApart from these comments, I found the paper interesting especially for the big amount fo comparisons carried out. \n\nAs a final general comment, I would have appreciated a paper more self explanative, without referring to the paper [Vilnis & McCallum, 2014] which makes appear the paper a minor improvement of what it is actually. ", "The paper introduces a novel method for modeling hierarchical data. 
The work builds on previous approaches, such as Vilnis and McCallum's Word2Gauss and Vendrov's Order Embeddings, to establish a partial order over probability densities via encapsulation, which allows it to model hierarchical information. The aim is to learn embeddings from supervised structured data, such as WordNet. The work also investigates various schemes for selecting negative samples. The evaluation consists of hypernym detection on WordNet and graded lexical entailment, in the shape of HyperLex. This is good work: it is well written, the experiments are thorough and the proposed method is original and works well.\n\nSection 3 could use some more signposting. Especially for 3.3 it would be good to explain (either at the beginning of section 3, or the beginning of section 3.3) why these measures matter and what is going to be done with them.\n\nIt's good that LEAR is mentioned and compared against, even though it was very recently published. Please do note that the authors' names are misspelled: Vuli\\'c not Vulic, Mrk\\v{s}i\\'c instead of Mrksic.\n\nIf I am not mistaken, the Vendrov WordNet test set is a set of positive pairs. I would like to see more details on how the evaluation is done here: presumably, the lower I set the threshold, the higher my score? Or am I missing something?\n\nIt would be useful to describe exactly the extent to which supervision is used - the method only needs positive and negative links, and does not require any additional order information (i.e., WordNet strictly contains more information than what is being used).\n\nI don't see what Socher et al. (2013) has to do with the loss in equation (7). Or did they invent the margin loss?\n\nWord2gauss also evaluates on similarity and relatedness datasets. Did you consider doing that here too?\n\n\"hypothesis proposed by Santus et al. which says\" is not a valid reference.", "We have incorporated the reviewers' comment into our new draft. The changes are as follow:\n\nSection 1. Introduction\nWe further described the distinction between our work and existing work and made our contributions more explicit.\n\nSection 4. Experiments\nSection 4.1 We added more explanation on the hypernym prediction test set.\nSection 4.2 We explained why ELK can work reasonably well on hypernym prediction. \nSection 4.4 We elaborated on the negative sample selection and discussed the results from using different combinations. We also added the result from using a symmetric model to Table 3.\n\nIn addition, we slightly changed the title and added additional keywords to facilitate better search for our paper.", "[novelty] Aside from the foundational importance of asymmetry in divergences for probabilistic order embeddings, there is much other interesting novelty in the paper too. We propose new training procedures that learn highly effective Gaussian embeddings in the supervised setting. The new changes include (1) using max-margin loss (Equation 7) instead of rank loss (Equation 1); (2) using a divergence threshold to induce soft partial orders and prevent unnecessary penalization; (3) a new scheme to select negative samples for max-margin loss; (4) investigating other hyperparameters that are important for density encapsulation such as adjusting variance scale; (5) proposing and investigating general alpha divergences as metrics between word densities. 
This direction is significant — analogous to exploring Renyi divergences instead of variational KL divergences for approximate Bayesian inference.\n\nIn general, despite the great promise of probabilistic embeddings, such approaches have not been widely explored, and their use in order embeddings -- where they are perhaps most natural -- is essentially uncharted territory.\n\nThe empirical results are generally quite strong. Not all results are positive (for example, we found alpha divergences to not improve on KL), but there is certainly great value in honestly reporting conceptually interesting experiments, especially if there are negative results, which tend to go under-reported. Many of the results (as discussed above) are also positive.\n\n[Effects of new negative sampling] Note that the traditional approach of Vendrov 2015 uses the sampling scheme S1. From Table 4, using S1 alone results in a HyperLex score of at most 0.527. However, using our proposed approaches (S1 together with S2, S3, S4 in different heuristic combinations) results in a significant increase in performance in most (if not all) cases, allowing us to achieve a score of 0.590 (~12% increase). We would like to point out that this is a strength of our proposed negative sampling methods (S2, S3, S4), since most combinations provide an increase in performance. \n", "We thank the reviewer for the thoughtful comments.\n\nTo address the importance of directionality, we clarify the differences between using symmetric measures to train versus to evaluate. We also highlight and explain the major empirical differences for the purpose of order embeddings.\n\n[Symmetric Measure for Training] In Section 3.3.2: Symmetric Divergence, we describe why symmetric measures can be used to train embeddings that reflect hierarchy:\n\n“Intuitively, the asymmetric measures should be more successful at training density order embeddings. However, a symmetric measure can result in the encapsulation order as well since a general entity often has to minimize the penalty with many specific elements and consequently ends up having a broad distribution.”\n\nFor further explanation, consider a scenario where we have true relationships a -> c and b -> c and use a symmetric training measure to train the embeddings (x -> y denotes x entails y). The desired outcome is that the distribution of ‘c’ encapsulates the distributions of ‘a’ as well as ‘b’. To satisfy this, the distribution of ‘c’ ends up being broad, encompassing both ‘a’ and ‘b’, in order to lower the average loss.\n\nWe emphasize that at evaluation time, if we use symmetric measures to generate entailment scores, we end up discarding the directionality and thus hurting the performance. This brings us to the next point, where we discuss the empirical differences, underscoring the foundational importance of an asymmetric measure.\n\n[KL for directionality]\n\nThe directionality of KL is crucial for capturing hierarchical information. In our experiments, when our model is trained with the symmetric ELK distance, using ELK to generate encapsulation scores results in poor performance of 0.455 whereas using KL yields 0.532. This difference is described in Section 4.4.\n\nWith regards to the performance of ELK in Table 1, negative log ELK, a symmetric distance metric, can generate meaningful scores for word pairs that do not have directional relationships, such as ‘animal’ and ‘location’. 
This is because these non-relationship density pairs do not significantly overlap, so an ELK-based metric would yield a high distance value. However, this metric is not suitable for capturing the directionality of relationships such as ‘animal’ and ‘dog’.\n\nFor example, Table 2, which illustrates the values of KL divergences between many word pairs, underscores the foundational importance of KL’s directionality. For instance, KL(‘object’ | ‘physical_entity’) = 152, whereas in the reverse case KL(‘physical_entity’ | ‘object’) = 6618. Based on the prediction threshold of 1900 (selected via the validation set), we correctly predict that ‘object’ entails ‘physical_entity’ (since 152 << 1900) but ‘physical_entity’ does not entail ‘object’ (6618 >> 1900). The entailment score function (negative KL) also nicely assigns high scores to the true hypernym pairs such as (‘object’ -> ‘physical_entity’, with score -152) and low scores to the non-hypernym pairs such as (‘physical_entity’ -> ‘object’, with score -6618). This behavior is important for measuring the degree of lexical entailment. If the evaluation divergence were symmetric, (‘object’ -> ‘physical_entity’) and (‘physical_entity’ -> ‘object’) would have the same score, which is very undesirable, since these pairs do not have the same degree of entailment.\n\n[unit variance] It is true that if the variance components are all the same (being 1), the KL becomes symmetric. However, our learned distributions tend to have very different covariance matrices. As Figure 4 shows, the log det(Sigma) values vary markedly among different concepts. The concepts that are more general, such as ‘physical_entity’, have a very high log(det(Sigma)) of -219.67 compared to specific concepts such as ‘city’ with log(det(Sigma)) = -261.89. \n\n
Such embeddings are only considered in a small handful of papers, and in these papers there is no serious consideration of where these embeddings would be most natural, such as in ordered representations.\n\nWe would also like to emphasize that we do introduce new methodology in our paper. We propose new training procedures that learn highly effective Gaussian embeddings in the supervised setting. The new changes include (1) using a max-margin loss (Equation 7) instead of a rank loss (Equation 1); (2) using a divergence threshold to induce soft partial orders and prevent unnecessary penalization; (3) a new scheme to select negative samples for the max-margin loss; (4) investigating other hyperparameters that are important for density encapsulation, such as adjusting the variance scale; (5) proposing and investigating general alpha divergences as metrics between word densities. This direction is significant — analogous to exploring Renyi divergences instead of variational KL divergences for approximate Bayesian inference.\nWe thank the reviewer for the questions/comments, and we have modified the introduction to make our contributions clearer.\n\n[Hyperlex] HyperLex is an evaluation dataset where the instances have some lexical relationships. Instances that have true hypernym relationships have high scores, and instances without true hypernym relationships have lower scores. \n\n[Test Set] 4000 hypernym pairs are selected randomly from the transitive closure of WordNet. The random split is 4000 for the validation set, 4000 for the test set, and the rest for training. \n\n[Diagonal Covariance] The diagonal covariance enables fast training. The complexity of the objective in d dimensions is O(d) in the diagonal case as opposed to O(d^3) in the full covariance case. This is due to the inverse term in our divergences. Note that we use a full diagonal covariance rather than a scaled identity. Relative to standard word embeddings, such as word2vec, a probabilistic density, even with a diagonal covariance, is highly flexible.\n\n[Figure 4] Yes, we can see that ‘location’ and ‘living_thing’ are visually overlapping. However, KL(‘location’ | ‘living_thing’) = 6794 and KL(‘living_thing’ | ‘location’) = 4324, both of which are higher than the threshold of 1900 (picked using the validation set), which means that we do not predict that these words entail one another in either direction. There are some mistakes, however, such as for ‘location’ | ‘object’, which can happen when there are not enough negative examples of the pair (‘location’ (not) > ‘object’) to contrast in the training. Our proposed negative sampling approach helps alleviate this problem.", "Thank you for your thoughtful and supportive comments. Please see our responses below. \n\n[Misspelled names and reference errors] Thank you for pointing this out. We have corrected the spelling and reference errors. \n\n[Equation 7] Socher et al. 2013 and Vendrov 2015 use this loss in their tasks.\n\n[Effects of Divergence Threshold] \n\nVendrov’s WordNet test set consists of 4000 positive pairs as well as 4000 negative pairs, where the negative pairs are selected using random sampling. We fix the same test set across all experiments. \n\nIn the Appendix, Section A.2.1, we show the effects of the divergence threshold (gamma) on the test accuracy. 
Figure 5 shows that there is an optimal gamma value which yields the best scores on hypernym prediction and lexical entailment. The intuition is as follows: zero gamma corresponds to penalizing any two distributions that do not perfectly overlap (since D is lowest if and only if the two distributions are equal). This behaviour is undesirable: if a distribution is correctly encapsulated by a parent distribution, we should not further penalize the embeddings. High gamma corresponds to low penalization among many distribution pairs — a gamma value that is “too high” is lenient, since there might not be sufficient penalization in the loss function to help learn optimal embeddings. \n\n[Similarity and Relatedness] \n\nWe believe that similarity and relatedness would be more suitable for word embeddings trained on word occurrences in a natural corpus (word2vec, GloVe, word2gauss), because our embeddings model the hierarchical structure rather than the semantics of concepts. In Section 5, we discussed the future direction where we plan to enhance word embeddings (Gaussian word distributions) by combining the supervised training (using labels from WordNet, for instance) with the unsupervised training on a text corpus. The ideal scenario would be that the Gaussian embeddings have high word similarity scores and also exhibit hierarchical structure that yields good hypernym prediction and graded lexical entailment scores. \n" ]
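The max-margin loss with a divergence threshold that the rebuttals above keep referring to (their Equation 7) is not quoted verbatim in this thread, so the following PyTorch sketch is a hedged reconstruction from the discussion: positive pairs are penalized only for divergence above gamma, and negative pairs are pushed beyond a margin. The margin value is a placeholder, not a reported hyperparameter.
```python
import torch

def thresholded_div(d, gamma=1900.0):
    # Soft partial order: no penalty once the divergence drops below gamma,
    # so correctly encapsulated pairs are not penalized further.
    return torch.clamp(d - gamma, min=0.0)

def margin_loss(d_pos, d_neg, gamma=1900.0, margin=2000.0):
    # d_pos / d_neg: divergences D(specific || general) for true hypernym
    # pairs and for negatively sampled pairs, respectively.
    pos_term = thresholded_div(d_pos, gamma)
    neg_term = torch.clamp(margin - thresholded_div(d_neg, gamma), min=0.0)
    return (pos_term + neg_term).mean()
```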
[ 4, 6, 8, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJCXZQbAZ", "iclr_2018_HJCXZQbAZ", "iclr_2018_HJCXZQbAZ", "iclr_2018_HJCXZQbAZ", "ryk5RqN7G", "HyGYccUxz", "SyRynl9eM", "rk63KZixz" ]
iclr_2018_BkN_r2lR-
Identifying Analogies Across Domains
Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross-domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity often does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets, i.e., for every image from domain A finding an analogous image in domain B. We present a matching-by-synthesis approach, AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be solved iteratively, and as the alignment improves, the unsupervised translation function reaches quality comparable to full supervision.
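To help picture the "matching-by-synthesis" idea in this abstract before reading the reviews: the discussion below describes jointly optimizing a translation function T with per-exemplar match weights alpha that are alternately updated until they become one-hot. The sketch is a loose reconstruction under those assumptions — soft assignment rows and an L_1 exemplar loss — and omits the distribution-matching (GAN) and cycle terms that the full AN-GAN objective also contains.
```python
import torch
import torch.nn.functional as F

def exemplar_matching_loss(T, x_batch, y_all, alpha_logits):
    # alpha_logits: (num_x, num_y) learnable scores whose softmaxed rows act
    # as soft matches; alternating alpha/T updates push them toward one-hot.
    alpha = F.softmax(alpha_logits, dim=1)            # (num_x, num_y)
    mapped = T(x_batch)                               # (num_x, C, H, W)
    targets = torch.einsum('ij,jchw->ichw', alpha, y_all)
    return F.l1_loss(mapped, targets)
```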
accepted-poster-papers
This paper builds on top of CycleGAN ideas; the main contribution is to jointly optimize the domain-level translation function with an instance-level matching objective. Initially the paper received two negative reviews (4, 5) and a positive one (7). After the rebuttal and several rounds of back-and-forth between the first reviewer and the authors, the reviewer was finally swayed by the new experiments. While not officially changing the score, the reviewer recommended acceptance. The AC agrees that the paper is interesting and of value to the ICLR audience.
train
[ "HyJww3cEz", "BkyDnj5VG", "ByECSWv4z", "SkHatuolz", "HJ08-bCef", "ryhcYB-bG", "rJ6aA85QG", "Hyj4tk1GM", "Sk0k9JkfG", "rklEiy1Mz", "Byqw2JyGf" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "I thank the authors for thoroughly responding to my concerns. The 3D alignment experiment looks great, and indeed I did miss the comment about the cell bio experiment. That experiment is also very compelling.\n\nI think with these two experiments added to the revision, along with all the other improvements, the paper is now much stronger and should be accepted!", "We are deeply thankful to AnonReviewer2 for holding an open discussion and for acknowledging the significance of the proposed problem setting, the work’s novelty, and the quality of the experiments.\nWe are also happy that AnonReviewer2 found the list of possible applications, provided in reply to the challenge posted in the review, to be exciting. We therefore gladly accept the new challenge that was set, to demonstrate the success of our method on one of the proposed applications in the list.\nSince the reviewer explicitly requested 3D point cloud matching, we have evaluated our method on this task. It should be noted that our method was never tested before in low-D settings, so this experiment is of particular interest.\nSpecifically, we ran the experiment using the Bunny benchmark, exactly as is shown in “Discriminative optimization: theory and applications to point cloud registration”, CVPR’17 available as an extended version at https://arxiv.org/pdf/1707.04318.pdf, Sec. 6.2.3 . In this benchmark, the object is rotated by a random degree, and we tested the success rate of our model in achieving alignment for various ranges of rotation angles. \nFor both CycleGAN and our method, the following architecture was used. D is a fully connected network with 2 hidden layers, each of 2048 hidden units, followed by BatchNorm and with Leaky ReLU activations. The mapping function is a linear affine matrix of size 3 * 3 with a bias term. Since in this problem, the transformation is restricted to be a rotation matrix, in both methods we added a loss term that encourages orthonormality of the weights of the mapper. Namely, ||WW^T-I||, where W are the weights of our mapping function.\nThe table below depicts the success rate for the two methods, for each rotation angle bin, where success is defined in this benchmark as achieving an RMSE alignment accuracy of 0.05.\nRotation angle | CycleGAN | Ours\n============================\n0-30 0.12000 1.00000 \n30-60 0.12500 1.00000 \n60-90 0.11538 0.88462 \n90-120 0.07895 0.78947 \n120-150 0.05882 0.64706 \n150-180 0.10000 0.76667\n \nComparing to the results reported in Fig. 3 of https://arxiv.org/pdf/1707.04318.pdf, middle column, our results seem to significantly outperform the methods presented there at large angles. Therefore, the proposed method outperforms all baselines and, once again, proves to be effective as well as broadly applicable.\nP.S. It seems that the comment we posted above, which was titled “A real-world application of our method in cell biology” (https://openreview.net/forum?id=BkN_r2lR-&noteId=rJ6aA85QG), went unnoticed. In a way, it already addressed the new challenge by presenting quantitative results on a real-world dataset for which there are no underlying ground truth matches. ", "Thank you for your detailed reply. I still think the paper could be much improved with more extensive experiments and better applications. However, I agree that the problem setting is interesting and novel, the method is compelling, and the experiments provide sufficient evidence that the method actually works. 
Therefore I would not mind seeing this paper accepted into ICLR, and, upon reflection, I think this paper is hovering around the acceptance threshold.\n\nI really like the real-world examples listed! I would be excited to see the proposed method applied to some of these problems. I think that would greatly improve the paper. (Although, I would still argue that several of the listed examples are cases where the data would naturally come in a paired format, and direct supervision could be applied.)\n\nIt's a good point that previous unsupervised, cross domain GANs were also evaluated on contrived datasets with exact matches available at training time. However, I'd argue that these papers were convincing mainly because of extensive qualitative results on datasets without exact matches. Those qualitative results were enough to demonstrate that unpaired translation is possible. The current paper aims to go further, and show that the proposed method does _better_ at unpaired translation than previous methods. Making a comparison like this is harder than simply showing that the method can work at all, and I think it calls for quantitative metrics on real unpaired problems (like the examples listed in the rebuttal).\n\nThere are a number of quantitative ways to evaluate performance on datasets without exact matches. First, user studies could be run on Mechanical Turk. Second, unconditional metrics could be evaluated, such as Inception score or moment matching (do the statistics of the output distribution match the statistics of the target domain?).\n\nHowever, I actually think it is fine to evaluate on ground truth matches as long as the training data is less contrived. For example, I would find it compelling if the system were tested on 3D point cloud matching, even if the training data contains exact matches, as long as there is no trivial way of finding these matches.\n\n", "This paper presents an image-to-image cross domain translation framework based on generative adversarial networks. The contribution is the addition of an explicit exemplar constraint into the formulation which allows best matches from the other domain to be retrieved. The results show that the proposed method is superior for the task of exact correspondence identification and that AN-GAN rivals the performance of pix2pix with strong supervision.\n\n\nNegatives:\n1.) The task of exact correspondence identification seems contrived. It is not clear which real-world problems have this property of having both all inputs and all outputs in the dataset, with just the correspondence information between inputs and outputs missing.\n2.) The supervised vs unsupervised experiment on Facades->Labels (Table 3) is only one scenario where applying a supervised method on top of AN-GAN’s matches is better than an unsupervised method. More transfer experiments of this kind would greatly benefit the paper and support the conclusion that “our self-supervised method performs similarly to the fully supervised method.” \n\nPositives:\n1.) The paper does a good job motivating the need for an explicit image matching term inside a GAN framework\n2.) The paper shows promising results on applying a supervised method on top of AN-GAN’s matches.\n\nMinor comments:\n1. The paper sometimes uses L1 and sometimes L_1, it should be L_1 in all cases.\n2. DiscoGAN should have the Kim et al citation, right after the first time it is used. 
I had to look up DiscoGAN to realize it is just Kim et al.", "The paper presents a method for finding related images (analogies) from different domains based on matching-by-synthesis. The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not). The algorithm is tested on three datasets.\n\nWhile the approach has some strong positive points, such as good experiments and theoretical insights (the idea of matching by synthesis and the proposed loss, which is novel and combines the proposed concepts), the paper lacks clarity and sufficient details.\n\nInstead of the longer intro and related work discussion, I would prefer to see a figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments. Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level. It is hard to understand sufficiently well what the formalism means without more insight.\n\nAlso, the experiments need more details. For example, it is not clear what the numbers in Table 2 mean.\n\n", "This paper adds an interesting twist on top of recent unpaired image translation work. A domain-level translation function is jointly optimized with an instance-level matching objective. This yields the ability to extract corresponding image pairs out of two unpaired datasets, and also to potentially refine unpaired translation by subsequently training a paired translation function on the discovered matches. I think this is a promising direction, but the current paper has unconvincing results, and it’s not clear if the method is really solving an important problem yet.\n\nMy main criticism is with the experiments and results. The experiments focus almost entirely on the setting where there actually exist exact matches between the two image sets. Even the partial matching experiments in Section 4.1.2 only quantify performance on the images that have exact matches. This is a major limitation since the compelling use cases of the method are in scenarios where we do not have exact matches. It feels rather contrived to focus so much on the datasets with exact matches since: 1) these datasets actually come as paired data and, in actual practice, supervised translation can be run directly, 2) it’s hard to imagine datasets that have exact but unknown matches (I welcome the authors to put forward some such scenarios), 3) when exact matches exist, simpler methods may be sufficient, such as matching edges. There is no comparison to any such simple baselines.\n\nI think finding analogies that are not exact matches is much more compelling. Quantifying performance in this case may be hard, and the current paper only offers a few qualitative results. I’d like to see far more results, and some attempt at a metric. One option would be to run user studies where humans judge the quality of the matches. The results shown in Figure 2 don’t convince me, not just because they are qualitative and few, but also because I’m not sure I even agree that the proposed method is producing better results: for example, the DiscoGAN results have some artifacts but capture the texture better in row 3.\n\nI was also not convinced by the supervised second step in Section 4.3. 
Given that the first step achieves 97% alignment accuracy, it’s no surprise that running an off-the-shelf supervised method on top of this will match the performance of running on 100% correct data. In other words, this section does not really add much new information beyond what we could already infer given that the first stage alignment was so successful.\n\nWhat I think would be really interesting is if the method can improve performance on datasets that actually do not have ground truth exact matches. For example, the shoes and handbags dataset or, even better, domain adaptation datasets like sim-to-real.\n\nI’d like to see more discussion of why the second stage supervised problem is beneficial. Would it not be sufficient to iterate the alpha and T iterations enough times until alpha is one-hot and T is simply training against a supervised objective (Equation 7)?\n\nMinor comments:\n1. In the intro, it would be useful to have a clear definition of “analogy” for the present context.\n2. Page 2: a link should be provided for the Putin example, as it is not actually in Zhu et al. 2017.\n3. Page 3: “Weakly Supervised Mapping” — I wouldn’t call this weakly supervised. Rather, I’d say it’s just another constraint / prior, similar to cycle-consistency, which was referred to under the “Unsupervised” section.\n4. Page 4 and throughout: It’s hard to follow which variables are being optimized over when. For example, in Eqn. 7, it would be clearer to write out the min over optimization variables.\n5. Page 6: The Maps dataset was introduced in Isola et al. 2017, not Zhu et al. 2017.\n6. Page 7: The following sentence is confusing and should be clarified: “This shows that the distribution matching is able to map source images that are semantically similar in the target domain.”\n7. Page 7: “This shows that a good initialization is important for this task.” — Isn’t this more than initialization? Rather, removing the distributional and cycle constraints changes the overall objective being optimized.\n8. In Figure 2, are the outputs the matched training images, or are they outputs of the translation function?\n9. Throughout the paper, some citations are missing enclosing parentheses.
These expressions are sparse matrices, denoting 3k and 7k cells in the two samples and expressions of around 32k genes. We randomly subsampled the 7k cells from donor B to 3k and reduced the dimensions of each sample from 32k to 100 via PCA. Then, we applied our method in order to align the expressions of the two donors (find a transformation) and match the cell samples in each. Needless to say, there is no supervision in the form of matching between the cells of the two donors, and the order of the samples is arbitrary. However, we can expect such matches to exist. \n\nWe compare three methods:\nThe mean distance between a sample in set A and a sample in set B (identity transformation). \nThe mean distance after applying a CycleGAN to compute the transformation from A to B (CG for CycleGAN).\nThe mean distance after applying our complete method.\n\nThe mean distance with the identity mapping is 3.09, CG obtains 2.67, and our method 1.18. The histograms of the distances are shown at the anonymous URL:\nhttps://imgur.com/xP3MVmq\n\nWe see great potential in further applying our method in biology, with applications ranging from interspecies biological network alignment [2] to drug discovery [3], i.e. aligning expression signatures of molecules to those of diseases.\n \n[1] Zheng et al. “Massively parallel digital transcriptional profiling of single cells.” Nature Communications, 2017.\n\n[2] Singh, Rohit, Jinbo Xu, and Bonnie Berger. \"Global alignment of multiple protein interaction networks with application to functional orthology detection.\" Proceedings of the National Academy of Sciences 105.35 (2008): 12763-12768.\n\n[3] Gottlieb et al. \"PREDICT: a method for inferring novel drug indications with application to personalized medicine.\" Molecular Systems Biology 7.1 (2011): 496.\n", "We thank you for highlighting the novelty and successful motivation of the exemplar-based matching loss. \n\nWe think that the exact-analogy problem is very important. Please refer to our comment to AnonReviewer2 for an extensive discussion. \n\nFollowing your request, we have added AN-GAN supervised experiments for the edges2shoes and edges2handbags datasets. As in the Facades case, the results are very good.\n\nThank you for highlighting the inconsistency in the L_1 notation and the confusing reference. These have been fixed in the revised version.\n", "Thank you for your positive feedback on the theoretical and experimental merits of this paper.\n\nFollowing your feedback on the clarity of presentation of the method, we included a diagram (including example images) illustrating the algorithm. To help keep the length under control, we shortened the introduction and related work section as you suggested.\n\nWe further clarified the text of the experiments. Specifically, the numbers in Table 2 are the top-1 accuracy for both directions (A to B and B to A) when 0%, 10% and 25% of examples do not have matches in the other domain. If some details remain unclear, we would be glad to clarify them.\n\nWe hope that your positive opinion of the content of the paper, together with the improvement in clarity of presentation, will merit an acceptance.\n
Optimization variables have been explicitly added to equations.\n5. The Maps dataset citation was changed to Isola et al. 2017.\n6. Removed the confusing comment: “This shows that the distribution matching is able to map source images that are semantically similar in the target domain.”\n7. “This shows that a good initialization is important for this task.”: one way of looking at it is that the exemplar loss optimizes the matching problem that we care about but is a hard optimization task. The two other losses are auxiliary losses that help the optimization converge. A clarification has been added in the text.\n8. The results shown for inexact matching are as follows: for alpha iterations and AN-GAN we show the matches recovered by our methods; the DiscoGAN results are the outputs of the translation function.\n9. Parentheses added to all citations.\n\nWe hope that this has convinced the reviewer of the importance of this work, and we are keen to answer any further questions.\n", "Thank you for the detailed and constructive review. It highlighted motivation and experimental protocols that were further clarified in the revised version.\n\nThis paper is focused on exact analogy identification. A core question in the reviews was the motivation for the scenario of exact matching, and we were challenged by the reviewer to find real-world applications for it. \n\nWe believe that finding exact matches is an important problem that occurs in multiple real-world settings. Exact or near-exact matching occurs in: \n* 3D point cloud matching.\n* Matching between different cameras panning the same scene in different trajectories (hard if they are in different modalities such as RGB and IR).\n* Matching between the audio samples of two speakers uttering the same set of sentences.\n* Two repeats of the same scripted activity (recipe, physics experiment, theatrical show).\n* Two descriptions of the same news event in different styles (at the sentence level or at the story level).\n* Matching parallel dictionary definitions and visual collections.\n* Learning to play one racket sport after knowing how to play another, building on the existing set of acquired movements and skills.\n\nIn all these cases, there are exact or near-exact analogies that could play a major role in forming unsupervised links between the domains.\n \nWe note that on a technical level, most numerical benchmarks in cross-domain translation are already built using exact matches, and many of the unsupervised techniques could already be employing this information, even if implicitly. We show that our method is more effective at this than other methods.\n\nOn a more theoretical level, cognitive theories of analogy-based reasoning mostly discuss exact analogies from memory (see, e.g., G. Fauconnier and M. Turner, “The Way We Think”, 2002). For example, a new situation is dealt with by retrieving and adopting a motor action that was performed before. Here, the chances of finding such analogies are high since the source domain is heavily populated due to life experiences. \n\nRegarding experiments: we believe that in some cases the requests are conflicting: we cannot provide numerical results in settings for which there are no analogies and no metrics for success. We provide a large body of experiments for exact matches and show that our method far surpasses everything else. We have compared with multiple baselines covering all the reasonable successful approaches for matching between domains. 
\n\nThe experiments regarding cases without exact matches are, admittedly, less extensive; they were added for completeness and are not the focus of this paper.\n\nThe reviewer wondered if matching would likely work better with simpler methods. Our baselines test precisely this possibility and show that the simpler methods do not perform well. Specifically, edge-based matches are well covered by the more general VGG feature baseline (which also uses low-level feature maps, not just fc7). AN-GAN easily outperformed this method. Even if it is possible to hand-craft a successful method for each task individually, such hand-crafted features are unlikely to generalize as well as the multi-scale VGG features or AN-GAN.\n\nWe added further clarification in the paper on the motivation for the second “supervised” step. In unsupervised semantic matching, larger neural architectures have been shown, both theoretically and practically, to be less successful (due to overfitting and greater difficulty in recovering the correct transformation). The distribution matching loss function (e.g. CycleGAN) is adversarial and is therefore less stable, and it might not optimize the quantity we care about (e.g. L1/L2 loss). Once the datasets are aligned and analogies are identified, however, the cross-domain translation becomes a standard supervised deep learning problem where large architectures do well and standard loss functions can be used. This is the reason for the two steps. It might be possible to incorporate the larger architecture into the alpha-iterations, but it is non-trivial and we did not find it necessary.\n
[ -1, -1, -1, 7, 5, 4, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "BkyDnj5VG", "ByECSWv4z", "Byqw2JyGf", "iclr_2018_BkN_r2lR-", "iclr_2018_BkN_r2lR-", "iclr_2018_BkN_r2lR-", "iclr_2018_BkN_r2lR-", "SkHatuolz", "HJ08-bCef", "ryhcYB-bG", "ryhcYB-bG" ]
iclr_2018_B17JTOe0-
Emergence of grid-like representations by training recurrent neural networks to perform spatial localization
Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.
accepted-poster-papers
This work shows how activation patterns of units reminiscent of grid and border cells emerge in RNNs trained on navigation tasks. While the ICLR audience is not mainly focused on neuroscience, the findings of the paper are quite intriguing, and grid cells are sufficiently well-known and "mainstream" that this may interest many people.
train
[ "SkDHZUXlG", "rk3jvePlf", "HyMQMl9eG", "By7T3PpmM", "HJ8d9D6Xz", "ByubuDpmM", "rktarP6mG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors train an RNN to perform deduced reckoning (ded reckoning) for spatial navigation, and then study the responses of the model neurons in the RNN. They find many properties reminiscent of neurons in the mammalian entorhinal cortex (EC): grid cells, border cells, etc. When regularization of the network is not used during training, the trained RNNs no longer resemble the EC. This suggests that those constraints (lower overall connectivity strengths, and lower metabolic costs) might play a role in the EC's navigation function. \n\nThe paper is overall quite interesting and the study is pretty thorough: no major cons come to mind. Some suggestions / criticisms are given below.\n\n1) The findings seem conceptually similar to the older sparse coding ideas from the visual cortex. That connection might be worth discussing because removing the regularizing (i.e., metabolic cost) constraint from your RNNS makes them learn representations that differ from the ones seen in EC. The sparse coding models see something similar: without sparsity constraints, the image representations do not resemble those seen in V1, but with sparsity, the learned representations match V1 quite well. That the same observation is made in such disparate brain areas (V1, EC) suggests that sparsity / efficiency might be quite universal constraints on the neural code.\n\n2) The finding that regularizing the RNN makes it more closely match the neural code is also foreshadowed somewhat by the 2015 Nature Neuro paper by Susillo et al. That could be worthy of some (brief) discussion.\n\nSussillo, D., Churchland, M. M., Kaufman, M. T., & Shenoy, K. V. (2015). A neural network that finds a naturalistic solution for the production of muscle activity. Nature neuroscience, 18(7), 1025-1033.\n\n3) Why the different initializations for the recurrent weights for the hexagonal vs other environments? I'm guessing it's because the RNNs don't \"work\" in all environments with the same initialization (i.e., they either don't look like EC, or they don't obtain small errors in the navigation task). That seems important to explain more thoroughly than is done in the current text.\n\n4) What happens with ongoing training? Animals presumably continue to learn throughout their lives. With on-going (continous) training, do the RNN neurons' spatial tuning remain stable, or do they continue to \"drift\" (so that border cells turn into grid cells turn into irregular cells, or some such)? That result could make some predictions for experiment, that would be testable with chronic methods (like Ca2+ imaging) that can record from the same neurons over multiple experimental sessions.\n\n5) It would be nice to more quantitatively map out the relation between speed tuning, direction tuning, and spatial tuning (illustrated in Fig. 3). Specifically, I would quantify the cells' direction tuning using the circular variance methods that people use for studying retinal direction selective neurons. And I would quantify speed tuning via something like the slope of the firing rate vs speed curves. And quantify spatial tuning somehow (a natural method would be to use the sparsity measures sometimes applied to neural data to quantify how selective the spatial profile is to one or a few specific locations). Then make scatter plots of these quantities against each other. 
Basically, I'd love to see the trends for how these types of tuning relate to each other over the whole population: those trends could then be tested against experimental data (possibly in a future study).", "Congratulations on a very interesting and clear paper. While ICLR is not focused on neuroscientific studies, this paper clearly belongs here as it shows what representations develop in recurrent networks that are trained on spatial navigation. Interestingly, these include representations that have been observed in mammals and that have attracted considerable attention, even honored with a Nobel prize. \n\nI found it very interesting that the emergence of these representations was contingent on some regularization constraint. This seems similar to the visual domain, where edge detectors emerge easily when trained on natural images with sparseness constraints, as in Olshausen & Field, and later reproduced with many other models that incorporate sparseness constraints. \n\nI do have some questions about the training itself. The paper mentions a metabolic cost that is not specified in the paper. This should be added. \n\nMy biggest concern is about Figure 6a. I am puzzled as to why the error is coming down before the boundary interaction. Even more puzzling, why does this error go up again for the blue curve (no interaction)? Shouldn't at least this curve be smooth?\n", "This paper aims at better understanding the functional role of grid cells found in the entorhinal cortex by training an RNN to perform a navigation task.\n\nOn the positive side: \n\nThis is the first paper to my knowledge that has shown that grid cells arise as a product of a navigation task demand. I enjoyed reading the paper, which is in general clearly written. I have a few, mostly cosmetic, complaints, but these can easily be addressed in a revision.\n\nOn the negative side: \n\nThe manuscript is not written in a way that is suitable for the target ICLR audience, which will include, for the most part, readers who are not experts on the entorhinal cortex and/or spatial navigation. \n\nFirst, the contributions need to be more clearly spelled out. In particular, the authors tend to take shortcuts for some of their statements. For instance, in the introduction, it is stated that previous attractor network types of models (which are also recurrent networks) “[...] require hand-crafted and fine-tuned connectivity patterns, and the evidence of such specific 2D connectivity patterns has been largely absent.” This statement is problematic for two reasons: \n\n(i) It is rather standard in the field of computational neuroscience to start from reasonable assumptions regarding patterns of neural connectivity and then proceed to show that the resulting network behaves in a sensible way and reproduces neuroscience data. This is not to say that demonstrating that these patterns can arise as a byproduct is not important; on the contrary. These are just two complementary lines of work. In the same vein, it would be silly to dismiss the present work simply because it lacks spikes. \n\n(ii) The authors do not seem to address one of the main criticisms they make about previous work, in particular \"[a lack of evidence] of such specific 2D connectivity patterns\". My understanding is that one of the main assumptions made in previous work is that of a center-surround pattern of lateral connectivity. I would argue that there is a lot of evidence for local inhibitory connections in the cortex. 
Somewhat related to this point, it would be insightful to show the pattern of local connections learned in the RNN to see how it differs from the aforementioned pattern of connectivity.\n\nSecond, the navigation task used needs to be better justified. Why train a network to predict 2D spatial location from velocity inputs? Why is this a reasonable starting point to study the emergence of grid cells? It might be obvious to the authors, but it will not be to the ICLR audience. Dead-reckoning (i.e., spatial localization from velocity inputs) is of critical ecological relevance for many animals. This needs to be spelled out and a reference needs to be added. As a side note, I would have expected the authors to use actual behavioral data, but instead the network is trained using artificial trajectories based on \"modified Brownian motion\". This seems like an important assumption of the manuscript, but the issue is brushed off and not discussed. Why is this a reasonable assumption to make? Is there any reference demonstrating that rodent locomotory behavior in a 2D arena is random?\n\nFigure 4 seems kind of strange. I do not understand how the “representative units” are selected and where the “late” selectivity on the far right side in panel a arises if not from “early” units that would have to travel “far” from the left side… Apologies if I am missing something obvious.\n\nI found the study of the effect of regularization to be potentially the most informative for neuroscience, but it is only superficially treated. It would have been nice to see a more systematic treatment of the specifics of the regularization needed to get grid cells. ", "We thank all three reviewers for their careful reading and the positive assessment of our manuscript. We appreciate the reviewers’ detailed and constructive feedback, which has helped us substantially improve various aspects of the manuscript, in particular the presentation of our results. We have revised and uploaded a new version of the manuscript, taking into account the reviewers’ suggestions.", "Thank you for your positive assessment and feedback. \n\nThe reviewer raised the interesting point of the potential connection between the sparse coding work by Olshausen and Field (1996) and our work (note that Reviewer 3 also raised the same point). Indeed, the initial idea of this work is partly inspired by sparse coding. In the Discussion section, we have now included a detailed discussion of the relation of our work to the sparse coding work. While there are important conceptual similarities, there are also some important differences that need to be mentioned. For example, the grid-like response patterns are averaged responses, and are not linear filters based on the input to the network as the Gabor filters are. In our network, the velocity response needs to be integrated and maintained in order for the neurons to be spatially tuned. Also, the sparsity constraint in Olshausen and Field could be interpreted as imposing a particular form of Bayesian prior, while in our work, we find it difficult to interpret it that way. We also include a discussion of Sussillo et al. (2015), which we agree is very relevant to our work. Thank you for both suggestions.\n\nConcerning the different initializations, we want to clarify that we tried a few different initializations and found the results to be robust. As a way to show the robustness, we have presented the results with different initializations. We have slightly changed the text to make this point more explicit. 
We apologize for the confusion.\n\nOn the question of ongoing training: we think this is a great idea. We have thought about these issues, including training the same network to perform tasks in various environments, but on the other hand, we also think a systematic treatment of these issues goes beyond the scope of the current paper, which is just a proof-of-principle of the approach.\n\nThe reviewer also suggests a more quantitative characterization of the relation between different kinds of tuning properties. We have computed the selectivity indices for different tuning properties along these lines. It appears that at the population level, the relation between these different indices is complex. We now incorporate a figure in the Appendix. We also examined the experimental literature on the relation between different types of tuning in EC, and there is not enough evidence to draw any conclusions. Thus, we think this is an interesting and important question that should be informative for future experiments. But one might need more sophisticated ways to perform this analysis in order to better reveal the dependence between different types of tuning. For example, maybe one could first do a systematic clustering based on the neurons’ responses, and then examine the dependence of the tuning based on the clustering. We would like to look deeper into these issues in the near future. Thank you for this suggestion.\n", "Thank you for your positive assessment and feedback. \n\nThe reviewer raised the interesting point of the potential connection between the sparse coding work by Olshausen and Field (1996) and our work (note that Reviewer 1 raised a similar point). Indeed, the initial idea behind this work is partly inspired by sparse coding. In the Discussion section, we have now included a detailed discussion of the relation of our work to the sparse coding work. We also think that while there are important conceptual similarities, there are also some important differences that need to be mentioned. For example, the grid-like response patterns are averaged responses; they are not linear filters based on the input to the network as the Gabor filters are. In our network, the velocity response needs to be integrated and maintained in order for the neurons to be spatially tuned. Also, the sparsity constraint in Olshausen and Field could be interpreted as imposing a particular form of Bayesian prior, while in our work, we find it difficult to interpret it that way. We also added discussions of a few other related studies. We thank the reviewer for this suggestion.\n\nIn terms of Figure 6a, the increase of the blue curve after the boundary interaction is due to the accumulation of noise in the RNN, which would gradually increase the error (roughly linearly; thinking about Brownian motion may help here) without further boundary interactions. We now explain this in the caption of Figure 6. The reason the blue curve is not entirely smooth is that it is the averaged error of a set of sample paths, and the number of paths is not very large. However, we think the general pattern is clear. In terms of why the error is coming down before the boundary interaction, one possible yet speculative reason is that the boundary-related firing has a certain width; thus the error correction might already start before the boundary interaction. In fact, this would be a natural interpretation based on a previous model which used boundary responses to correct the grid drift (Hardcastle et al., Neuron, 2015). 
We note that one caveat to that interpretation is that, in our simulations, sometimes we don’t observe such a drop in error before the boundary interactions. Also, it remains unclear if the error-correction mechanisms in our model are the same as those proposed previously. However, the consistent pattern we observe empirically is that the error is robustly reduced after the boundary interaction. We have thus focused on this robust empirical observation, without speculating on an exact mechanism, which remains to be elucidated in the future.\n", "Thank you for your positive assessment and feedback. Below we address the concerns raised in the review. We have taken the suggestions in the review to make this manuscript as accessible to the general ICLR audience as possible.\n\n1. Concerning the attractor model used to motivate our model, we agree with the reviewer that our initial statement is unclear and potentially misleading. We have thus revised the statements in the Introduction to better motivate our work. Specifically, we now present a more balanced view of the two types of models, and explicitly present our modeling as a complementary/alternative approach. Also, we are now more specific about the assumptions made in the attractor network model. In particular, we point out that it is the subtle asymmetry in the weights we are mostly concerned about, not the local inhibition. We agree with the reviewer that local inhibition is not uncommon in cortex. However, these models also assume a systematic weight asymmetry in the connectivity matrix, which is responsible for the movement of the bump attractor for tracking the animal’s location. Without such asymmetry, grids would not emerge in these models. For example, please see Eq. (2) in Couey et al., \"Recurrent inhibitory circuitry as a mechanism for grid formation.\" Just to be clear, we are not claiming that such a connectivity pattern does not exist in the brain. We are just making the points that 1) this assumed pattern seems too restrictive, and 2) so far there is no evidence for it. Furthermore, we now also motivate our work by the need for a model that could account for other types of response patterns in the Entorhinal Cortex. Note that such issues have been largely ignored previously.\nAs for the connections in our trained RNN, we did perform some preliminary analysis. We now include a discussion on this point in the final paragraph of the Discussion.\n\n2. Concerning the justification of the navigation task:\nWe completely agree that it would be useful to make the ecological relevance of the dead-reckoning task explicit, in particular for the ICLR audience. We have discussed this point and added relevant references there. We thank the reviewer for this helpful suggestion.\n\nOn the issue of using artificial vs. the animal's real trajectories to train the model: historically, there is a long debate in neuroscience on the use of artificial or naturalistic stimuli, in particular for vision. For example, see Rust and Movshon, \"In praise of artifice.\" In the present paper, we didn’t use the animals' real running trajectories, for multiple reasons. First, we didn’t have access to enough data on animals' running trajectories. Second, from a theoretical point of view, it seems advantageous to start with simple trajectories that we have complete control over. Third, grid-like responses have been reported in various animal species, including rats, mice, bats and humans. The locomotory behavior of these different animal species can be quite different. 
For example, there are qualitative differences between rats and mice. We thus started from simple movement statistics to see what we could get. Having said that, we do agree with the reviewer that it would be very interesting to test the model using real trajectories. We briefly discussed these points in section 2.2. Also, as a side note, we have been talking to experimental colleagues to get data on mice's running behavior.\n\n3. The representative units were picked manually based on their spatial firing patterns. We simply identified a few units that clearly show grid-like or boundary-related responses, and marked them after the dimensionality reduction using t-SNE. We want to emphasize that the main point of Figure 4 is to show that early during training all responses are similar to the boundary-related responses, and only as training continues do the grid-like units emerge. This did not have to be true. Early in training some units could have had boundary responses and others could have had grid-like responses. If these spatial response patterns remained largely unchanged during training, then all units would travel a short distance in the t-SNE figure. This is in fact the case for the boundary responses: the final spatial responses are very close to the responses that developed early in training, and so these units do not travel much in this space. We have modified the text to make it clearer. Thank you for raising this point.\n\n4. In terms of the role of regularization, we agree that it is potentially interesting from the neuroscience point of view. While we think the most informative effect remains the one we show in the main text, we now also include a new figure in the Appendix for the triangular environment to better illustrate the effect of the metabolic cost and of adding noise to the RNN units (without being too redundant with the figure in the main text). We would be happy to include more simulations on this if the reviewer thinks that’s necessary.\n" ]
[ 8, 9, 8, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_B17JTOe0-", "iclr_2018_B17JTOe0-", "iclr_2018_B17JTOe0-", "iclr_2018_B17JTOe0-", "SkDHZUXlG", "rk3jvePlf", "HyMQMl9eG" ]
iclr_2018_HJhIM0xAW
Learning a neural response metric for retinal prosthesis
Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce normal patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected that produces a neural response as close as possible to the desired response. This requires a technique for computing a distance between the desired response and the achievable response that is meaningful in terms of the visual signal being conveyed. Here we propose a method to learn such a metric on neural responses, directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of the visual input. Using data from electrical stimulation experiments, we demonstrate that this metric may improve the performance of a prosthesis.
accepted-poster-papers
This work shows interesting potential applications of known machine learning techniques to the practical problem of how to devise a retina prosthesis that is the most perceptually useful. The paper suffers from a few methodological problems pointed out by the reviewers (e.g., not using the more powerful neural network encoding in the subsequent experiments of the paper), but is still interesting and inspiring in its current state.
val
[ "HyzRKw7xf", "S1AQa7uxz", "B1OVwz9ez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors develop new spike train distance metrics that cluster together responses to the same stimulus, and push responses to different stimuli away from each other. Two such metrics are discussed: neural networks, and quadratic metrics. They then show that these metrics can be used to classify neural responses as coming from the same vs different stimuli, and that they outperform the naive Hamming distance metric at this task. Moreover, they show that this metric implicitly captures some structure in the neural code: more similar responses correspond to more similar visual stimuli. Finally, they discuss the implications of their metric for retinal prosthesis, and show some (fairly preliminary) data for how it could be used.\n\nOverall, I love the concepts in this paper. I have some reasonably substantive concerns over the execution, outlined below. But I encourage the authors to consider following through on these suggestions to improve their paper: the paper's key idea is really good, and I think it's worth the effort to flesh that idea out more thoroughly.\n\nMy specific suggestions / criticisms are:\n\n1) The quadratic metric seems only marginally better than the Hamming one (especially in Figs. 3 and 4), whereas the neural nets do much better as a metric (Fig. 3). However, most of the analyses (Figs. 4,5) use the quadratic metric. Why not use the better neural network metric for the subsequent studies of image similarity, and retinal stimulation? \n\n2) For Figs. 4, 5, where you use linear decoders to test the stimuli corresponding to the neural responses, how good are those decoders (i.e., MSE between decoded stim and true stim.)? If the decoders are poor, then the comparisons based on those decoders might not be so meaningful. I encourage you to report the decoding error, and if it's large, to make a better decoder and use it for these studies.\n\n3) Similarly, for Fig. 4, why not measure the MSE between the actual image frames corresponding to these neural responses? Presumably, you have the image frames corresponding to the target response, and for each of the other responses shown (i.e., the responses at different distances from the target). This would avoid any complications from sub-optimal decoders, and be a much more direct test.\n\n(I understand that, for Fig. 5, you can't do this direct comparison, as the electrically stimulated patterns don't have corresponding image frames, so you need to decode them.)", "In their paper, the authors propose to learn a metric between neural responses by either optimizing a quadratic form or a deep neural network. The pseudometric is optimized by positing that the distance between two neural responses to two repeats of the same stimulus should be smaller than the distance between responses to different stimuli. They do so with the application of improving neural prosthesis in mind. \n\nFirst of all, I am doubtful about this application: I don't think the task of neural prosthesis can ever be to produce idential output pattern to the same stimuli. Nevertheless, a good metric for neural responses that goes beyond e.g. hamming distance or squared error between spike density function would be clearly useful for understanding neural representations.\n\nSecond, I find the framework proposed by the authors interesting, but not clearly motivated from a neurobiological perspective, as the similarity between stimuli does not appear to play a role in the optimized loss function. 
For two similar stimuli, the natural responses of a neural population can be more similar than the responses to two repetitions of the same stimulus.\n\nThird, the results presented by the authors are not convincing throughout. For example, Fig. 4B suggests that the Hamming distance actually achieves lower error than the learned representation.\n\nNevertheless, it is an interesting approach that is worth pursuing further. ", "* Summary of paper: The paper addresses the problem of optimizing metrics in the context of retinal prosthetics: their goal is to learn a metric which assumes spike-patterns generated by the same stimulus to be more similar to each other than spike-patterns generated by different stimuli. They compare a conventional, quadratic metric to a neural-network based representation and a simple Hamming metric, and show that the neural-network based one achieves higher performance, but that the quadratic metric does not substantially beat the simple Hamming baseline. They subsequently evaluate the metric (unfortunately, only the quadratic metric) in two interesting applications involving electrical stimulation, with the goal of selecting stimulations which elicit spike-patterns that are maximally similar to spike-patterns evoked by particular stimuli.\n\n* Quality: Overall, the paper is of high quality. What puzzled me, however, is the fact that, in the applications using electrical stimulation in the paper (i.e. the applications targeted to retinal prosthetics, Secs 3.3 and 3.4), the authors do not actually use the well-performing neural-network based metric, but rather the quadratic metric, which is no better than the baseline Hamming metric. It would be valuable for them to comment on what additional challenges would arise from using the neural network instead, and whether they think these could be surmounted.\n\n* Clarity: The paper is overall clear, but specific aspects could be improved: First, it took me a while to understand (and it is not entirely clear to me) what the goal of the paper is, in particular outside the setting studied by the authors (in which there is a small number of stimuli to be distinguished). Second, while the paper does not claim to provide a new metric-learning approach, it would benefit from more clearly explaining if and how their approach relates to previous approaches to metric learning. Third, the paper is, in my view, overstating some of the implications. As an example, Figure 5 is titled 'Learned quadratic response metric gives better perception than using a Hamming metric.': there is no psychophysical evaluation of perception in the paper, and even the (probably hand-picked?) examples in the figure do not look amazing.\n\n* Originality: To the best of my knowledge, this is the first paper addressing the question of learning similarity metrics in the context of retinal prosthetics. Therefore, this specific paper and approach is certainly novel and original. From a machine-learning perspective, however, this seems like pretty standard metric learning with neural networks, and no attempt is made to either distinguish or relate their approach to prior work in this field (e.g. 
Chopra et al. 2005, Schroff et al. 2015, or Oh Song et al. 2016).\n\nIn addition, there is a host of metrics and kernels which have been proposed for measuring similarity between spike trains (Victor-Purpura) -- while they might not have been developed in the context of prosthetics, they might still be relevant to this task, and it would have been useful to see a comparison of how well they do relative to a Hamming metric. The paper states this as a goal (\"This measure should expand upon...\"), but then never does so -- why not?\n\n* Significance: The general question the authors are approaching (how to improve retinal prosthetics) is an extremely important one, from both a scientific and a societal perspective. How important is the specific advance presented in this paper? The authors learn a metric for quantifying similarity between neural responses, and show that it performs better than a Hamming metric. It would be useful for the paper to comment on how they think that metric could be useful for retinal prosthetics. In a real prosthetic device, one will not be able to learn a metric, as the metric learning here requires access to multiple trials of visual stimulation data, neuron by neuron. Clearly, any progress on the way to retinal prosthetics is important, and this approach might contribute to that. However, the current presentation of the manuscript gives a somewhat misleading impression of what has been achieved, and a more nuanced presentation would be important and appropriate. \n\n\nOverall, this is a nice paper which could be of interest to ICLR. Its strengths are that i) they identified a novel, interesting and potentially impactful problem that has not been worked on in machine learning before, and ii) they provide a solution to it based on metric learning, and show that it performs better than non-learned metrics. Its limitations are that i) no novel machine-learning methodology is used (and the relationship to prior work in machine learning is not clearly described), ii) comparisons with previously proposed similarity measures of spike trains are lacking, iii) the authors do not actually use their learned, network-based metric, but the metric which performs no better than the baseline in their main results, and iv) it is not well explained how this improved metric could actually be used in the context of retinal prosthetics.\n\nMinor comments:\n\n - p.2 The authors write that the element-wise product is denoted by $A \\bullet B = \\mathrm{Tr}(A^{\\intercal} B)$. This seems to be incorrect, as the r.h.s. corresponds to a scalar.\n - p.3 What exactly is meant by “mining”?\n - p.4 It would be useful to give an example of what is meant by “similarity learning”.\n - p.4 “Please the Appendix” -> “Please see the Appendix”\n - p.5 (Fig. 3) The abbreviation “AUC” is not defined.\n - p.5 (Fig. 3B) The figure giving 'recall' should have a line indicating perfect performance, for comparison.\n - Sec 3.3: How was the decoder obtained?\n - p.6 (Fig. 4) It would be useful to state that the column below 0 is the target. Or just replace “0” by “target”.\n - p.6 (3rd paragraph) The sentence “Figure 4A bottom left shows the spatial profile of the linear decoding 20ms prior to the target response.” is unclear. It took me a very long time to realize that \"bottom left\" meant the \"column 0, 'decoded stimulus'\" row. It's also unclear why the authors chose to look at 20ms prior to the target response.\n - p.6 The text says RMS distance, but the Fig. 4B caption says MSE -- is this correct?" ]
[ 5, 6, 7 ]
[ 4, 3, 4 ]
[ "iclr_2018_HJhIM0xAW", "iclr_2018_HJhIM0xAW", "iclr_2018_HJhIM0xAW" ]
iclr_2018_BJj6qGbRW
Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on ‘relational’ tasks.
accepted-poster-papers
All reviewers agree that the proposed method is novel and that the experiments do a good job of establishing its value for few-shot learning. Most of the concerns raised by the reviewers about experimental protocols have been addressed in the author response and the revised version.
train
[ "BJIp_k0xM", "r1_szu5xM", "By7ixJ9eG", "SkzW_xPbG", "HJ4l-figf", "r1XiADHmz", "HkmKSsqzz", "ByG5VLHQG", "HkH8YlDZz", "Hy1Xgqbfz", "HyGQQPbMM", "r1CZZbezf", "Syc4dgwWz", "SJeQvN3eG", "BkWNYcseM", "rJ5B_ucef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "author", "author", "author", "public", "author", "public", "public", "public" ]
[ "This paper proposes to use graph neural networks for the purpose of few-shot learning, as well as semi-supervised learning and active learning. The paper first relies on convolutional neural networks to extract image features. Then, these image features are organized in a fully connected graph. Then, this graph is processed with an graph neural network framework that relies on modelling the differences between features maps, \\propto \\phi(abs(x_i-x_j)). For few-shot classification then the cross-entropy classification loss is used on the node.\n\nThe paper has some interesting contributions and ideas, mainly from the point of view of applications, since the basic components (convnets, graph neural networks) are roughly similar to what is already proposed. However, the novelty is hurt by the lack of clarity with respect to the model design.\n\nFirst, as explained in 5.1 a fully connected graph is used (although in Fig. 2 the graph nodes do not have connections to all other nodes). If all nodes are connected to all nodes, what is the different of this model from a fully connected, multi-stream networks composed of S^2 branches? To rephrase, what is the benefit of having a graph structure when all nodes are connected with all nodes. Besides, what is the effect when having more and more support images? Is the generalization hurt?\n\nSecond, it is not clear whether the label used as input in eq. (4) is a model choice or a model requirement. The reason is that the label already appears in the loss of the nodes in 5.1. Isn't using the label also as input redundant?\n\nThird, the paper is rather vague or imprecise at points. In eq. (1) many of the notations remain rather unclear until later in the text (and even then they are not entirely clear). For instance, what is s, r, t. \n\nThe experimental section is also ok, although not perfect. The proposed method appears to have a modest improvement for few-shot learning. However, in the case of active learning and semi-supervised learning the method is not compared to any baselines (other than the random one), which makes conclusions hard to reach.\n\nIn general, I tend to be in favor of accepting the paper if the authors have persuasive answers and provide the clarifications required.", "This paper introduces a graph neural net approach to few-shot learning. Input examples form the nodes of the graph and edge weights are computed as a nonlinear function of the absolute difference between node features. In addition to standard supervised few-shot classification, both semi-supervised and active learning task variants are introduced. The proposed approach captures several popular few-shot learning approaches as special cases. Experiments are conducted on both Omniglot and miniImagenet datasets.\n\nStrengths\n- Use of graph neural nets for few-shot learning is novel.\n- Introduces novel semi-supervised and active learning variants of few-shot classification.\n\nWeaknesses\n- Improvement in accuracy is small relative to previous work.\n- Writing seems to be rushed.\n\nThe originality of applying graph neural networks to the problem of few-shot learning and proposing semi-supervised and active learning variants of the task are the primary strengths of this paper. Graph neural nets seem to be a more natural way of representing sets of items, as opposed to previous approaches that rely on a random ordering of the labeled set, such as the FCE variant of Matching Networks or TCML. 
Others will likely leverage graph neural net ideas to further tackle few-shot learning problems in the future, and this paper represents a first step in that direction.\n\nRegarding the graph, I am wondering if the authors can comment on the scenarios in which the graph structure is expected to help. In the case of 1-shot, the graph can only propagate information about other classes, which seems to not be very useful.\n\nThough novel, the motivation behind the semi-supervised and active learning setup could use some elaboration. By including unlabeled examples in an episode, it is already known that they belong to one of the K classes. How realistic is this set-up and in what application is it expected that this will show up?\n\nFor active learning, the proposed method seems to be specific to the case of obtaining a single label. How can the proposed method be scaled to handle multiple requested labels?\n\nOverall, the paper is well-structured and related work covers the relevant papers, but the details of the paper seem hastily written.\n\nIn the problem set-up section, it is not immediately clear what the distinction between s, r, and t is. Stating more explicitly that s is for the labeled data, etc., would make this section easier to follow. In addition, I would suggest stating the reason why t=1 is a necessary assumption for the proposed model in the few-shot and semi-supervised cases.\n\nRegarding the Omniglot dataset, Vinyals et al. (2016) augmented the classes so that 4,800 classes were used for training and 1,692 for test. Was the same procedure done for the experiments in the paper? If yes, please update 6.1.1 to make this distinction more clear. If not, please update the experiments to be consistent with the baselines.\n\nIn the experiments, does the \varphi MLP explicitly enforce symmetry and identity or is it learned?\n\nRegarding the Omniglot baselines, it appears that Koch et al. (2015), Edwards & Storkey (2016), and Finn et al. (2017) use non-standard class splits relative to the other methods. This should probably be noted.\n\nThe results for Prototypical Networks appear to be incorrect in the Omniglot and Mini-Imagenet tables. According to Snell et al. (2017) they should be 49.4% and 68.2% for miniImagenet. Moreover, Snell et al. (2017) only used 64 classes for training instead of 80 as utilized in the proposed approach. Given this, I am wondering if the authors can comment on the performance difference in the 5-shot case, even though Prototypical Networks is a special case of GNNs?\n\nFor semi-supervised and active-learning results, please include error bars for the miniImagenet results. Also, it would be interesting to see 20-way results for Omniglot as the gap between the proposed method and the baseline would potentially be wider.\n\nOther Comments:\n\n- In Section 4.2, Gc(.) is defined in Equation 2 but not mentioned in the text.\n- In Section 4.3, adding an equation to clarify the relationship with Matching Networks would be helpful.\n- I believe there is a typo in section 4.3 in that softmax(\varphi) should be softmax(-\varphi), so that more similar pairs will be more heavily weighted.\n- The equation in 5.1 appears to be missing a minus sign.\n\nOverall, the paper is novel and interesting, though the clarity could be improved and the experimental results better explained.\n\nEDIT: I have read the author's response. The writing is improved and my concerns have largely been addressed. 
I am therefore revising my rating of the paper to a 7.", "This paper studies the problem of one-shot and few-shot learning using the Graph Neural Network (GNN) architecture that has been proposed and simplified by several authors. The data points form the nodes of the graph, with the edge weights being learned using ideas similar to the message passing algorithms of Kearnes et al. and Gilmer et al. This method generalizes several existing approaches for few-shot learning, including Siamese networks, Prototypical networks and Matching networks. The authors also conduct experiments on the Omniglot and mini-Imagenet data sets, improving on the state of the art.\n\nThere are a few typos, and the presentation of the paper could be improved and polished more. I would also encourage the authors to compare their work to other unrelated approaches such as Attentive Recurrent Comparators of Shyam et al, and the Learning to Remember Rare Events approach of Kaiser et al, both of which achieve comparable performance on Omniglot. I would also be interested in seeing whether the approach of the authors can be used to improve real-world translation tasks such as GNMT. ", "\n\n> In the experiments, does the \varphi MLP explicitly enforce symmetry and identity or is it learned?\n\nThe \varphi(a,b) = MLP(abs(a-b)) explicitly enforces symmetry due to the absolute value.\nThe identity is easily learned, since the input to the MLP will always be the same vector (a vector of zeros) when a==b. \nI have rewritten this line to clarify which property is enforced and which one is easily learned.\n\n\n> Regarding the Omniglot baselines, it appears that Koch et al. (2015), Edwards & Storkey (2016), and Finn et al. (2017) use non-standard class splits relative to the other methods. This should probably be noted.\n\n- From Koch et al. (2015) we are using the results from the Vinyals et al. (2016) reimplementation, which uses the common class splits. I added a note explaining it in the caption of the table.\n\n- I checked the paper from Finn et al. (2017) again. Based on what I read, they are using the same split configuration: 1200 training classes, 423 testing classes, augmented by multiples of 90 degrees. I added the paper URL at the end of this answer. The same holds for Edwards & Storkey (2016). Correct me if I am wrong.\n\n\n> The results for Prototypical Networks appear to be incorrect in the Omniglot and Mini-Imagenet tables. According to Snell et al. (2017) they should be 49.4% and 68.2% for miniImagenet. Given this, I am wondering if the authors can comment on the performance difference in the 5-shot case, even though Prototypical Networks is a special case of GNNs?\n\nIn order to use the same evaluation procedure across papers, all results are evaluated using the same K-way q-shot conditions for both training and test; in other words, a network that, for example, evaluates a 20-way 1-shot experiment has been trained on 20-way 1-shot tasks. This was the evaluation procedure presented by Vinyals et al. (2016) and followed by later works. In Prototypical Networks these results are reported in the Appendix. Mishra et al. (2017) also report these results from Snell et al. (2017) in their comparison. We chose to use the same evaluation procedure across papers.\n\n\n> Moreover, Snell et al. (2017) only used 64 classes for training instead of 80 as utilized in the proposed approach.\n\nWe modified it: in the results of the last update, the network is trained with the 64 training classes. 
The 16 validation classes are only used for early stopping and parameter tuning.\n\n\n> For semi-supervised and active-learning results, please include error bars for the miniImagenet results. Also, it would be interesting to see 20-way results for Omniglot as the gap between the proposed method and the baseline would potentially be wider.\n\nWe added the error bars.\n\n\n> Other Comments:\n> In Section 4.2, Gc(.) is defined in Equation 2 but not mentioned in the text.\n\nSolved\n\n\n> In Section 4.3, adding an equation to clarify the relationship with Matching Networks would be helpful.\n\nDone\n\n\n> I believe there is a typo in section 4.3 in that softmax(\varphi) should be softmax(-\varphi), so that more similar pairs will be more heavily weighted.\n\nSolved\n\n\n> The equation in 5.1 appears to be missing a minus sign.\n\nSolved\n\n\nWe improved Mini-Imagenet results by regularizing better (using dropout and early stopping).\n\n\n- Finn et al. (2017) https://arxiv.org/pdf/1703.03400.pdf\n- Edwards & Storkey (2016) https://arxiv.org/pdf/1606.02185.pdf\n", "\nFirst of all, thank you very much for the comment and remarks.\n\n> Why is the score for the Prototypical Networks much lower than what is reported in the paper?\n\nIn order to use the same evaluation procedure across papers, all results are evaluated using the same K-way q-shot conditions for both training and test; in other words, a network that, for example, evaluates a 20-way 1-shot experiment has been trained on 20-way 1-shot tasks. This was the evaluation procedure presented by Vinyals et al. (2016) and followed by later works. In Prototypical Networks these results are reported in the Appendix. Mishra et al. (2017) also report these results from Snell et al. (2017) in their comparison.\n\n\n> By taking confidence interval into account, there is no statistically significant difference between your method (49.8% with conf interval 0.22%) and theirs. I find this a crucial problem in the experiment section because your method has generalized from the Prototypical Networks, and you claim that the extra flexibility has led to improvement in performance.\n\nIt is true that the MiniImagenet results are in the same confidence interval as Prototypical Networks; despite this, Graph Neural Networks improve significantly on the Omniglot dataset compared to Prototypical Networks. They are also able to do semi-supervised and active learning, which is not possible in the prototypical setting. This is why we claim the extra flexibility. Graph Neural Networks have two main interesting properties in few-shot learning: 1) They learn a metric different from the Euclidean one at every layer. 2) They can handle contextual information. Roughly speaking, point (1) seems to be useful for Omniglot and point (2) seems to be useful for MiniImagenet.\n\nEdit: We've improved the results for MiniImagenet to 50.33% (with conf interval 0.36%) in the 5-way 1-shot setting.\n\n\n> Second, this is only a suggestion, but it would be nice if you add an experiment with the ResNet architecture used by TCML (Mishra et al.) for MiniImageNet.\n\nWe will consider it in order to get a better comparison with Mishra et al. (2017).\n\n\n- Vinyals et al. (2016) \"Matching Networks for One Shot Learning.\" (https://arxiv.org/pdf/1606.04080.pdf)\n- Snell et al. (2017). \"Prototypical Networks for Few-shot Learning.\" (https://arxiv.org/abs/1703.05175)\n- Mishra et al. (2017). 
\"Meta-Learning with Temporal Convolutions.\" (https://arxiv.org/abs/1707.03141)\t\n\n", "\nHi Parnav, thank you for the clarification.\n\nI thought 32x32 was the resolution of the attentive patch instead of the image size, but that was wrong, sorry for that. I edited my first comment in order to avoid confusing future readers. Even though, we didn't add the paper due to other differences in the data augmentation and the embedding network now commented in the first answer.", "\nThank you for the review,\n\n\n> I would also encourage the authors to compare their work to other unrelated approaches such as Attentive Recurrent Comparators of Shyam et al, and the Learning to Remember Rare Events.\n\nWe added the results from \"Learning to Remember Rare Events\" to our table. Regarding \"Attentive Recurrent Comparators\" we didn't add it due to some differences in the data augmentation method (translation, mirroring and shearing) and the use of Wide Resnets as feature extractors.\n\n\n> I would also be interested in seeing whether the approach of the authors can be used to improve real world translation tasks such as GNMT. \n\nI guess the Graph Neural Network could replace the attentive module for Neural Machine Translation systems that search for parts over the source sentence. Maybe it could exploit more complex relationships among the words of the input sentence. I don't know if it has been already tried.\n\n\nNext, I list some of the main modifications that we have done to the paper:\n\n- We updated Omniglot results using the same data augmentation protocol than other papers and results are now more competitive with the state of the art.\n\n- We considerably improved Mini-Imagenet results by regularizing better (using dropout and early stopping.)\n\n- We improved the writing, figures and we corrected some typos.\n\n", "> Regarding the Attentive Recurrent Comparators, we couldn't add it because they are using 105x105 resolution version of the images, which makes it hard to be compared with 28x28 resolution versions.\n\nThe results in the paper are using 32x32 images. Results in Koch et al uses 105x105 images and that is highlighted in our work.", "\nFirst of all, thank you for the review and comments.\n\n\n> As explained in 5.1 a fully connected graph is used (although in Fig. 2 the graph nodes do not have connections to all other nodes). If all nodes are connected to all nodes, what is the different of this model from a fully connected, multi-stream networks composed of S^2 branches?\n\nThe graph is defined as fully connected, but the connection weights are different among nodes. In other words, the adjacency matrix $A$ is not a binary matrix formed by 0s and 1s, instead of that, every value of the adjacency matrix $A[i,j]$ is a Real number that ranges from 0 to 1. These values are computed by the network at every layer before applying each Graph Convolution.\n\nOur Graph Neural Network (GNN) performs a particular operation on each node of the Graph and another operation is applied on the neighbors of that node that are averaged using the weights of the Adjacency matrix. 
This behaviour is easier to see by unfolding equation 2 from the paper: \n\nx^{k+1} = Gc(x^{k}) = ρ(I·x^{k}·\theta_1^{k} + A·x^{k}·\theta_2^{k}),\n\nwhere $k$ is the layer index; $x^{k}$ are the nodes structured into a 2-dimensional (Number_nodes by Number_features) matrix; $I$ is the identity matrix; $A$ is the adjacency matrix; and \theta_1^{k} and \theta_2^{k} are the vectors of learnable parameters of dimensionality (Number_features,) for these two operations.\n\nThe matrices $I$ and $A$ are called operators; in our paper they are represented by the set $\mathcal{A} = \{A^{k}, I\}$.\n\nWith this clarified, we can see that the mechanism of a GNN is different from that of a multi-stream network composed of S^2 branches. In a GNN the same operation is convolved over the nodes of a layer, and at every node the information of the neighbors is aggregated based on the adjacency matrix; the parameters to learn at every layer are only \theta_1 and \theta_2. Based on my knowledge of multi-stream networks, the number of parameters needed to handle S^2 branches would be radically larger, and the mechanism to aggregate the information from the other nodes would also be different. I hope this explanation clarified the point.\n\nRegarding Fig. 2, I updated it, connecting all the nodes. In the previous version I had removed some of the connections to show clearly how they change at every layer, but it was probably misleading.\n\n\n> To rephrase, what is the benefit of having a graph structure when all nodes are connected to all nodes? Besides, what is the effect of having more and more support images? Is the generalization hurt?\n\nWe can distinguish two main benefits of the GNN:\n1) A metric different from the Euclidean one is learned at every layer. \n2) The GNN can handle contextual information by aggregating information from the other nodes to the current one, based on the weights of the adjacency matrix learned in (1).\n\nRegarding generalization, the number of parameters of the GNN is independent of the number of support images; therefore, increasing the number of support images will not overparameterize the network, avoiding the risk of overfitting.\n\n\n> Second, it is not clear whether the label used as input in eq. (4) is a model choice or a model requirement. The reason is that the label already appears in the loss of the nodes in 5.1. Isn't using the label also as input redundant?\n\nIn a few-shot task we have: 1) a support subset of labeled images and 2) an unlabeled image to classify. We input the labels of the support subset (1) and predict the label of the image to classify (2), which appears in the loss. Therefore, it is a model requirement for the few-shot scenario. We modified this explanation to be clearer. In the semi-supervised scenario we input the label for only some of the samples; therefore, in that case it is only a requirement for the labeled samples.\n\n\n> Third, the paper is rather vague or imprecise at points. In eq. (1) many of the notations remain rather unclear until later in the text (and even then they are not entirely clear). For instance, what are s, r, and t? \n\nWe have rewritten this section in order to be clearer. \n\n\n> The experimental section is also ok, although not perfect. The proposed method appears to have a modest improvement for few-shot learning. 
However, in the case of active learning and semi-supervised learning the method is not compared to any baselines (other than the random one), which makes conclusions hard to reach.\n\nWe updated the Omniglot results using the same data augmentation protocol than other papers and results are more competitive with the state of the art now.\nWe also improved Mini-Imagenet results by better regularizing the network using dropout and early stopping.", "Thanks again for your comment. \n\nWe apologize, there is a typo in the equation (and in the previous answer). The subscript \"l\" is indexing over output feature maps, not over input feature maps. We will fix this in the document. \n\nBest,\n\nAuthors", "\nThanks for reading the paper,\n\n\n> What do d_k and d_k+1 stands for? \n\nAs you said, they are the point dimension for the k-stage and k+1-stage.\n\n\n> If they are the point dimension for the k-stage and k+1-stage how can you iterate over the [l] index (which goes from 1 to d_k+1) for both?\n\nYou can see that x^{(k)}$ and $\\theta^{(k)}$ are also indexed by $k$, therefore, this $k$ is also changing the value of [l] which also depends on $k$. In practice we just mean that at every layer the input dimensionality is d_k, and the output is d_k+1, same as in CNNs.\n\n\n> Why you use a concatenation? is it suppose to represent the 1 in the generator family? Shouldn't it be added instead of concatenated?\n\nThe concatenation is independent from the Graph operation, we are just concatenating the input layer to the new outputs, this is applied to other types of Neural Networks. It was introduced by \"Densely Connected Convolutional Networks\", https://arxiv.org/abs/1608.06993\n\n\n", "Greetings, could you give me some insight on the following points:\nEquation (2):\n* What do d_k and d_k+1 stands for? If they are the point dimension for the k-stage and k+1-stage how can you iterate over the [l] index (which goes from 1 to d_k+1) for both?\n\nFigure (8):\n* Why you use a concatenation? is it suppose to represent the 1 in the generator family? Shouldn't it be added instead of concatenated?\n\nThanks in advance", "\nFirst of all thank you for the review and comments.\n\n\n> Regarding the graph, I am wondering if the authors can comment on what scenarios is the graph structure expected to help? In the case of 1-shot, the graph can only propagate information about other classes, which seems to not be very useful.\n\nI would like to point out two main strengths of the GNN method that we are proposing:\n1) A different metric from the euclidean is learned at every layer. \n2) GNN can handle contextual information by aggregating information from the neighbor nodes based on the weights of the adjacency matrix learned in (1).\n\nBased on our experiments, propagating information from other classes seems more useful when the number of classes is larger than one, specifically we notice the largest improvement in Mini-Imagenet 5-way 5-shot. But the metric that is learned at every layer also provides strong results for the case of one-shot, specially we notice it in the Omniglot dataset in 5-way 1-shot and 20-way 1-shot. The Graph structure also allows us to run in the semi-supervised and active learning scenarios.\n\n\n> Though novel, the motivation behind the semi-supervised and active learning setup could use some elaboration. By including unlabeled examples in an episode, it is already known that they belong to one of the K classes. 
How realistic is this set-up and in what application is it expected that this will show up?\n\nOne example application that comes to my mind for the semi-supervised scenario would be building a face recognition system from Facebook profiles. A user may have uploaded dozens of images with other people and maybe just some of the pictures are labeled. It could be possible to use all the faces from the pictures that the user uploaded even if they are not labeled together with the few labeled ones to build a few-shot classifier for that user. \n\nI hope the open community will be able to find new and better applications.\n\n\n> For active learning, the proposed method seems to be specific to the case of obtaining a single label. How can the proposed method be scaled to handle multiple requested labels?\n\nIn the proposed method, we are uncovering one of the labels choosing from a softmax distribution in a particular layer. It would be possible to uncover multiple labels by choosing more than one, it woud also be possible to uncover multiple labels at multiple layers. Our main aim here was to test out how feasible is to learn to do Active learning from the classification loss using an end to end structure like a GNN instead of using handcrafted methods like Uncertainty Sampling. Scaling it to larger datasets can be hard to optimize, we leave it for the future and we present it as a prove of concept.\n\n\n> In the problem set-up section, it is not immediately clear what the distinction between s, r, and t is.\n\nWe have rewritten this section more accurately.\n\n\n> Regarding the Omniglot dataset, Vinyals et al. (2016) augmented the classes so that 4,800 classes were used for training and 1,692 for test. Was the same procedure done for the experiments in the paper? If yes, please update 6.1.1 to make this distinction more clear. If not, please update the experiments to be consistent with the baselines.\n\nThanks a lot for this point. Our data augmentation implementation was only for the training classes. We updated the paper results implementing the data augmentation also on the test classes as it is done in other works and the results are now better and more competitive with the state of the art.\n\nThe answer continues in the following message \"Answer part 2\"", "\n> I am not really satisfied with the protocol you used. There is no constraint on how you train the network for few-shot image classification, thus it is fine to use settings different in training and testing.\n\nI agree with you there is no constraint in how you train the network, but we chose to use the same evaluation protocol than other papers in order to do a fairer evaluation.\n", "> In order to use the same evaluation procedure across papers, all results are evaluated using the same K-way q-shot conditions for both training an test, in other words, a network that for example evaluates a 20-way 1-shot experiment has been trained on 20-way 1-shot tasks. \n\nI am not really satisfied with the protocol you used.\nThere is no constraint on how you train the network for few-shot image classification,\nthus it is fine to use settings different in training and testing.\nI do not think that the fact that Mishra et al. does similar report justifies this problem.\n\n\n> It is true that MiniImagenet results are in the same confidence interval than Prototypical Networks, despite this, Graph Neural Networks improve significantly in Omniglot dataset compared to Prototypical Networks. 
And they are also able to do semi-supervised and active learning which is not possible with the prototypical setting. \n\nThank you for your answer.\nAlthough I had a strong concern about the supervised training result for MiniImageNet, your comments led me to conclude that there are strength in your proposal.\n", "Dear Authors,\n\nThis paper introduced adding a graph neural network on top of a feature extractor to conduct comparisons between inputs.\nThe idea of adding a module to combine features from multiple inputs have been studied extensively in this field (Snell et al. and Mishra et al.).\nThus, I feel that it is critical for this paper to prove empirical strength of the proposed architectural extension.\n\nWith regard to experiments, I have few questions and proposals to make it better.\nFirst, why is the score for the Prototypical Networks much lower than what is reported in the paper?\nFor instance, 5-way 1-shot score of miniImageNet, the Prototypical Networks is reported to be 49.42% (conf interval 0.78%), but you reported it as 46.61%.\nBy taking confidence interval into account, there is no statistically significant difference between your method (49.8% with conf interval 0.22%) and theirs.\nI find this a crucial problem in the experiment section because your method has generalized from the Prototypical Networks, and you claim that the extra flexibility has led to improvement in performance.\nBy the way, the score by Snell et al. is reproducible (I have done it myself for 5-way 1-shot setting of MiniImageNet, although I have failed to do so for 5-way 5-shot).\n\nSecond, this is only a suggestion, but it would be nice if you add an experiment with ResNet architecture used by TCML (Mishra et al.) for MiniImageNet.\nYour experimental results are not comparable to the current state of the art with weaker feature extractor than TCML.\n\n- Snell et al. (2017). \"Prototypical Networks for Few-shot Learning.\" (https://arxiv.org/abs/1703.05175)\n- Mishra et al. (2017). \"Meta-Learning with Temporal Convolutions.\" (https://arxiv.org/abs/1707.03141)\n" ]
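To make the layer discussed in the thread above concrete, here is a minimal NumPy sketch of the graph-convolution operation from equation 2; the function name gc_layer, the ReLU nonlinearity, and the row-softmax adjacency are illustrative assumptions rather than the authors' actual code.

import numpy as np

def gc_layer(x, A, theta1, theta2):
    # One graph-convolution layer: x_next = rho(I.x.theta1 + A.x.theta2),
    # with rho a pointwise nonlinearity (ReLU here).
    # x: (num_nodes, d_k) node features; A: (num_nodes, num_nodes) soft adjacency;
    # theta1, theta2: (d_k, d_next) weights for the I and A operators.
    self_term = x @ theta1             # I . x . theta1: each node transformed on its own
    neighbor_term = (A @ x) @ theta2   # A . x . theta2: adjacency-weighted neighbor average
    return np.maximum(self_term + neighbor_term, 0.0)

# Toy usage: 5 fully connected nodes whose edge weights are soft values in (0, 1).
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))
logits = rng.standard_normal((5, 5))
A = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # rows sum to 1, not binary
out = gc_layer(x, A, rng.standard_normal((8, 4)), rng.standard_normal((8, 4)))
print(out.shape)  # (5, 4)

In the paper's setting the adjacency A is itself re-predicted from the current node features at every layer, which is what makes the learned metric differ from the plain Euclidean one; it is held fixed here only for brevity.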
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJj6qGbRW", "iclr_2018_BJj6qGbRW", "iclr_2018_BJj6qGbRW", "r1_szu5xM", "rJ5B_ucef", "ByG5VLHQG", "By7ixJ9eG", "HkmKSsqzz", "BJIp_k0xM", "HyGQQPbMM", "r1CZZbezf", "iclr_2018_BJj6qGbRW", "r1_szu5xM", "BkWNYcseM", "HJ4l-figf", "iclr_2018_BJj6qGbRW" ]
iclr_2018_S1nQvfgA-
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
We propose a new algorithm for training generative adversarial networks to jointly learn latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). In practice, this means that by fixing the identity portion of latent codes, we can generate diverse images of the same subject, and by fixing the observation portion we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce images that are photorealistic and distinct, yet appear to depict the same person. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to accommodate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs.
accepted-poster-papers
The paper proposes a GAN based approach for disentangling identity (or class information) from style. The supervision needed is the identity label for each image. Overall, the reviewers agree that the paper makes a novel contribution along the line of work on disentangling 'style' from 'content'.
train
[ "SkIH8vIef", "ryEAzYOlM", "BkAls-Kgf", "H19BT53-f", "SyGahq2WG", "rkqOhq2bM", "rJ7G3q3Zz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Quality\nThe paper is well written and the model is simple and clearly explained. The idea for disentangling identity from other factors of variation using identity-matched image pairs is quite simple, but the experimental results on faces and shoes are impressive.\n\nClarity\nThe model and its training objective are simple and clearly explained.\n\nOriginality\nThere are now many, many papers on generative models with disentangled feature representations, including with GANs. However, to my knowledge this is the first paper showing very compelling results using this particular setup of identity-aligned images.\n\nSignificance\nDisentangled generative models are an important line of work in my opinion. This paper presents a very simple but apparently effective way of disentangling identity from other factors, and implements in two of the more recent GAN architectures.\n\nSuggestion for an experiment - can you do few shot image generation? A simple way to do it would be to train an encoder from image → identity encoding. Then, given one or a few images of a new person’s face or a new shoe, you could estimate the identity latent variable, and then generate many additional samples.\n\nPros\n- Very simple and effective disentangling technique for GANs.\n- Great execution, compelling samples on both faces and shoes.\n\nCons\n- Only two factors of variations are disentangled in this model. Could it be generalized to specify more than just two, e.g. lighting, pose, viewpoint, etc?\n- Not much technically new or surprising compared to past work on disentangling generative models.", "Summary:\n\nThis paper investigated the problem of controlled image generation. Assuming images can be disentangled by identity-related factors and style factors, this paper proposed an algorithm that produces a pair of images with the same identity. Compared to standard GAN framework, this algorithm first generated two latent variables for the pair images. The two latent variables are partially shared reflecting the shared identity information. The generator then transformed the latent variables into high-resolution images with a deconvolution decoder networks. The discriminator was used to distinguish paired images from database or paired images sampled by the algorithm. Experiments were conducted using DCGAN and BEGAN on portrait images and shoe product images. Qualitative results demonstrated that the learned style representations capture viewpoint, illumination and background color while the identity was well preserved by the identity-related representations.\n\n\n== Novelty & Significance ==\nPaired image generation is an interesting topic but this has been explored to some extent. Compared to existing coupled generation pipeline such as CoGAN, I can see the proposed formulation is more application-driven.\n\n== Technical Quality ==\nIn Figure 3, the portrait images in the second row and fourth row look quite similar. I wonder if the trained model works with only limited variability (in terms of identity).\nIn Figure 4, the viewpoint is quite limited (only 4 viewpoints are provided).\n\nI am not very convinced whether SD-GAN is a generic algorithm for controlled image generation. Based on the current results, I suspect it only works in fairly constrained settings. \nIt would be good to know if it actually works in more challenging datasets such as SUN bedroom, CUB and Oxford Flowers. 
\n\n“the AC-DCGAN model cannot imagine new identities”\nI feel the author of this paper made an unfair argument when comparing AC-DCGAN with the proposed method. First, during training, the proposed SD-GAN needs to access the identity information and there is only limited identity in the dataset. Based on the presentation, it is not very clear how does the model generate novel identities (in contrast to simply interpolating existing identities). For example, is it possible to generate novel viewpoints in Figure 4?\n\nMissing references on conditional image generation and coupled image generation:\n-- Generative Adversarial Text-to-Image Synthesis. Reed et al., In ICML 2016.\n-- Attribute2Image: Conditional Image Generation from Visual Attributes. Yan et al., In ECCV 2016.\n-- Domain Separation Networks. Bousmalis et al., In NIPS 2016.\n-- Unsupervised Image-to-Image Translation Networks. Liu et al., In NIPS 2017.\n\nOverall, I rate this paper slightly above borderline. It showed some good visualization results on controlled image generation. But the comparison to AC-GAN is not very fair, since the identity pairs are fully supervised for the proposed method. As far as I can see, there are no clear-cut improvements quantitatively. Also, there is no comparison with CoGAN, which I believe is the most relevant work for coupled image generation. \n", "[Overview]\n\nIn this paper, the authors proposed a model called SD-GAN, to decompose semantical component of the input in GAN. Specifically, the authors proposed a novel architecture to decompose the identity latent code and non-identity latent code. In this new architecture, the generator is unchanged while the discriminator takes pair data as the input, and output the decision of whether two images are from the same identity or not. By training the whole model with a conventional GAN-training regime, SD-GAN learns to take a part of the input Z as the identity information, and the other part of input Z as the non-identity (or attribute) information. In the experiments, the authors demonstrate that the proposed SD-GAN could generate images preserving the same identity with diverse attributes, such as pose, age, expression, etc. Compared with AC-GAN, the proposed SD-GAN achieved better performance in both automatically evaluation metric (FaceNet) and Human Study. In the appendix, the authors further presented ablated qualitative results in various settings.\n\n[Strengths]\n\n1. This paper proposed a simple but effective generative adversarial network, called SD-GAN, to decompose the input latent code of GAN into separate semantical parts. Specifically, it is mainly instantiated on face images, to decompose the identity part and non-identity part in the latent code. Unlike the previous works such as AC-GAN, SD-GAN exploited a Siamese network to replace the conventional discriminator used in GAN. By this way, SD-GAN could generate images of novel identities, rather than being constrained to those identities used during training. I think this is a very good property. Due to this, SD-GAN consumes much less memory than AC-GAN, when training on a large number of identities.\n\n2. In the experiment section, the authors quantitatively evaluate the generated images based on two methods, one is using a pre-trained FaceNet model to measure the verification accuracy and one is human study. When evaluated based on FaceNet, the proposed SD-GAN achieved higher accuracy and obtained more diverse face images, compared with AC-GAN. 
In human study, SD-GAN achieved comparable verification accuracy, while higher diversity than AC-GAN. The authors further presented ablated experiments in the Appendix.\n\n[Comments]\n\nThis paper presents a novel model to decompose the latent code in a semantic manner. However, I have several questions about the model:\n\n1. Why would SD-GAN not generate images merely have a smaller number of identities or just a few identities? In Algorithm 1, the authors trained the model by sampling one identity vector, which is then concatenated to two observation vectors. In this case, the generator always takes the same identity vectors, and the discriminator is used to distinguish these fake same-identity pair and the real same-identity pair from training data. As such, even if the generator generates the same identity, say mean identity, given different identity vectors, the generated images can still obtain a lower discrimination loss. Without any explicite constraint to enforce the generator to generate different identity with different identity vectors, I am wondering what makes SD-GAN be able to generate diverse identities? \n\n2. Still about the identity diversity. Though the authors showed the identity-matched diversity in the experiments, the diversity across identity on the generated images is not evaluated. The authors should also evaluate this kind of identity. Generally, AC-GAN could generate as many identities as the number of identities in training data. I am curious about whether SD-GAN could generate comparable diverse identity to AC-GAN. One simple way is to evaluate the whole generated image set using Inception Score based on a Pre-trained face identification network; Another way is to directly use the generated images to train a verification model or identification model and evaluate it on real images. Though compared with AC-GAN, SD-GAN achieved better identity verification performance and sample diversity, I suspect the identity diversity is discounted, though SD-GAN has the property of generating novel identities. Furthermore, the authors should also compare the general quality of generated samples with DC-GAN and BEGAN as well (at least qualitatively), apart from the comparison to AC-GAN on the identity-matched generation.\n\n3. When making the comparison with related work, the authors mentioned that Info-GAN was not able to determine which factors are assigned to each dimension. I think this is not precise. The lack of this property is because there are no data annotations. Given the data annotations, Info-GAN can be easily augmented with such property by sending the real images into the discriminator for classification. Also, there is a typo in the caption of Fig. 10. It looks like each column shares the same identity vector instead of each row.\n\n[Summary]\n\nThis paper proposed a new model called SD-GAN to decompose the input latent code of GAN into two separete semantical parts, one for identity and one for observations. Unlike AC-GAN, SD-GAN exploited a Siamese architecture in discriminator. By this way, SD-GAN could not only generate more identity-matched face image pairs but also more diverse samples with the same identity, compared with AC-GAN. I think this is a good idea for decomposing the semantical parts in the latent code, in the sense that it can imagine new face identities and consumes less memory during training. Overall, I think this is a good paper. 
However, as I mentioned above, I am still not clear why SD-GAN could generate diverse identities without any constraints to make the model do that. Also, the authors should further evaluate the diversity of identity and compare it with AC-GAN.", "\nThank you for your response and for pointing out missing references. We have added them to our related work section in the updated manuscript. Please see our general response above with regard to identity diversity.\n\nWe did not quantitatively compare to CoGAN as their problem objectives are different from our own. CoGANs learn to translate images between domains with binary attributes such as blond/brown hair or glasses/no-glasses without parallel data. How one might extend CoGANs to learn a manifold over thousands of identities is non-obvious.\n\nAs you suggested, we would also like to run studies on other types of data (the Oxford Flowers dataset that you recommended is particularly enticing), but leave this as an avenue for our future explorations.", "\nThank you for your insights. We are glad that you found our paper to be well-written and our method to be both simple and effective. Your intuition is correct that SD-GAN could be extended to disentangle multiple factors of variation. We are actively investigating this and plan to publish these results in future work. You are also correct that SD-GAN could be modified for few-shot image generation; please see Appendix B (and the relevant Figure 8) for our preliminary investigation and results in this setting.", "\nThank you for your detailed comments. In addition to adding information about identity diversity to the manuscript (see general response above), we offer the following responses to your other inquiries:\n\n1) Why would SD-GAN not generate images merely have a smaller number of identities or just a few identities?\n\nIf SD-GANs always produced images depicting the same identity regardless of the identity vector, the discriminator would be able to easily label these samples as fake on the basis that they always depict the same subject. This is the same reason that regular GANs are able to produce images that differ in subject identity.\n\n3) When making the comparison with related work, the authors mentioned that Info-GAN was not able to determine which factors are assigned to each dimension. I think this is not precise.\n\nThank you for pointing out that our statement about InfoGAN was imprecise. Our original purpose in making this claim was to indicate that there is no *obvious* way to hold identity fixed while varying contingent factors in vanilla InfoGAN, and hence no way to directly compare it to our SD-GAN algorithm. However, as you pointed out, InfoGAN is fully unsupervised while SD-GAN is not. We have updated our related work section to clarify the distinction between InfoGAN and SD-GAN.", "We would like to thank all of the reviewers for their thoughtful comments and suggestions. We are glad that reviewers found our method simple, our results compelling, and our paper to be well-written. While the overall response was positive, reviewers expressed minor concerns related to identity diversity in our generated results. We have uploaded a new version of the paper that will hopefully clarify.\n\n[Identity diversity]\n\nIn the original manuscript, we report one measure of identity diversity. In Table 1, our All-Div metric reports the mean diversity (one minus MS-SSIM) for 10k pairs with random identity. 
While All-Div captures diversity at the pixel level, it is perhaps not the best measure of *semantic* (perceived) diversity. In our updated manuscript, we report another metric in Table 1: the false accept rate (FAR) of FaceNet and human annotators. A higher FAR indicates some evidence of lower identity diversity. By this metric, SD-GANs produce images with lower but comparable diversity to those from AC-DCGAN, and both have lower diversity than the real data. We have added these details to Table 1." ]
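To make the pairwise training scheme described in the abstract and reviews above concrete, here is a minimal NumPy sketch of how SD-GAN-style latent pairs could be sampled: one identity code shared across the pair, concatenated with two independent observation codes. The 50/50 dimension split, the uniform prior, and the function name are assumptions for illustration, not the authors' implementation.

import numpy as np

def sample_sdgan_pair(rng, d_identity=50, d_observation=50):
    # Draw a latent pair that shares an identity code but differs in observation code.
    z_id = rng.uniform(-1.0, 1.0, size=d_identity)        # shared: who the subject is
    z_obs_a = rng.uniform(-1.0, 1.0, size=d_observation)  # contingent: lighting, pose, ...
    z_obs_b = rng.uniform(-1.0, 1.0, size=d_observation)
    z_a = np.concatenate([z_id, z_obs_a])
    z_b = np.concatenate([z_id, z_obs_b])
    return z_a, z_b  # both go through the generator; the Siamese discriminator judges the pair

rng = np.random.default_rng(0)
z_a, z_b = sample_sdgan_pair(rng)
print(np.allclose(z_a[:50], z_b[:50]), np.allclose(z_a[50:], z_b[50:]))  # True False

The corresponding real samples shown to the discriminator are simply two distinct photographs of the same subject, as the abstract describes.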
[ 6, 6, 7, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_S1nQvfgA-", "iclr_2018_S1nQvfgA-", "iclr_2018_S1nQvfgA-", "ryEAzYOlM", "SkIH8vIef", "BkAls-Kgf", "iclr_2018_S1nQvfgA-" ]
iclr_2018_By-7dz-AZ
A Framework for the Quantitative Evaluation of Disentangled Representations
Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data. Despite the growing interest in models which can learn such representations, visual inspection remains the standard evaluation metric. While various desiderata have been implied in recent definitions, it is currently unclear what exactly makes one disentangled representation better than another. In this work we propose a framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available. Three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis. To illustrate the appropriateness of the framework, we employ it to compare quantitatively the representations learned by recent state-of-the-art models.
accepted-poster-papers
The paper proposes evaluation metrics for quantifying the quality of disentangled representations. There is consensus among reviewers that the paper makes a useful contribution towards this end. Authors have addressed most of reviewers' concerns in their response.
train
[ "ByfkoCtlf", "rk7pIjceG", "H1YPgFhlf", "rJnsAbTQG", "S1-fkGaXf", "rk4ik297f", "H1PQy257z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "****\nI acknowledge the author's comments and improve my score to 7.\n****\n\nSummary:\nThe authors propose an experimental framework and metrics for the quantitative evaluation of disentangling representations.\nThe basic idea is to use datasets with known factors of variation, z, and measure how well in an information theoretical sense these are recovered by a representation trained on a dataset yielding a latent code c.\nThe authors propose measures disentanglement, informativeness and completeness to evaluate the latent code c, mostly through learned nonlinear mappings between z and c measuring the statistical relatedness of these variables.\nThe paper ultimately is light on comprehensive evaluation of popular models on a variety of datasets and as such does not quite yield the insights it could.\n\nSignificance:\nThe proposed methodology is relevant, because disentangling representations are an active field of research and currently are not evaluated in a standardized way.\n\nClarity:\nThe paper is lucidly written and very understandable.\n\nQuality:\nThe authors use formal concepts from information theory to underpin their basic idea of recovering latent factors and have spent a commendable amount of effort on clarifying different aspects on why these three measures are relevant.\nA few comments:\n1. How do the authors propose to deal with multimodal true latent factors? What if multiple sets of z can generate the same observations and how does the evaluation of disentanglement fairly work if the underlying model cannot be uniquely recovered from the data?\n2. Scoring disentanglement against known sources of variation is sensible and studied well here, but how would the authors evaluate or propose to evaluate in datasets with unknown sources of variation?\n3. the actual sources of variation are interpretable and explicit measurable quantities here. However, oftentimes a source of variation can be a variable that is hard or impossible to express in a simple vector z (for instance the sentiment of a scene) even when these factors are known. How do the authors propose to move past narrow definitions of factors of variation and handle more complex variables? Arguably, disentangling is a step towards concept learning and concepts might be harder to formalize than the approach taken here where in the experiment the variables are well-behaved and relatively easy to quantify since they relate to image formation physics.\n4. For a paper introducing a formal experimental framework and metrics or evaluation I find that the paper is light on experiments and evaluation. I would hope that at the very least a broad range of generative models and some recognition models are used to evaluate here, especially a variational autoencoder, beta-VAE and so on. Furthermore the authors could consider applying their framework to other datasets and offering a benchmark experiment and code for the community to establish this as a means of evaluation to maximize the impact of a paper aimed at reproducibility and good science.\n\nNovelty:\nPrevious papers like \"beta-VAE\" (Higgins et al. 2017) and \"Bayesian Representation Learning With Oracle Constraints\" by Karaletsos et al (ICLR 16) have followed similar experimental protocols inspired by the same underlying idea of recovering known latent factors, but have fallen short of proposing a formal framework like this paper does. 
It would be good to add a section gathering such attempts at evaluation previously made and trying to unify them under the proposed framework.\n", "The paper addresses the problem of devising a quantitative benchmark to evaluate the capability of algorithms to disentangle factors of variation in the data. \n\n*Quality* \nThe problem addressed is surely relevant in general terms. However, the contributed framework did not account for previously proposed metrics (such as equivariance, invariance and equivalence). Within the experimental results, only two methods are considered: although Info-GAN is a reliable competitor, PCA seems a little too basic to compete against. The choice of using noise-free data only is a limiting constraint (in [Chen et al. 2016], Info-GAN is applied to real-world data). \nFinally, in order to corroborate the quantitative results, authors should have reported some visual experiments in order to assess whether a change in c_j really correspond to a change in the corresponding factor of variation z_i according to the learnt monomial matrix.\n\n*Clarity*\nThe explanation of the theoretical framework is not clear. In fact, Figure 1 is straight in identifying disentanglement and completeness as a deviation from an ideal bijective mapping. But, then, the authors missed to clarify how the definitions of D_i and C_j translate this requirement into math. \nAlso, the criterion of informativeness of Section 2 is split into two sub-criteria in Section 3.3, namely test set NRMSE and Zero-Shot NRMSE: such shift needs to be smoothed and better explained, possibly introducing it in Section 2.\n\n*Originality*\nThe paper does not allow to judge whether the three proposed criteria are original or not with respect to the previously proposed ones of [Goodfellow et al. 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman 2015]. \n\n*Significance*\nThe significance of the proposed evaluation framework is not fully clear. The initial assumption of considering factors of variations related to graphics-generated data undermines the relevance of the work. Actually, authors only consider synthetic (noise-free) data belonging to one class only, thus not including the factors of variations related to noise and/or different classes.\n\nPROS: \nThe problem faced by the authors is interesting\n\nCONS:\nThe criteria of disentanglement, informativeness & completeness are not fully clear as they are presented.\nThe proposed criteria are not compared with previously proposed ones - equivariance, invariance and equivalence [Goodfellow et al. 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman 2015]. Thus, it is not possible to elicit from the paper to which extent they are novel or how they are related..\nThe dataset considered is noise-free and considers one class only. Thus, several factors of variation are excluded a priori and this undermines the significance of the analysis.\nThe experimental evaluation only considers two methods, comparing Info-GAN, a state-of-the-art method, with a very basic PCA.\n\n\n**FINAL EVALUATION**\nThe reviewer rates this paper with a weak reject due to the following points.\n1) The novel criteria are not compared with existing ones [Goodfellow et al. 
2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman 2015].\n2) There are two flaws in the experimental validation:\n\t2.1) The number of methods in comparison (InfoGAN and PCA) is limited.\n\t2.2) A synthetic dataset is only considered.\n\nThe reviewer is favorable in rising the rating towards acceptance if points 1 and 2 will be fixed. \n\n**EVALUATION AFTER AUTHORS' REBUTTAL**\nThe reviewer has read the responses provided by the authors during the rebuttal period. In particular, with respect to the highlighted points 1 and 2, point 1 has been thoroughly answered and the novelty with respect previous work is now clearly stated in the paper. Despite the same level of clarification has not been reached for what concerns point 2, the proposed framework (although still limited in relevance due to the lack of more realistic settings) can be useful for the community as a benchmark to verify the level of disentanglement than newly proposed deep architectures can achieve. Finally, by also taking into account the positive evaluation provided by the fellow reviewers, the rating of the paper has been risen towards acceptance. \n", "The authors consider the metrics for evaluating disentangled representations. They define three criteria: Disentanglement, Informativeness, and Completeness. They learning a linear mapping from the latent code to an idealized set of disentangled generative factors, and then define information-theoretic measures based on pseudo-distributions calculated from the relative magnitudes of weights. Experimental evaluation considers a dataset of 200k images of a teapot with varying pose and color.\n\nI think that defining metrics for evaluating the degree of disentanglement in representations is great problem to look at. Overall, the metrics approached by the authors are reasonable, though the way pseudo-distribution are define in terms of normalized weight magnitudes is seems a little ad hoc to me. \n\nA second limitation of the work is the reliance on a \"true\" set of disentangled factors. We generally want to learn learning disentangled representations in an unsupervised or semi-supervised manner, which means that we will in general not have access supervision data for the disentangled factors. Could the authors perhaps comment on how well these metrics would work in the semi-supervised case?\n\nOverall, I would say this is somewhat borderline, but I could be convinced to argue for acceptance based on the other reviews and the author response. \n\nMinor Commments:\n\n- Tables 1 and 2 would be easier to unpack if the authors were to list the names of the variables (i.e. azimuth instead of z_0) or at least list what each variable is in the caption. \n\n- It is not entirely clear to me how the proposed metrics, whose definitions all reference magnitudes of weights, generalize to the case of random forests. ", "Thank you for your feedback. Please see our response to reviewer 2, which addresses the points made by all three reviewers.", "Thank you for your feedback. Please see our response to reviewer 2, which addresses the points made by all three reviewers.", "R1:\n> The significance of the proposed evaluation framework is not fully clear. The\n> initial assumption of considering factors of variations related to\n> graphics-generated data undermines the relevance of the work.\n\nWe have further clarified the significance of the framework in Section\n1 & 5. The framework is not restricted to graphics-generated data,\nit could also be used e.g. 
with speech synthesis.\n\nR1:\n> But, then, the authors missed to clarify how the definitions of D_i\n> and C_j translate this requirement into math.\n\nThe descriptions of disentanglement and completeness on p 2&3\nmake it clear how D_i and C_j quantify the deviation from an\nideal bijective mapping.\n\nR1:\n> Also, the criterion of informativeness of Section 2 is split into two\n> sub-criteria in Section 3.3, namely test set NRMSE and Zero-Shot\n> NRMSE: such shift needs to be smoothed and better explained, possibly\n> introducing it in Section 2.\n\nWe thank the reviewer for pointing this out. We have now clarified (sec 4.1 final\nsentence) that the zero-shot inference task is a \"bonus\", and not a core\ncomponent of the framework.\n\nR1:\n> The dataset considered is noise-free and considers one class\n> only. Thus, several factors of variation are excluded a priori and\n> this undermines the significance of the analysis.\n\nIt would be easy to add noise (e.g. Gaussian) to the output of the\nrenderer, but we do not believe that this would have a substantial\neffect on the results. It would be interesting to expand the\nexperiments to cover more object classes, but we believe that the\nframework and experiments presented already constitute a substantial\nadvance.\n\nR3:\n> 1. How do the authors propose to deal with multimodal true latent\n> factors? What if multiple sets of z can generate the same observations\n> and how does the evaluation of disentanglement fairly work if the\n> underlying model cannot be uniquely recovered from the data?\n\nIf multiple sets of z can generate the same observations, then this\nshould be reflected in a (multimodal) distribution within the codes\nc. If this were present then it would be propagated via the mapping f\nfrom c to z into a distribution over z's. Current methods like InfoGAN\ntend to make a unimodal assumption about Q(c|x), but if this were\nmultimodal then the above mechanism would work, and one could use the\nobvious log-likelihood criterion log p(z|c) to train the\nregression network (e.g. like a mixture of experts, Jacobs et al\n1991). Of course the ordinary least squares criterion is just a\nspecial case of this with a Gaussian noise model for p(z|c).\nWe have also added a paragraph at the bottom of p3 concerning the\nrotation-of-factors case, for which the model is not identifiable.\n\nR1:\n> ... in order to corroborate the quantitative results, authors\n> should have reported some visual experiments in order to assess\n> whether a change in c_j really correspond to a change in the\n> corresponding factor of variation z_i according to the learnt monomial\n> matrix.\n\nPlease see Figure 6, as per the original submission.\n\nR3:\n> 3. the actual sources of variation are interpretable and explicit\n> measurable quantities here. However, oftentimes a source of variation\n> can be a variable that is hard or impossible to express in a simple\n> vector z (for instance the sentiment of a scene) even when these\n> factors are known. How do the authors propose to move past narrow\n> definitions of factors of variation and handle more complex variables?\n> Arguably, disentangling is a step towards concept learning and\n> concepts might be harder to formalize than the approach taken here\n> where in the experiment the variables are well-behaved and relatively\n> easy to quantify since they relate to image formation physics.\n\nIt is vital to be able to quantify disentangling wrt what R3 calls a\n\"simple\" vector z. 
The contribution of the paper is to do this. There\nmay well be more complex sources of latent structure, such as the\ninter-relationship of different objects in a scene. In our view\nthis can likely be handled by an appropriate hierarchical\nmodel with a vector of z's at the highest level, but this is\nan issue for future work.\n\nR2:\n> Tables 1 and 2 would be easier to unpack if the authors were to list\n> the names of the variables (i.e. azimuth instead of z_0) or at least\n> list what each variable is in the caption.\n\nfixed.", "We thank the reviewers for their helpful comments, and appreciate the\nview that \"defining metrics for evaluating the degree of\ndisentanglement in representations is great problem to look at\".\n\nTwo reviewers raise the issue that our work requires a \"true\" set of\ngenerative factors in order to carry out the evaluation. Our response\nis that if it is not possible to quantify disentanglement in this\nsituation, it will certainly be much more difficult to quantify it\nwhen the ground truth is not known, and this must be the first step.\nWe have now emphasized in the abstract, introduction and conclusion\nthat the method applies when the ground truth generative factors are\nknown.\n\nR1:\n> However, the contributed framework did not account for previously proposed\n> metrics (such a equivariance, invariance and equivalence).\n> ...\n> The paper does not allow to judge whether the three proposed criteria\n> are original or not with respect to the previously proposed ones of\n> [Goodfellow et al. 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014,\n> Jayaraman & Grauman 2015].\n> …\n> The novel criteria are not compared with existing ones [Goodfellow et al.\n> 2009, Lenc & Vedaldi 2015, Cohen & Welling 2014, Jayaraman & Grauman\n> 2015].\nand\nR3:\n> Previous papers like \"beta-VAE\" (Higgins et al. 2017) and \"Bayesian\n> Representation Learning With Oracle Constraints\" by Karaletsos et al\n> (ICLR 16) have followed similar experimental protocols inspired by the\n> same underlying idea of recovering known latent factors, but have\n> fallen short of proposing a formal framework like this paper does. It\n> would be good to add a section gathering such attempts at evaluation\n> previously made and trying to unify them under the proposed\n> framework.\n\nWe have added sec 3 to expand the coverage of related work. The\nrelationship to equivariance and invariance is covered in the last\nparagraph of sec 3; note that such properties arise naturally \nfrom a properly disentangled and informative representation.\n\nWe have expanded the comparison to Higgins et al. (2017) and\nKaraletsos et al (2016) in sec 3. We have also added here a paragraph\non similarities/differences to the work of Yang and Amari (1997) wrt\nthe evaluation of ICA, following comments we received on the paper.\n\nR2:\n> Within the experimental results, only two methods are considered:\n> although Info-GAN is a reliable competitor, PCA seems a little too\n> basic to compete against.\n> ...\n> The experimental evaluation only considers two methods, comparing\n> Info-GAN, a state-of-the-art method, with a very basic PCA.\nand\nR3:\n> The paper ultimately is light on comprehensive evaluation of popular models\n> on a variety of datasets and as such does not quite yield the insights it\n> could.\n> ...\n> For a paper introducing a formal experimental framework and metrics or\n> evaluation I find that the paper is light on experiments and evaluation. 
I\n> would hope that at the very least a broad range of generative models and\n> some recognition models are used to evaluate here, especially a variational\n> autoencoder, beta-VAE and so on.\n\nOur experiments highlight the differences between a baseline (PCA) and a\nstate-of-the-art method (InfoGAN). This contrastive\ncomparison demonstrates the appropriateness of the framework, with the\nthree criteria clearly explaining why InfoGAN's learnt code is superior to PCA's\nand the metric scores quantifying this level of superiority. We will make the\ncode and dataset publicly available on acceptance of the paper and hope this\nfacilitates further comparisons and eventually the establishment of quantitative\nbenchmarks for disentangled factor learning. We note e.g. that the authors of\nthe beta-VAE have not published their code, which has made conducting the\nrequested experiments more difficult.\n\nR3:\n> Furthermore the authors could consider ... offering a benchmark\n> experiment and code for the community to establish this as a means\n> of evaluation to maximize the impact of a paper aimed at\n> reproducibility and good science.\n\nWe will be happy to make the dataset and code publicly available\non acceptance of the paper, as now mentioned in the conclusion.\n\nR2:\n> though the way pseudo-distributions are defined in terms of normalized weight\n> magnitudes seems a little ad hoc to me.\n> ...\n> It is not entirely clear to me how the proposed metrics, whose\n> definitions all reference magnitudes of weights, generalize to the\n> case of random forests.\n\nWe thank the reviewer for this feedback. We have now clarified these points\nby defining the relative importances R_{ij} on p2, and discussing the definition\nof importances for random forests as per Breiman et al (1984) on p3." ]
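As a concrete companion to the discussion of pseudo-distributions above, here is a minimal NumPy sketch of how a per-code disentanglement score could be computed from a matrix of relative importances R (codes × factors), via the entropy of the row-normalized importances; the exact normalization and the entropy base are assumptions for illustration and may differ from the paper's final definitions.

import numpy as np

def disentanglement_scores(R, eps=1e-12):
    # R[i, j] >= 0: importance of code c_i for predicting factor z_j.
    # A code that matters for a single factor scores 1; a code whose importance
    # is spread uniformly over all K factors scores 0.
    K = R.shape[1]
    P = R / (R.sum(axis=1, keepdims=True) + eps)        # per-code pseudo-distribution
    H = -(P * np.log(P + eps)).sum(axis=1) / np.log(K)  # normalized entropy, base K
    return 1.0 - H

R = np.array([[0.90, 0.05, 0.05],    # nearly one-factor code -> score ~0.64
              [0.34, 0.33, 0.33]])   # spread-out code -> score ~0.0
print(disentanglement_scores(R))

A completeness score for each factor could be built the same way by normalizing the columns of R instead of the rows.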
[ 7, 6, 6, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_By-7dz-AZ", "iclr_2018_By-7dz-AZ", "iclr_2018_By-7dz-AZ", "rk7pIjceG", "ByfkoCtlf", "H1YPgFhlf", "H1YPgFhlf" ]
iclr_2018_HJcSzz-CZ
Meta-Learning for Semi-Supervised Few-Shot Classification
In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would.
accepted-poster-papers
The paper extends the earlier work on Prototypical Networks to the semi-supervised setting. Reviewers largely agree that the paper is well-written. There are some concerns about the incremental nature of the paper with respect to novelty, but in light of the reported empirical results, which show clear improvement over earlier work, and given the importance of the topic, I recommend acceptance.
train
[ "rJzcaGvgf", "Hyx7bEPez", "SkW9BQ9lG", "Hyrfr91fG", "SJ7s7qkzG", "Hy4CMckMG", "BkmDZ-ZZG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "This paper is an extension of the “prototypical network” which will be published in NIPS 2017. The classical few-shot learning has been limited to using the unlabeled data, while this paper considers employing the unlabeled examples available to help train each episode. The paper solves a new semi-supervised situation, which is more close to the setting of the real world, with an extension of the prototype network. Sufficient implementation detail and analysis on results.\n\nHowever, this is definitely not the first work on semi-supervised formed few-shot learning. There are plenty of works on this topic [R1, R2, R3]. The authors are advised to do a thorough survey of the relevant works in Multimedia and computer vision community. \n \nAnother concern is that the novelty. This work is highly incremental since it is an extension of existing prototypical networks by adding the way of leveraging the unlabeled data. \n\nThe experiments are also not enough. Not only some other works such as [R1, R2, R3]; but also the other naïve baselines should also be compared, such as directly nearest neighbor classifier, logistic regression, and neural network in traditional supervised learning. Additionally, in the 5-shot non-distractor setting on tiered ImageNet, only the soft kmeans method gets a little bit advantage against the semi-supervised baseline, does it mean that these methods are not always powerful under different dataset?\n\n[R1] “Videostory: A new multimedia embedding for few-example recognition and translation of events,” in ACM MM, 2014\n\n[R2] “Transductive Multi-View Zero-Shot Learning”, IEEE TPAMI 2015\n\n[R3] “Video2vec embeddings recognize events when examples are scarce,” IEEE TPAMI 2014\n", "In this paper, the authors studied the problem of semi-supervised few-shot classification, by extending the prototypical networks into the setting of semi-supervised learning with examples from distractor classes. The studied problem is interesting, and the paper is well-written. Extensive experiments are performed to demonstrate the effectiveness of the proposed methods. While the proposed method is a natural extension of the existing works (i.e., soft k-means and meta-learning).On top of that, It seems the authors have over-claimed their model capability at the first place as the proposed model cannot properly classify the distractor examples but just only consider them as a single class of outliers. Overall, I would like to vote for a weakly acceptance regarding this paper.", "This paper proposes to extend the Prototypical Network (NIPS17) to the semi-supervised setting with three possible \nstrategies. One consists in self-labeling the unlabeled data and then updating the prototypes on the basis of the \nassigned pseudo-labels. Another is able to deal with the case of distractors i.e. unlabeled samples not beloning to\nany of the known categories. In practice this second solution is analogous to the first, but a general 'distractor' class\nis added. 
Finally the third technique learns to weight the samples according to their distance to the original prototypes.\n\nThese strategies are evaluated in a particular semi-supervised transfer learning setting: the models are first trained \non some source categories with few labeled data and large unlabeled samples (this setting is derived by subselecting\nmultiple times a large dataset), then they are used on a final target task with again few labeled data and large \nunlabeled samples but beloning to a different set of categories.\n\n+ the paper is well written, well organized and overall easy to read\n+/- this work builds largely on previous work. It introduces only some small technical novelty inspired by soft-k-means\nclustering that anyway seems to be effective.\n+ different aspect of the problem are analyzed by varying the number of disctractors and varying the level of\nsemantic relatedness between the source and the target sets\n\nFew notes and questions\n1) why for the omniglot experiment the table reports the error results? It would be better to present accuracy as for the other tables/experiments\n2) I would suggest to use source and target instead of train and test -- these two last terms are confusing because\nactually there is a training phase also at test time.\n3) although the paper indicate that there are different other few-shot methods that could be applicable here, \nno other approach is considered besides the prothotipical network and its variants. An further external reference \ncould be used to give an idea of what would be the experimental result at least in the supervised case.\n\n\n\n\n", "“There are plenty of works on this topic…”\nWe also thank the reviewer for pointing out related zero-shot learning literature and we will study them and add those references to the next version of the paper. Based on our preliminary reading, [1] is a journal version that builds on top of [2], with both papers presenting very similar approaches for the application of event recognition in videos. Transductive Multi-View Zero-Shot Learning [3] uses a similar label propagation procedure as ours. However, while [3] uses standalone deep feature extractors, we show that our semi-supervised prototypical network can be trained completely end-to-end. One of the non-trivial results of our paper is that we show that end-to-end meta-learning significantly improves the performance (see Semi-supervised Inference vs. Soft K-means). We would like to emphasize that end-to-end semi-supervised learning in a meta-learning framework is, to the best of our knowledge, a novel contribution.\n\n“...other naïve baselines should also be compared...”\nThe recent literature on few-shot learning has established that meta-learning-based approaches outperform kNN and standard neural network based approaches. For the Omniglot dataset, Mann et al. [4] has previously studied baselines such as KNN either in pixel space or deep features, and feedforward NNs. They found these baselines all lag behind their method by quite a lot, and meanwhile Prototypical Networks outperform Mann et al. by another significant margin. For example, Table 1 summarizes the performance for 5-shot, 5-way classification. 
Therefore, we will provide supervised nearest neighbor, logistic regression, and neural network baselines for completeness; however, we believe that our work is built on top of state-of-the-art methods, and should beat these simple baselines.\n\nTable 1 - Omniglot dataset baselines\nMethod Accuracy\nKNN pixel 48%\nKNN deep 69%\nMann et al. [4] 88%\nProtoNet 99.7%\n\n“...not always powerful under different dataset?”\nFor completeness we ran both 1-shot and 5-shot settings and found that our method consistently outperforms the baselines. While in 5-shot the improvement is less, this is reasonable since the number of labeled items is larger and the benefit brought by unlabeled items is considerably smaller than in 1-shot settings. We disagree with the comment that our model is not robust under different datasets, since the best settings we found is consistent across all three, quite diverse, datasets, including the novel and much larger tieredImageNet.\n\nReferences:\n[1] “Video2vec embeddings recognize events when examples are scarce,” IEEE TPAMI 2014\n[2] “Videostory: A new multimedia embedding for few-example recognition and translation of events,” in ACM MM, 2014.\n[3]: Transductive Multi-View Zero-Shot Learning, IEEE TPAMI 2015.\n[4]: One-shot learning with Memory-Augmented Neural Networks. ICML 2016.", "Thank you for the comments. We’d like to clarify our setup here: The problem as we have defined it is to correctly perform the given N-way classification in each episode (similarly as in the previous work). Distractors are introduced to make the problem harder in a more realistic way, but the goal is not to be able to classify them. Specifically, our model needs to understand which points are irrelevant for the given classification task (“distractors”) in order to not take them into account, but actually classifying these distractors into separate categories is not required in order to perform the given classification task, so our models make no effort to do this.\n\nFurther, we would like to emphasize that adding distractor examples in few-shot classification settings is a novel and more realistic learning environment compared to previous approaches in supervised few-shot learning and as well as concurrent approaches in semi-supervised few-shot learning [1,2]. It is non-trivial to show that various versions of semi-supervised clustering can be trained end-to-end from scratch as another layer on top of prototypical networks, with the presence of distractor clusters (note that each distractor class has the same number of images as a non-distractor class). \n\nReferences:\n[1]: Few-Shot Learning with Graph Neural Networks. Anonymous. Submitted to ICLR, 2017.\n[2]: Semi-Supervised Few-Shot Learning with Prototypical Networks. Rinu Boney and Alexander Ilin. CoRR, abs/1711.10856, 2017.", "We appreciate the constructive comments from reviewer 2 and we are delighted to learn that the reviewer feels that our paper is well written and organized.\n\n“builds largely on previous work… only some small technical novelty…”\nWe would like to emphasize that we introduce a new task for few-shot classification, incorporating unlabeled items. This is impactful as follow-up work can use our dataset as a public benchmark. In fact, there are several concurrent ICLR submissions and arxiv pre-prints [1,2] that also introduce semi-supervised few-shot learning. 
However compared to these concurrent papers, our benchmark extends beyond this work into more realistic and generic settings, with hierarchical class splits and unlabeled distractor classes, which we believe will make positive contributions to the community.\n\nThe fact that our semi-supervised prototypical network can be trained end-to-end from scratch is non-trivial, especially under many distractor clusters (note that each distractor class has the same number of images as a non-distractor class). We argue that our extension is simple yet effective, serving as another layer on top of the regular prototypical network layer, and provides consistent improvement in the presence of unlabeled examples. Further, to our knowledge, our best-performing method, the masked soft k-means, is novel.\n\n“It would be better to present accuracy…”\nThank you for the suggestion. We will revise it in our next version.\n\n“no other approach is considered besides the prototypical network and its variants.”\nProtoNets is one of the top performing methods for few-shot learning and our proposed extensions each naturally forms another layer on top of the Prototypical layer. To address the concern, we are currently running other variants of the models such as a nearest neighbor baseline, and will report results before the ICLR discussion period ends. In the Omniglot dataset literature, many simple baselines has been extensively explored, and Prototypical Networks are so far the state-of-the-art. Table 1 summarizes the performance for a 5-way 5-shot benchmark (results reported by [3])\n\nTable 1 - Omniglot dataset baselines\nMethod Accuracy\nKNN pixel 48%\nKNN deep 69%\nMann et al. [3] 88%\nProtoNet 99.7%\n\nReferences:\n[1]: Few-Shot Learning with Graph Neural Networks. Anonymous. Submitted to ICLR, 2017.\n[2]: Semi-Supervised Few-Shot Learning with Prototypical Networks. Rinu Boney and Alexander Ilin. CoRR, abs/1711.10856, 2017.\n[3]: One-shot learning with Memory-Augmented Neural Networks. ICML 2016.", "Great work! Could you release the split for tiered-Imagenet?" ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HJcSzz-CZ", "iclr_2018_HJcSzz-CZ", "iclr_2018_HJcSzz-CZ", "rJzcaGvgf", "Hyx7bEPez", "SkW9BQ9lG", "iclr_2018_HJcSzz-CZ" ]
iclr_2018_H1q-TM-AW
A DIRT-T Approach to Unsupervised Domain Adaptation
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high capacity, then feature distribution matching is a weak constraint; 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improves the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks.
accepted-poster-papers
Well motivated and well written, with extensive results. The paper also received positive comments from all reviewers. The AC recommends that the paper be accepted.
train
[ "rknCb19ef", "Syaz3vINf", "Hk19swKeM", "BkhjhvQ-G", "rJMJD0n7z", "ryETB02XG", "SknfSAnQf", "S17uEA2QG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper presents two complementary models for unsupervised domain adaptation (classification task): 1) the Virtual Adversarial Domain Adaptation (VADA) and 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T). The authors make use of the so-called cluster assumption, i.e., decision boundaries should not cross high-density data regions. VADA extends the standard Domain-Adversarial training by introducing an additional objective L_t that measures the target-side cluster assumption violation, namely, the conditional entropy w.r.t. the target distribution. Since the empirical estimate of the conditional entropy breaks down for non-locally-Lipschitz classifiers, the authors also propose to incorporate virtual adversarial training in order to make the classifier well-behaved. The paper also argues that the performance on the target domain can be further improved by a post-hoc minimization of L_t using natural gradient descent (DIRT-T) which ensures that the decision boundary changes incrementally and slowly. \n\nPros:\n+ The paper is written clearly and easy to read\n+ The idea to keep the decision boundary in the low-density region of the target domain makes sense\n+ The both proposed methods seem to be quite easy to implement and incorporate into existing DATNN-based frameworks\n+ The combination of VADA and DIRT-T performs better than existing DA algorithms on a range of visual DA benchmarks\n\nCons:\n- Table 1 can be a bit misleading as the performance improvements may be partially attributed to the fact that different methods employ different base NN architectures and different optimizers\n- The paper deals exclusively with visual domains; applying the proposed methods to other modalities would make this submission stronger\n\nOverall, I think it is a good paper and deserves to be accepted to the conference. I’m especially appealed by the fact that the ideas presented in this work, despite being simple, demonstrate excellent performance.\n\nPost-rebuttal revision:\nAfter reading the authors' response to my review, I decided to leave the score as is.", "I'm truly satisfied by the new experiments, which demonstrate the benefit of \"virtual adversarial training\" in the whole process. \n\nUnfortunately, I did not take the time to explore the connection of \"Probabilistic Lipschitzness\" with the current work, but it's certainly an interesting thing to look at.\n", "As there are many kinds of domain adaptation problems, the need to mix several learning strategies to improve the existing approaches is obvious. However, this task is not necessarily easy to succeed. The authors proposed a sound approach to learn a proper representation (in an adversarial way) and comply the cluster assumption.\n\nThe experiments show that this Virtual Adversarial Domain Adaptation network (VADA) achieves great results when compared to existing learning algorithms. Moreover, we also see the learned model is consistently improved using the proposed \"Decision-boundary Iterative Refinement Training with a Teacher\" (DIRT-T) approach.\n\nThe proposed methodology relies on multiple choices that could sometimes be better studied and/or explained. Namely, I would like to empirically see which role of the locally-Lipschitz regularization term (Equation 7). Also, I wonder why this term is tuned by an hyperparameter (lamda_s) for the source, while a single hyperparamer (lambda_t) is used for the sum of the two target quantity.\n \nOn the theoretical side, the discussion could be improved. 
Namely, Section 3 about \"limitation of domain adversarial training\" correctly explained that \"domain adversarial training may not be sufficient for domain adaptation if the feature extraction function has high-capacity\". It would be interesting to explain whether this observation is consistent with Theorem 1 of the paper (due to Ben-David et al., 2010), on which several domain adversarial approaches are based. The need to consider supplementary assumptions (such as ) to achieve good adaptation can also be studied through the lens of more recent Ben-David's work, e.g. Ben-David and Urner (2014). In the latter, the notion of \"Probabilistic Lipschitzness\", which is a relaxation of the \"cluster assumption\" seems very related to the actual work.\n\nReference:\nBen-David and Urner. Domain adaptation-can quantity compensate for quality?, Ann. Math. Artif. Intell., 2014\n\nPros:\n- Propose a sound approach to mix two complementary strategies for domain adaptation.\n- Great empirical results.\n\nCons:\n- Some choices leading to the optimization problem are not sufficiently explained.\n- The theoretical discussion could be improved.\n\nTypos:\n- Equation 14: In the first term (target loss), theta should have an index t (I think).\n- Bottom of page 6: \"... and that as our validation set\" (missing word).\n", "The paper was a good contribution to domain adaptation. It provided a new way of looking at the problem by using the cluster assumption. The experimental evaluation was very thorough and shows that VADA and DIRT-T performs really well. \n\nI found the math to be a bit problematic. For example, L_d in (4) involves a max operator. Although I understand what the authors mean, I don't think this is the correct way to write this. (5) should discuss the min-max objective. This will probably involve an explanation of the gradient reversal etc. Speaking of GRL, it's mentioned on p.6 that they replaced GRL with the traditional GAN objective. This is actually pretty important to discuss in detail: did that change the symmetric nature of domain-adversarial training to the asymmetric nature of traditional GAN training? Why was that important to the authors?\n\nThe literature review could also include Shrivastava et al. and Bousmalis et al. from CVPR 2017. The latter also had MNIST/MNIST-M experiments.", "Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\n\n“Namely, I would like to empirically see which role of the locally-Lipschitz regularization term (Equation 7).”\n\nThank you for the suggestion. We have included an extensive ablation study of the role of the locally-Lipschitz regularization term in Section 6.3.1. Our results show that while conditional entropy minimization alone is sufficient to instantiate the cluster assumption and improve over DANN, the additional incorporation of the locally-Lipschitz regularization term does indeed offer additional performance improvement.\n\n“Also, I wonder why this term is tuned by an hyperparameter (λ_s) for the source, while a single hyperparamer (λ_t) is used for the sum of the two target quantity”\n\nThe choice to use λ_t for the sum of the two target quantities is purely for simplicity. 
Since the official implementation of VAT (https://github.com/takerum/vat_tf) used the same weighting for the conditional entropy and virtual adversarial training, we opted to do that as well in the target domain.\n\n“It would be interesting to explain whether this observation is consistent with Theorem 1 of the paper (due to Ben-David et al., 2010)”\n\nThank you for the suggestion. We have added the connection in Appendix E. In particular, we can show that, if the embedding function has infinite-capacity, the H\\DeltaH-divergence achieves the maximum value of 2 even when the feature distribution matching constraint is satisfied. This results in Theorem 1 becoming a vacuous upper bound.\n\n“The notion of ‘Probabilistic Lipschitzness’, which is a relaxation of the ‘cluster assumption’ seems very related to the actual work.”\n\nThank you for this insight. We have incorporated a brief mention of probabilistic Lipschitzness in Section 2. It seems that a stronger connection can be made and we appreciate any additional suggestions you may have on how to better address probabilistic Lipschitzness in our paper.\n\n“Equation 14: In the first term (target loss), theta should have an index t (I think).” and “Bottom of page 6: ‘... and that as our validation set’ (missing word).”\n\nFixed. Thanks! ", "Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\n\n“The experimental evaluation was very thorough and shows that VADA and DIRT-T performs really well.”\n\nWe have added additional experiments that may be of interest to you. In Section 6.3.1, we perform an extensive ablation study demonstrate the relative contribution of virtual adversarial training. In Section 6.2, we apply VADA/DIRT-T to a non-visual domain adaptation task.\n\n“For example, L_d in (4) involves a max operator. Although I understand what the authors mean, I don’t think this is the correct way to write this.”\n\nWe agree the the use of the max operator is informal. To account for the possibility that the maximum is not achievable, using the supremum is more appropriate. We have updated the paper to reflect this. Our choice of presentation is now in keeping with that in GAIL (Eq. (14), Ho & Ermon (2016)) and WGAN (Eq. (2), Arjovsky et al. (2017)).\n\n“(5) should discuss the min-max objective. This will probably involve an explanation of the gradient reversal etc. Speaking of GRL, it’s mentioned on p.6 that they replaced GRL with the traditional GAN objective. This is actually pretty important to discuss in detail: did that change the symmetric nature of domain-adversarial training to the asymmetric nature of traditional GAN training? Why was that important to the authors?”\n\nThank you for pointing this out. We have added a footnote next to (5) and modified Appendix C to reflect the following opinion:\n\nWe believe that, at a high level, it is not of particular importance which optimization procedure is used to approximate the mini-max optimization problem. Our decision to switch from the symmetric to asymmetric training is motivated by\n\n1. The extensive GAN literature which advocates the asymmetric optimization approach.\n\n2. Our initial experiments on MNIST → MNIST-M which suggest that the asymmetric optimization approach is more stable.\n\nWe are not committed to either optimization strategy and encourage practitioners to try both when applying VADA. 
In case the reviewer is interested in the performance of pure domain adversarial training using the asymmetric optimization approach, we have included it in Section 6.3.1.\n\n“The literature review could also include Shrivastava et al. and Bousmalis et al. from CVPR 2017. The latter also had MNIST/MNIST-M experiments.”\n\nThank you for the suggestion. We have incorporated Bousmalis’s paper into our comparison.\n\nReferences\nJonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565–4573, 2016.\n\nMartin Arjovsky, Soumith Chintala, and Le ́on Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.\n", "Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\n\n“Table 1 can be a bit misleading as the performance improvements may be partially attributed to the fact that different methods employ different base NN architectures and different optimizers”\n\nThe purpose of Table 1 is to offer a holistic evaluation of the entire set-up. As such, it demonstrates that there exists a training objective + architecture + optimization configuration such significant improvements over previous methods/implementations are possible. We provided such a table in part because doing so seems to be standard practice in semi-supervised learning and domain adaptation papers (Miyato et al., 2017; Laine & Aila, 2016; Tarvainen & Valpola, 2017; Saito et al., 2017; French et al., 2017).\n\nTo offer a fairer comparison, we made the following modifications to the paper:\n\n1. We explicitly mention Table 2 in the main body of the paper, in the section Model Evaluation/Overall\n\n2. We added an ablation study to Section 6.3.1 to demonstrate the relative contribution of virtual adversarial training in comparison to our base implementation of domain adversarial training.\n\n“The paper deals exclusively with visual domains; applying the proposed methods to other modalities would make this submission stronger”\n\nWe agree that the submission would be stronger by performing experiments in other modalities. To that end, we added an example of applying VADA and DIRT-T to a non-visual data in Section 6.2. We chose to apply our model to a Wi-Fi activity recognition dataset. Our results show that VADA significantly improves upon DANN. Unfortunately, due to the small target domain training set size, DIRT-T does not improve upon VADA. We provide additional experiments in Appendix F which suggest that VADA already achieves strong clustering on the Wi-Fi dataset, and therefore DIRT-T is not expected to improve performance in this situation.\n\nWe leave as future work the study of applying VADA/DIRT-T (and the general application of the cluster assumption) to text classification domain adaptation tasks. Given the success of VAT on text classification (Miyato et al., 2016), we are optimistic that this line of work is promising.\n\nReferences\nTakeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial train- ing: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.\n\nTakeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. stat, 1050:25, 2016.\n\nSamuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.\n\nAntti Tarvainen and Harri Valpola. 
Mean teachers are better role models: Weight-averaged consis- tency targets improve semi-supervised deep learning results. 2017.\n\nGeoffrey French, Michal Mackiewicz, and Mark Fisher. Self-ensembling for domain adaptation. arXiv preprint arXiv:1706.05208, 2017.\n\nKuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. Asymmetric tri-training for unsupervised domain adaptation. arXiv preprint arXiv:1702.08400, 2017.", "To improve the quality of the paper, we have made several adjustments to our paper. In addition to minor edits (e.g. fixing typos, improving clarity), we made the following large changes:\n\n1. We added a non-visual domain adaptation task (Wi-Fi activity recognition) to Section 6.2 and Appendix F.\n\n2. We added an additional ablation experiment testing the contribution of virtual adversarial training to Section 6.3.\n\n3. We improved the presentation of Proposition 1 in Appendix E and added a subsection connecting Proposition 1 to Ben-David’s domain adaptation upper bound (Theorem 1).\n\nThank you for taking the time to review this paper!" ]
[ 8, -1, 7, 7, -1, -1, -1, -1 ]
[ 4, -1, 4, 2, -1, -1, -1, -1 ]
[ "iclr_2018_H1q-TM-AW", "rJMJD0n7z", "iclr_2018_H1q-TM-AW", "iclr_2018_H1q-TM-AW", "Hk19swKeM", "BkhjhvQ-G", "rknCb19ef", "iclr_2018_H1q-TM-AW" ]
iclr_2018_r1Dx7fbCW
Generalizing Across Domains via Cross-Gradient Training
We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.
accepted-poster-papers
Well motivated and well received by all of the expert reviewers. The AC recommends that the paper be accepted.
test
[ "SkBWbEhVz", "rkJNQW5gM", "Syaaxl0lf", "SJbAMFl-z", "rkDrknW-G", "HyQ8rr6Xf", "rJ13AnxXG", "HyalvTemf", "S1CwzpemG", "SkE4Ang7M", "SkHQ6TqWG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "The rebuttal addresses my questions. The authors are recommended to explicitly use \"domain generalization\" in the paper and/or the title to make the language consistent with the literature. ", "This paper proposed a domain generalization approach by domain-dependent data augmentation. The augmentation is guided by a network that is trained to classify a data point to different domains. Experiments on four datasets verify the effectiveness of the proposed approach. \n\nStrengths:\n+ The proposed classification model is domain-dependent, as opposed to being domain-invariant. This is new and differs from most existing works on domain adaptation/generalization, to the best of my knowledge. \n+ The experiments show that the proposed method outperforms two baselines. However, more related approaches could be included to strengthen the experiments (see below for details).\n\n\nWeaknesses:\n- The paper studies domain generalization and yet fails to position it in the right literature. By a simple search of \"domain generalization\" using Google Scholar, I found several existing works on this problem and have listed some below. The authors may consider to include them in both the related works and the experiments. \n\nQuestions: \n1. It is intuitive to directly define the data augmentation by x_i+Grad_x J_d. Why is it necessary to instead define it as the inverse transformation G^{-1}(g') and then go through the approximations to derive the final augmentation? \n2. Is the CrossGrad training necessary? What if one trains the network in two steps? Step 1: learn G using J_d and a regularization to avoid misclassification over the labels using the original data. Step 2: Learn the classification network (possibly different from G) by the domain-dependent augmentation.\n\n\nSaeid Motiian, Marco Piccirilli, Donald A. Adjeroh, and Gianfranco Doretto. Unified deep supervised\ndomain adaptation and generalization. In IEEE International Conference on Computer\nVision (ICCV), 2017.\n\nMuandet, K., Balduzzi, D. and Schölkopf, B., 2013. Domain generalization via invariant feature representation. In Proceedings of the 30th International Conference on Machine Learning (ICML-13) (pp. 10-18).\n\nXu, Z., Li, W., Niu, L. and Xu, D., 2014, September. Exploiting low-rank structure from latent domains for domain generalization. In European Conference on Computer Vision (pp. 628-643). Springer, Cham.\n\nGhifary, M., Bastiaan Kleijn, W., Zhang, M. and Balduzzi, D., 2015. Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE international conference on computer vision (pp. 2551-2559).\n\nGan, C., Yang, T. and Gong, B., 2016. Learning attributes equals multi-source domain generalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 87-97).", "The method is posed in the Bayesian setting, the main idea being to achieve the data augmentation through domain-guided perturbations of input instances. Different from traditional adaptation methods, where the adaptation step is applied explicitly, in this paper the authors exploit labeled instances from several domains to collectively train a system that can handle new domains without the adaptation step. While this is another way of looking at domain adaptation, it may be misleading to say 'without' adaptation step. By the gradient perturbations on multi-domain training data, the learning of the adaptation step is effectively done. This should be clarified in the paper. 
The notion of using 'scarce' training domains to cover possible choices for the target domain is interesting and novel. The experimental validation should also include a deeper analysis of this factor: how the proposed adaptation performance is affected by the scarcity of the training multi-domain data. While this is partially shown in Table 8, it seems that by adding more domains the performance is compromised (compared to the baseline) (?). It would be useful to see how the model ranks the multiple domains in terms of their relatedness to the target domain. Figs 6-7 are unclear and difficult to read. The captions should provide more information about the main point of these figures. ", "Quality, clarity : Very well written, well motivated, convincing experiments and analysis\nOriginality: I think they framed the problem of domain-robustness very well: how to obtain a \"domain level embedding\" which generalizes to unseen domains. To do this the authors introduce the CrossGrad method, which trains both a label classification task and a domain classification task (from which the domain-embedding is obtained)\nSignificance: Robustness in new domains is a very important practical and theoretical issue.\n\nPros:\n- It's novel, interesting, well written, and appears to work very well in the experiments provided.\n\nCons:\n- Formally, for the embedding to generalize one needs to make the \"domain continuity assumption\", which is not guaranteed to hold in any realistic settings (e.g. when there are no underlying continuous factors) \n- The training set needs to be in the form (x,y,d) where 'd' is a domain, this information might not exist or be only partially present.\n- A single step required 2 forward and 2 backward passes - thus is twice as expensive. \n\nConstructive comments:\n- Algorithm 1 uses both X_l and X_d, yet the text only discusses X_d, there is some symmetry, but more discussion will help.\n- LabelGrad is mentioned in section 4 but defined in section 4.1, it should be briefly defined in the first mention.", "The authors proposed to perturbed the estimated domain features for data augmentation, which is done by using the gradients of label and domain classification losses. The idea is interesting and new. And the paper is well written.\n\nMy major conerns are as follows:\n1. Section 3 seems a bit too lengthy or redundant to derive the data augmentation by introducing the latent domain features g. In fact, without g, it also makes sense to perturb x as done in lines 6 and 7 in Alg. 1.\n2. The assumption in (A1) can only be guaranteed under certain theoretical conditions. The authors should provide more explanations to better convey the assumption to readers.\n\nMinors:\n1. LabelGrad was not defined when firstly being used in Section 4.\n2. Fig. 1 looks redundant.", "We have uploaded a revised draft of our paper, in line with the valuable comments. To summarize the changes:\n- A discussion of the additional papers referenced in the reviews has been added to the Related Work section.\n- A comparison with Ghifary et al. 2015 has been added to Table 9.\n- The domain continuity assumption is elaborated on p3-4.\n- The domain-based perturbation for the label classifier is motivated in the \"forward\" direction, i.e. directly as \\nabla_x J_d (and similarly for the domain classifier). Reviewers felt this was more intuitive. 
To give insight into the relation between the perturbations of input x and domain features g, we added a short paragraph at the end of p4 and moved the related \"backward\" derivation (the one in the original draft) to an appendix.\n- A note on how the base networks are modified and how the complementary loss is used has been added.\n- Several other small issues in the exposition have been fixed.\n\nWe thank the reviewers and other commenters for their suggestions in regards to improving the exposition, and look forward to further suggestions and comments.", "We thank the reviewer for their time and effort.\n\n> While this is another way of looking at domain adaptation, it may be misleading to say 'without' adaptation step. By the gradient perturbations on multi-domain training data, the learning of the adaptation step is effectively done. This should be clarified in the paper.\n\nThe word “adaptation” suggests modifying the model for a given target domain. In contrast, we focus on domain-generalization, where we are not given any specific target domain. But the reviewer is right in that there is an implicit adaptation component, which we shall clarify in the revised draft.\n\n> The notion of using 'scarce' training domains to cover possible choices for the target domain is interesting and novel. The experimental validation should also include a deeper analysis of this factor: how the proposed adaptation performance is affected by the scarcity of the training multi-domain data. While this is partially shown in Table 8, it seems that by adding more domains the performance is compromised (compared to the baseline) (?). \n\nWe shall add a deeper analysis/discussion in the revision. With training data covering a larger number of domains, the baseline automatically tends to become domain-aware, and domain generalization techniques (ours and the others) have less room for improvement. However, it is possible that we can improve on our current performance by tuning the network parameters for different levels of scarcity. We shall explore this.\n\n> It would be useful to see how the model ranks the multiple domains in terms of their relatedness to the target domain.\n\nThe analysis in Figure 6 is motivated by exactly this question. Instead of giving a single number (or rank), we have visualized the relation between the training and test domains in terms of (projections of) the “g” embedding. We could repeat these plots for other domains like Font, though these are probably best presented in supplementary material.\n\n> Figs 6-7 are unclear and difficult to read. The captions should provide more information about the main point of these figures\n\nWe will address this in our revised draft.\n", "We thank the reviewer for their time and effort.\n\n> Section 3 seems a bit too lengthy or redundant to derive the data augmentation by introducing the latent domain features g. In fact, without g, it also makes sense to perturb x as done in lines 6 and 7 in Alg. 1. 2. \n\nA similar concern was raised by Reviewer 1 as well. We wanted to provide some insight on why perturbing x by Grad_x J_d should provide generalization along domain. Also, we hope this more flexible framework will inspire future work on alternative ways of sampling g-s and the corresponding inverse.\n\n> The assumption in (A1) can only be guaranteed under certain theoretical conditions. 
The authors should provide more explanations to better convey the assumption to readers.\n\nYes, but in many cases, domain variation can be captured via latent continuous features (e.g. slant, ligature size, etc. for fonts; and speaking rate, pitch, intensity, etc. for speech). CrossGrad strives to characterize these continuous features for capturing domain variation. We shall elaborate on this further in our revised draft.\n\n> Minors: 1. LabelGrad was not defined when firstly being used in Section 4. 2. Fig. 1 looks redundant.\n\nWe will fix the issues pointed out .\n", "We thank the reviewer for their time and effort.\n\n> It is intuitive to directly define the data augmentation by x_i+Grad_x J_d. Why is it necessary to instead define it as the inverse transformation G^{-1}(g') and then go through the approximations to derive the final augmentation?\n\nYes, computationally they turn out to be the same but our exposition provides some insight on why perturbing x by Grad_x J_d should provide generalization along domain. Also, we hope this more flexible framework will inspire future work on alternative ways of sampling g-s and the corresponding inverse. We will add a discussion of both ways of motivating the perturbation (g from x, and x from g) in our revised draft.\n\n> Is the CrossGrad training necessary? What if one trains the network in two steps? Step 1: learn G using J_d and a regularization to avoid misclassification over the labels using the original data. Step 2: Learn the classification network (possibly different from G) by the domain-dependent augmentation.\n\nWe implemented the suggested method and found that it performs worse than the baseline. The accuracy for label classification on the Google Fonts dataset, on the test set, is around .63 while the baseline is around .68. If we learn G as a separate first step using training domains, there is no guarantee that G will generalize in a “meaningful way” across unseen domains. Then using this G for data augmentation will not be helpful. CrossGrad tries to force G to learn a meaningful continuous domain representation using perturbation in the label space. \n\n> The paper studies domain generalization and yet fails to position it in the right literature. By a simple search of \"domain generalization\" using Google Scholar, I found several existing works on this problem and have listed some below. The authors may consider to include them in both the related works and the experiments.\n\nWe thank the reviewer for pointing us to these references. Among these, we’ve already cited Motiian et al.’s work on domain generalization and used their model for comparison on the MNIST task. The other references will be discussed in the related work section in our revised draft. Many previous approaches try to erase domain information. For instance, Muandet et al. '13 and Ghifary et al. '15 try to extract generalizable features across domains. Our model is different in that we try to leverage the information that domain features provide about labels.\n", "We thank the reviewer for their time and effort.\n> 'Formally, for the embedding to generalize one needs to make the \"domain continuity assumption\", which is not guaranteed to hold in any realistic settings (e.g. when there are no underlying continuous factors)'\n\nYes, real-life domains are mostly discrete (e.g. fonts, speakers, etc) but their variation can often be captured via latent continuous features (e.g. slant, ligature size, etc. for fonts; and speaking rate, pitch, intensity, etc. 
for speech). CrossGrad strives to characterize these continuous features for capturing domain variation.\n\n> 'The training set needs to be in the form (x,y,d) where 'd' is a domain, this information might not exist or be only partially present.'\n\nWe do not need domain information for all the data that will be used in our eventual training: we can bootstrap from a relatively small amount of domain-labeled data, by training a classifier for the domains present in the training data, which can then be used to label the rest of the data. We also note that the domain adaptation/generalization literature typically does rely on training data with source-domain labels.\n\nWe are fixing the revision with other helpful comments by the reviewer on notation and writing.\n", "There is one recent DG paper that seems to have related methodology.\nLi et al, AAAI 2018, Learning to Generalize: Meta-Learning for Domain Generalization. \n( https://arxiv.org/abs/1710.03463 )\nIt would be good to contrast this as well.\n" ]
[ -1, 7, 7, 8, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, 4, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "rkJNQW5gM", "iclr_2018_r1Dx7fbCW", "iclr_2018_r1Dx7fbCW", "iclr_2018_r1Dx7fbCW", "iclr_2018_r1Dx7fbCW", "iclr_2018_r1Dx7fbCW", "Syaaxl0lf", "rkDrknW-G", "rkJNQW5gM", "SJbAMFl-z", "rkJNQW5gM" ]
iclr_2018_ByRWCqvT-
Learning to cluster in order to transfer across domains and tasks
This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only consider whether two instances belong to the same class or not (pairwise semantic similarity). This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state-of-the-art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine it with a domain adaptation loss, it shows further improvement.
accepted-poster-papers
Pros -- A novel formulation for cross-task and cross-domain transfer learning. -- Extensive evaluations. Cons -- Presentation a bit confusing, please improve. The paper received positive reviews from reviewers. But the reviewers pointed out some issues with presentation and flow of the paper. Even though the revised version has improved, the AC feels that it can be improved further. For example, as pointed out by reviewers, different parts of the model are trained using different losses and / or are pre-trained. It would be worth clarifying that. It might help if the authors include a pseudocode / algorithm block to the final version of the paper.
train
[ "BJbSJYcgG", "Byum0OYlG", "BJZ2B0agf", "B16XOpsXf", "Sk5-cTImG", "B1D0OaIQz", "B1oBwTIXM", "BJkB-Sczf", "ryA6vo2gf", "HktVdZdlG", "rkVJuH4ez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public" ]
[ "The authors propose a method for performing transfer learning and domain adaptation via a clustering approach. The primary contribution is the introduction of a Learnable Clustering Objective (LCO) that is trained on an auxiliary set of labeled data to correctly identify whether pairs of data belong to the same class. Once the LCO is trained, it is applied to the unlabeled target data and effectively serves to provide \"soft labels\" for whether or not pairs of target data belong to the same class. A separate model can then be trained to assign target data to clusters while satisfying these soft labels, thereby ensuring that clusters are made up of similar data points. \n\nThe proposed LCO is novel and seems sound, serving as a way to transfer the general knowledge of what a cluster is without requiring advance knowledge of the specific clusters of interest. The authors also demonstrate a variety of extensions, such as how to handle the case when the number of target categories is unknown, as well as how the model can make use of labeled source data in the setting where the source and target share the same task.\n\nThe way the method is presented is quite confusing, and required many more reads than normal to understand exactly what is going on. To point out one such problem point, Section 4 introduces f, a network that classifies each data instance into one of k clusters. However, f seems to be mentioned only in a few times by name, despite seeming like a crucial part of the method. Explaining how f is used to construct the CCN could help in clarifying exactly what role f plays in the final model. Likewise, the introduction of G during the explanation of the LCO is rather abrupt, and the intuition of what purpose G serves and why it must be learned from data is unclear. Additionally, because G is introduced alongside the LCO, I was initially misled into understanding was that G was optimized to minimize the LCO. Further text explaining intuitively what G accomplishes (soft labels transferred from the auxiliary dataset to the target dataset) and perhaps a general diagram of what portions of the model are trained on what datasets (G is trained on A, CCN is trained on T and optionally S') would serve the method section greatly and provide a better overview of how the model works.\n\nThe experimental evaluation is very thorough, spanning a variety of tasks and settings. Strong results in multiple settings indicate that the proposed method is effective and generalizable. Further details are provided in a very comprehensive appendix, which provides a mix of discussion and analysis of the provided results. It would be nice to see some examples of the types of predictions and mistakes the model makes to further develop an intuition for how the model works. I'm also curious how well the model works if, you do not make use of the labeled source data in the cross-domain setting, thereby mimicking the cross-task setup.\n\nAt times, the experimental details are a little unclear. Consistent use of the A, T, and S' dataset abbreviations would help. Also, the results section seems to switch off between calling the method CCN and LCO interchangeably. Finally, a few of the experimental settings differ from their baselines in nontrivial ways. For the Office experiment, the LCO appears to be trained on ImageNet data. 
While this seems similar in nature to initializing from a network pre-trained on ImageNet, it's worth noting that this requires one to have the entire ImageNet dataset on hand when training such a model, as opposed to other baselines which merely initialize weights and then fine-tune exclusively on the Office data. Similarly, the evaluation on SVHN-MNIST makes use of auxiliary Omniglot data, which makes the results hard to compare to the existing literature, since they generally do not use additional training data in this setting. In addition to the existing comparison, perhaps the authors can also validate a variant in which the auxiliary data is also drawn from the source so as to serve as a more direct comparison to the existing literature.\n\nOverall, the paper seems to have both a novel contribution and strong technical merit. However, the presentation of the method is lacking, and makes it unnecessarily difficult to understand how the model is composed of its parts and how it is trained. I think a more careful presentation of the intuition behind the method and more consistent use of notation would greatly improve the quality of this submission.\n\n=========================\nUpdate after author rebuttal:\n=========================\nI have read the author's response and have looked at the changes to the manuscript. I am satisfied with the improvements to the paper and have changed my review to 'accept'. ", "(Summary)\nThis paper tackles the cross-task and cross-domain transfer and adaptation problems. The authors propose learning to output a probability distribution over k-clusters and designs a loss function which encourages the distributions from the similar pairs of data to be close (in KL divergence) and the distributions from dissimilar pairs of data to be farther apart (in KL divergence). What's similar vs dissimilar is trained with a binary classifier.\n\n(Pros)\n1. The citations and related works cover fairly comprehensive and up-to-date literatures on domain adaptation and transfer learning.\n2. Learning to output the k class membership probability and the loss in eqn 5 seems novel.\n\n(Cons)\n1. The authors overclaim to be state of the art. For example, table 2 doesn't compare against two recent methods which report results exactly on the same dataset. I checked the numbers in table 2 and the numbers aren't on par with the recent methods. 1) Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks, Bousmalis et al. CVPR17, and 2) Learning Transferrable Representations for Unsupervised Domain Adaptation, Sener et al. NIPS16. Authors selectively cite and compare Sener et al. only in SVHN-MNIST experiment in sec 5.2.3 but not in the Office-31 experiments in sec 5.2.2.\n2. There are some typos in the related works section and the inferece procedure isn't clearly explained. Perhaps the authors can clear this up in the text after sec 4.3.\n\n(Assessment)\nBorderline. Refer to the Cons section above.", "pros:\nThis is a great paper - I enjoyed reading it. The authors lay down a general method for addressing various transfer learning problems: transferring across domains and tasks and in a unsupervised fashion. The paper is clearly written and easy to understand. Even though the method combines the previous general learning frameworks, the proposed algorithm for LEARNABLE CLUSTERING OBJECTIVE (LCO) is novel, and fits very well in this framework. 
Experimental evaluation is performed on several benchmark datasets - the proposed approach outperforms state-of-the-art for specific tasks in most cases. \n\ncons/suggestions: \n- the authors should discuss in more detail the limitations of their approach: it is clear that when there is a high discrepancy between source and target domains, that the similarity prediction network can fail. How to deal with these cases, or better, how to detect these before deploying this method?\n- the pair-wise similarity prediction network can become very dense: how to deal with extreme cases?", "The paper is updated. We added part of the discussions raised by AnonReviewer3 and enhanced the clarity suggested by AnonReviewer2 and AnonReviewer1. We greatly appreciate the contribution of reviewers.", "Thank you for the detailed comments. We would like to respond to the cons you mentioned.\n\n1. First, we discuss several points about the cross domain transfer experiment. We separate them in the following:\n\n(1) Thank you so much for referring us to Bousmalis’s CVPR work. It is a great work and we added the citation in our updated version. However, that paper doesn’t include the experiments on Office-31 nor SVHN-MNIST. Could you please share the mentioned results with us?\n\n(2) Sener’s paper is another great work. However, to make the numbers be comparable, the backbone networks (e.g., AlexNet, VGG, ResNet…), training configurations, and image preprocessing must be the same. Our table 2 for Office-31 makes sure the setup is comparable. Sener’s work has a different setup and has not released the code yet. Therefore a fair comparison on Office-31 dataset is not viable at this date. For the SVHN-to-MNIST experiment, we use another way to compare the results. In table 10, it shows the relative performance gain against the baselines (source-only) of each original paper. In such way we don’t need the code of the referred approaches be available to make the absolute numbers comparable.\n\n(3) Based on above explanation, our statement about the performance achievement is still valid.\n\n(4) We would like to heavily emphasize that in the context of domain adaptation, our work should be considered as a complementary strategy to current mainstream domain adaptation approaches since the proposed LCO does not minimize the domain discrepancy. Instead of beating the benchmark, the main purpose of our paper is showing that transferring more information is an effective strategy for transfer learning across both tasks and domains. In fact, we show improved results using one of the domain adaptation methods (DANN) but we anticipate that incorporating later advances in domain adaptation using discrepancy losses will improve this. Furthermore, the performance of the framework is largely dependent on the accuracy of the learned similarity prediction. We use a naive implementation of a similarity prediction network (SPN) to obtain the presented numbers. The performance can be improved further when a more powerful similarity learning algorithm is available.\n\n2. The inference procedure for cross task transfer applies forward propagation on the network in figure 4 and uses the outputs after the cluster assignment layer. For cross domain transfer, the network is in figure 5 and the inference outputs is from the classification layer. We included the description in the updated version. Thank you so much for the feedback to improve the clarity.\n", "We greatly appreciate your constructive suggestions for enhancing the clarity. 
The suggestions make sense and can be done with minor refinement. We adopt them in our updated version. Thank you so much for your contribution.\n\nThe experimental settings you mentioned are also very interesting to us. Although we believe the current experimental settings serve well the purpose of supporting the main claim that transferring predictive similarity is an effective and generic way of transfer learning, the settings you described will help exploring other dimensions of the framework. For example, learning the similarity with only limited semantic categories, i.e., only use T’ instead of the large A. We would like to include the aspects you mentioned in a future work.\n\nFor the discussion about the Office-31 experiment, we are pleased to explain why the ImageNet data (or A) doesn’t need to be presented during the training with T. The community uses the weights pretrained with ImageNet as a generic initialization due to its training on a large number of categories. We leverage a similar idea and argue that a semantic similarity learned with ImageNet-scale probably generalize well to unseen classes. Our cross-task ImageNet experiment shows support to this idea. Thereby we augment the transferring scheme from only transferring the weights to transferring both weights and the learned similarity prediction. In other words, once the similarity prediction network is learned on ImageNet, it can be applied directly to other image classification datasets without access to ImageNet dataset, just like how we use the pre-trained weights (features).\n", "Thank you for your comments and for recognizing the work. We are pleased to have a discussion of the limitations here and we added part of it into the paper.\n\nAs you pointed out, the performance of the similarity prediction is crucial, and the amount of performance gain on the target task is proportional to the accuracy of the similarity. One idea for enhancement is applying domain adaptation to learn the similarity prediction network (SPN). Any existing adaptation method can be used to train the SPN; as long as that adaptation method can deal with the high discrepancy between source and target, our framework will directly benefit from improvements in the learned SPN. With our proposed learning framework, the meta-knowledge carried by the SPN can then be transferred across either tasks or domains.\n\nAn extreme case involving the density of pairs is when there is a large number of categories in the target dataset. If there are 100 categories and we only randomly sample 32 instances for a mini-batch, there could be no similar pairs in a mini-batch since the instances are all from different categories. The LCO might not work well in this case since it has a form of contrastive loss. There are two ways to address this. The first is enlarge the mini-batch size, so that the number of sampled similar pairs increases. This method is limited by the memory size. The second way is to obtain dense similarity predictions offline. Then a mini-batch is sampled based on the pre-calculated dense similarity to ensure similar pairs are presented. \n", "Yes, figure 5 is trained end-to-end, except G.\nThe applicability of LCO is decided by its learnable part (G). In our experiment (appendix D), if G can perform better than a random guess, the clustering could benefit from it. Therefore the limitation would be how to learn a G, so its prediction of similarity is better than a random guess. 
If the datasets have high domain discrepancy, learn the G with domain adaptation strategy will be a good idea.", "Did you train the model depicted in Figure 5 end-to-end including backbone with classification model (except for G)?\nWhat are your thoughts on the applicability of LCO for cross-domain transfer in fields other than vision and language modelling? ", "Thank you very much for your interest. The responses of each point are in below:\na) It dependents on how hard the dataset is. In our experiments, the backbone networks and G are randomly initialized for the experiments on Omniglot and MNIST, and they perform well. For the experiments on Office-31 and the subsets of ImageNet, we do initialize the backbones with pre-training. When dealing with real-world photos, it is a common practice of pre-training, especially when the target dataset is small, e.g., Office-31. The general suggestion is: If the dataset needs a pre-trained network to help it performs well on classification, then it would be better also to use a pre-trained network in our approach.\nb) We believe the fixed value 2 is sufficient for most of the case. Our experiments involve the datasets of different complexity (e.g. MNIST vs ImageNet), unbalanced dataset (e.g. Office-31), and the varied number of categories (e.g. Omniglot alphabets). It shows the same setting performs well on the diverse conditions. In practice, we do see sometimes the performance could be improved by setting a larger margin (e.g. 2~5), but that extra gain seems dataset-dependent. Therefore we use the fixed value 2 as a conservative but universal setting.\nc) As you mentioned, it is for making the distance metric symmetric. Using only one part will introduce a hard question: Which one should be chosen? I have no clear answer for this. But from the implementation aspect, the symmetric form has an efficient vectorization thus it adds neglectable computational time compared to using only one part.", "Great work! Thank you for your contribution and I have three questions.\na) Do you have any suggestion for situations when a pre-trained backbone network is not available, it seems very important for getting good results. As far as I understand, training backbone end-to-end in the proposed solution would not be easy due to G's dependence over it. \nb) What range of values for sigma do you recommend? In paper you used fixed value i.e. 2 for all experiments. \nc) In actual implementation of equation 1 and 3, do you add terms D_KL(P* || Q) + D_KL(Q* || P) (or 2 x D_KL(P* || Q) as they are symmetric or is it fine to just optimize one e.g. D_KL(P* || Q)?\n\n" ]
[ 7, 5, 9, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByRWCqvT-", "iclr_2018_ByRWCqvT-", "iclr_2018_ByRWCqvT-", "iclr_2018_ByRWCqvT-", "Byum0OYlG", "BJbSJYcgG", "BJZ2B0agf", "ryA6vo2gf", "HktVdZdlG", "rkVJuH4ez", "iclr_2018_ByRWCqvT-" ]
iclr_2018_H1T2hmZAb
Deep Complex Networks
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization and complex weight initialization strategies for complex-valued neural nets, and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset, and on speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks.
accepted-poster-papers
The paper received mostly positive comments from experts. To summarize: Pros: -- The paper provides complex counterparts for typical architectures / optimization strategies used by real-valued networks. Cons: -- Although the authors include plots explaining how nonlinearities transform phase, the intuition about how phase gets processed can be improved. -- Improving evaluations: Wisdom et al. compute the log magnitude; real-valued networks may not be suited for computing real / complex numbers which have a large dynamic range, like complex spectra. So please compare performance by estimating the magnitude as in Wisdom et al. -- Please add the computational cost, in terms of the number of multiplies and adds, to the final version of the paper. I am recommending that the paper be accepted based on these reviews.
train
[ "r1iYihLEf", "SyJZuXjlG", "rJH_dHjeG", "BJ8VRRhgM", "H1rRov6mz", "HJUShwpQf", "HJC5jDTmG", "rJkJTEcXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public" ]
[ "Unfortunately I'm not familiar with state of the art in music transcription.\n\nFrom description it sounds that test set is quite small (3 melodies). For a small test set, various hyper-parameters such as model architecture, learning rate schedule and choice of optimization algorithm are expected to have a strong impact. There's a number of hyper-parameters in the optimization, how were they chosen? \n\nIt is not clear that the improvement is due to using complex numbers, rather than a particular choice of architecture/training procedure. This is the danger of using small/unpopular dataset -- improvement to state of the art may be due to chance or other uninteresting reasons.", "This paper defines building blocks for complex-valued convolutional neural networks: complex convolutions, complex batch normalisation, several variants of the ReLU nonlinearity for complex inputs, and an initialisation strategy. The writing is clear, concise and easy to follow.\n\nAn important argument in favour of using complex-valued networks is said to be the propagation of phase information. However, I feel that the observation that CReLU works best out of the 3 proposed alternatives contradicts this somewhat. CReLU simply applies ReLU component-wise to the real and imaginary parts, which has an effect on the phase information that is hard to conceptualise. It definitely does not preserve phase, like modReLU would.\n\nThis makes me wonder whether the \"complex numbers\" paradigm is applied meaningfully here, or whether this is just an arbitrary way of doing some parameter sharing in convnets that happens to work reasonably well (note that even completely random parameter tying can work well, as shown in \"Compressing neural networks with the hashing trick\" by Chen et al.). Some more insight into how phase information is used, what it represents and how it is propagated through the network would help to make sense of this.\n\nThe image recognition results are mostly inconclusive, which makes it hard to assess the benefit of this approach. The improved performance on the audio tasks seems significant, but how the complex nature of the networks helps achieve this is not really demonstrated. It is unclear how the phase information in the input waveform is transformed into the phase of the complex activations in the network (because I think it is implied that this is what happens). This connection is a bit vague. Once again, a more in-depth analysis of this phase behavior would be very welcome.\n\nI'm on the fence about this work: I like the ideas and they are explained well, but I'm missing some insight into why and how all of this is actually helping to improve performance (especially w.r.t. how phase information is used).\n\n\nComments:\n\n- The related work section is comprehensive but a bit unstructured, with each new paragraph seemingly describing a completely different type of work. Maybe some subsection titles would help make it feel a bit more cohesive.\n\n- page 3: \"(cite a couple of them)\" should be replaced by some actual references :)\n\n- Although care is taken to ensure that the complex and real-valued networks that are compared in the experiments have roughly the same number of parameters, doesn't the complex version always require more computation on account of there being more filters in each layer? It would be nice to discuss computational cost as well.\n\n\nREVISION: I have decided to raise my rating from 5 to 7 as I feel that the authors have adequately addressed many of my comments. 
In particular, I really appreciated the additional appendix sections to clarify what actually happens as the phase information is propagated through the network.\n\nRegarding the CIFAR results, I may have overlooked it, but I think it would be good to state even more clearly that these experiments constitute a sanity check, as both reviewer 1 and I were seemingly unaware of this. With this in mind, it is of course completely fine that the results are not better than for real-valued networks.\n", "The paper presents an extensive framework for complex-valued neural networks. Related literature suggests a variety of motivations for complex-valued neural networks: biological evidence, richer representational capacity, easier optimization, faster learning, noise-robust memory retrieval mechanisms and more. \n\nThe contribution of the current work does not lie in presenting significantly superior results compared to the traditional real-valued neural networks, but rather in developing an extensive framework for applying and conducting research with complex-valued neural networks. Indeed, the most standard work nowadays with real-valued neural networks depends on a variety of already well-established techniques for weight initialization, regularization, activation functions, convolutions, etc. In this work, the complex equivalents of many of these basic tools are developed, such as a number of complex activation functions, complex batch normalization, complex convolution, a discussion of complex differentiability, strategies for complex weight initialization, and a complex equivalent of a residual neural network. \n\nEmpirical results show that the new complex-flavored neural networks achieve generally comparable performance to their real-valued counterparts on a variety of different tasks. Then again, the major contribution of this work is not advancing the state-of-the-art on many benchmark tasks, but constructing a solid framework that will enable stable and solid application and research of these well-motivated models. \n", "The authors present complex-valued analogues of the real-valued convolution, ReLU and batch normalization functions. Their \"related work\" section brings up uses of complex-valued computation such as discrete Fourier transforms and Holographic Reduced Representations. However, their applications don't seem to connect to any of those uses and simply reimplement existing real-valued networks as complex-valued.\n\nTheir contributions are:\n\n1. Formulate complex-valued convolution\n2. Formulate two complex-valued alternatives to ReLU and compare them\n3. Formulate complex batch normalization as a \"whitening\" operation on the complex domain\n4. Formulate a complex analogue of the Glorot weight normalization scheme\n\nSince any complex-valued computation can be done with real-valued arithmetic, switching to complex arithmetic needs a compelling use-case. For instance, some existing algorithm may be formulated in terms of complex values, and reformulating it in terms of real-valued computation may be awkward. However, the cases the authors address, which are training batch-norm ReLU networks on standard datasets, are already formulated in terms of real-valued arithmetic. 
Switching these networks to complex values doesn't seem to bring any benefit, either in simplicity or in classification performance.", "We appreciate your encouraging comments, and we thank you for the review -- and for acknowledging the contribution we have made by creating “a solid framework that will enable stable and solid application” [of models based on deep complex networks].\n\nWe want to inform the reviewer that we have added the following sections to the paper:\n\nSection 6.3, which details the complex chain rule for the reader.\nSection 6.4, which contains the details of the complex LSTM.\nSection 6.5, which illustrates the utility of our complex batch normalization procedure. \nWe have also added a discussion of the phase information encoding for each of the activation functions tested in our work. You can find the latter in Section 6.6.\n", "We thank the reviewer for the useful feedback. We have considered the comments and added a discussion on Phase Encoding in the appendix of the revised manuscript. We have illustrated the difference in encoding between the different activation functions tested for the deep complex network and shown that CReLU has more flexibility in discriminating phase information. We also show that for all the tested activations, phase information is not necessarily preserved but, depending on where the complex representation lies in the complex plane, the latter might be either preserved, altered or discarded.\n\n“CReLU simply applies ReLU component-wise to the real and imaginary parts, which has an effect on the phase information that is hard to conceptualise. It definitely does not preserve phase, like modReLU would.”\n\n“… Some more insight into how phase information is used, what it represents and how it is propagated through the network would help to make sense of this.”\n\nIndeed, none of the ReLU-based activations that are presented preserve phase completely. However, the nature of a ReLU is that it operates as an identity function in certain regions of the input. Therefore phase is preserved exactly in some regions of the input space and discriminated in others. \n\nFor example, in Section 3.4 we discuss the properties of the modReLU, CReLU and zReLU activation functions. We have added Section 6.6, which discusses the ways in which phase is preserved and manipulated in each of these cases. \n\nImportantly, in cases when activation functions do not preserve phase information, phase information can still influence subsequent computation. For example, phase information may be preserved explicitly through a number of layers when activation functions are operating in their linear regime, prior to a layer further up in the network where the phase of an input lies in a zero region of an activation function. In audio classification tasks, one can easily imagine how phase information could be used to influence classification decisions, but this does not mean that phase must be preserved all the way from the input to the final classification output.\n\n“The image recognition results are mostly inconclusive, which makes it hard to assess the benefit of this approach.”\n\nAs mentioned to reviewer 3, we feel it is important to underscore that our experiments on CIFAR with complex ResNets are included to demonstrate that our implementation is correct and that it yields results that are comparable to state-of-the-art real architectures on a standard, well-known vision benchmark. 
This type of experiment is important because a naively implemented complex variation of a ResNet is *not* stable. Our complex batch norm formulation is essential to making deep complex networks work and is therefore an important contribution. (See Section 6.5.)\n\n“The improved performance on the audio tasks seems significant, but how the complex nature of the networks helps achieve this is not really demonstrated. It is unclear how the phase information in the input waveform is transformed into the phase of the complex activations in the network (because I think it is implied that this is what happens). This connection is a bit vague. Once again, a more in-depth analysis of this phase behavior would be very welcome.”\n\nConsider also our experiments in Table 4, which use tanh and sigmoid activations. These activation functions are bijective (invertible) functions and therefore they allow phase information to pass through the network with negligible loss of information. Our experiments in Table 4 involve predicting spectrograms, and therefore the utility of preserving phase information should be easy for the reader to imagine. Our experiments support this intuition, as we see clear performance improvements with these phase-information-preserving models compared to similar real-computation-only networks.\n\n“Although care is taken to ensure that the complex and real-valued networks that are compared in the experiments have roughly the same number of parameters, doesn't the complex version always require more computation on account of there being more filters in each layer? It would be nice to discuss computational cost as well.”\n\nIn terms of computational complexity, the convolutional operation and the complex batchnorm are of the same order as their real counterparts. However, as the complex convolution is 4 times more expensive than its real counterpart and as the complex batchnorm is not implemented in cudnn, this makes a difference in terms of running time.\n", "We thank reviewer 3 for the useful feedback. We have clarified some important points about our work below -- in particular, some of our experiments are included to demonstrate the correctness of our implementation on a well-known benchmark and the importance of the building blocks for using complex numbers. \nWe also feel it is important to draw special attention to the fact that we do indeed have clear quantitative improvements in performance using complex-valued neural networks compared to similar purely real-valued networks on a well-defined audio task. This seems to have been overlooked in the initial review.\n\n“Their ‘related work’ section brings up uses of complex-valued computation such as discrete Fourier transforms and Holographic Reduced Representations. However, their applications don't seem to connect to any of those uses”\n\nThere is not a lot of work that has considered the use of complex representations in the context of deep learning. This is why we have cited and commented upon interesting examples of what people have used complex representations for in the past. Some examples of prior work include the work on holographic representations and spectral pooling, which use the (complex) Fourier Transform. 
We therefore discuss them as related work, i.e., holographic representations (and Associative LSTMs) and Fourier Transform methods (with their applications to Spectral Representations for ConvNets).\n\n“Since any complex-valued computation can be done with real-valued arithmetic, switching to complex arithmetic needs a compelling use-case. For instance, some existing algorithm may be formulated in terms of complex values, and reformulating it in terms of real-valued computation may be awkward. However, the cases the authors address, which are training batch-norm ReLU networks on standard datasets, are already formulated in terms of real-valued arithmetic.”\n\nAs mentioned to reviewer 1, we feel it is important to underscore that our experiments on CIFAR with complex ResNets are included to demonstrate that our implementation is correct and that it yields results that are comparable to state-of-the-art real-valued computation architectures on a standard, well-known vision benchmark. This type of experiment is important because a naively implemented complex variation of a ResNet is *not* stable. Our complex batch-norm formulation is essential to making deep complex networks work and is therefore an important contribution. (See Section 6.5.)\n\nIn light of Tables 3 and 4, we respectfully disagree with the following statement:\n“Switching these networks to complex values doesn't seem to bring any benefit, either in simplicity or in classification performance.”\n\nYes, for our ResNet experiments on CIFAR the complex network does not show a gain in performance; however, as we discussed above, that was not the point of presenting those experiments. In contrast, our audio experiments in Tables 3 and 4 demonstrate the utility of complex VGG-style architectures and complex convolutional LSTM architectures through clear performance gains. Comparing similarly structured real and complex networks, one sees increased performance for the complex variant in both of these sets of experiments. This is in line with the interpretation that for audio signals the use of complex neural networks allows information such as phase to be represented and manipulated within layers (implicitly in the case of rectangular complex numbers), and this yields higher performance compared to similarly structured real models.\n", "1. In deep learning, it is typical for researchers to seek structures that can represent more information in weight space.\n\n2. Real-valued convolutional neural nets can trivially store magnitude information.\n\n3. Complex-valued convolutional neural nets can trivially store phase information, in addition to magnitude information. (As seen in the Deep Complex Networks paper.)\n\n4. So seeking methods for representing more information in weight space is a sensible and typical goal in deep learning, because the richer your weight space, the better the hypotheses or answers produced by neural net models!" ]
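As a supplement to the computational-cost discussion in the responses above: a complex convolution can be expressed with real-valued convolutions, following (A + iB) * (x + iy) = (A*x - B*y) + i(A*y + B*x), which is consistent with the authors' remark that it costs roughly four real convolutions. The sketch below is a minimal PyTorch illustration of that arithmetic; the module and variable names are assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch: complex convolution built from real convolutions,
# using (A + iB) * (x + iy) = (A*x - B*y) + i(A*y + B*x).
# Names (ComplexConv2d, conv_re, conv_im) are illustrative only.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # A
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # B

    def forward(self, x_re, x_im):
        # Real part: A*x - B*y; imaginary part: A*y + B*x
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_re(x_im) + self.conv_im(x_re)
        return out_re, out_im
```

The four conv calls here make the roughly 4x cost of the complex convolution, relative to one real convolution of the same shape, easy to see.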
[ -1, 7, 8, 4, -1, -1, -1, -1 ]
[ -1, 4, 4, 4, -1, -1, -1, -1 ]
[ "HJC5jDTmG", "iclr_2018_H1T2hmZAb", "iclr_2018_H1T2hmZAb", "iclr_2018_H1T2hmZAb", "rJH_dHjeG", "SyJZuXjlG", "BJ8VRRhgM", "iclr_2018_H1T2hmZAb" ]
iclr_2018_HkwBEMWCZ
Skip Connections Eliminate Singularities
Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.
accepted-poster-papers
pros: * novel explanation: skip connections <--> singularities * thorough analysis * significant topic in understanding deep nets cons: * more rigorous theoretical analysis would be better overall, the committee feels this paper would be interesting to have at ICLR.
train
[ "rJkUMYQgz", "SJWa0g9xM", "HJiCEsseG", "rytZt1jXM", "B1W8I1jmf", "B11hBJsXM", "H1JlpxZZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "The authors show that two types of singularities impede learning in deep neural networks: elimination singularities (where a unit is effectively shut off by a loss of input or output weights, or by an overly-strong negative bias), and overlap singularities, where two or more units have very similar input or output weights. They then demonstrate that skip connections can reduce the prevalence of these singularities, and thus speed up learning.\n\nThe analysis is thorough: the authors explore alternative methods of reducing the singularities, and explore the skip connection properties that more strongly reduce the singularities, and make observations consistent with their overarching claims.\n\nI have no major criticisms.\n\nOne suggestion for future work would be to provide a procedure for users to tailor their skip connection matrices to maximize learning speed and efficacy. The authors could then use this procedure to make highly trainable networks, and show that on test (not training) data, the resultant network leads to high performance.", "Paper examines the use of skip connections (including residual layers) in deep networks as a way of alleviating two perceived difficulties in training: 1) when a neuron does not contain any information, and 2) when two neurons in a layer compute the same function. Both of these cases lead to singularities in the Hessian matrix, and this work includes a number of experiments showing the effect of skip connections on the Hessian during training. \n\nThis is a significant and timely topic. While I may not be the best one to judge the originality of this work, I appreciated how the authors presented clear and concise arguments with experiments to back up their claims.\n\n", "This paper proposes to explain the benefits of skip connections in terms of eliminating the singularities of the loss function. The discussion is largely based on a sequence of experiments, some of which are interesting and insightful. The discussion here can be useful for other researchers. \n\nMy main concern is that the result here is purely empirical, with no concrete theoretical justification. What the experiments reveal is an empirical correlation between the Eigval index and training accuracy, which can be caused by lots of reasons (and cofounders), and does not necessarily establish a causal relation. Therefore, i found many of the discussion to be questionable. I would love to see more solid theoretical discussion to justify the hypothesis proposed in this paper.\n \nDo you have a sense how accurate is the estimation of the tail probabilities of the eigenvalues? Because the whole paper is based on the approximation of the eigval indexes, it is critical to exam the estimation is accurate enough to draw the conclusions in the paper. \n\nAll the conclusions are based on one or two datasets. Could you consider testing the result on more different datasets to verify if the results are generalizable? ", "We agree with the reviewer that a theoretically more rigorous analysis of the results presented in our paper would be desirable. However, both because the paper is already quite long and because the analysis requested by the reviewer is not straightforward even in simplified models, we would like to leave this for future work. We would also like to point out that the idea that degeneracies cause training difficulties in neural networks is not wholly without theoretical precedent. 
Although the models they deal with are highly simplified, prior work by Amari and colleagues, which we discuss in our paper, already established this connection. Similarly, as also discussed in our paper, Saxe et al. (2013) showed that randomly initialized linear networks become increasingly degenerate with depth, and they identified this as the source of the training difficulties in such deep networks.\n\nSupplementary Note 4 validates our methods for quantifying degeneracies in smaller, numerically tractable networks. The results from these smaller networks agree with the results from the larger networks presented in the main text. For these smaller networks, the mixture model slightly underestimated the fraction of degenerate eigenvalues and overestimated the fraction of negative eigenvalues. However, in both cases, there was a highly significant linear relationship between the estimated and actual fractions. This suggests that inferences drawn from the mixture model about the relative degeneracy of models should be reliable, while inferences about the exact values of the eigenvalue degeneracy or negativity of the models should be made more cautiously.\n\nAs requested, we tested the results shown in Figure 4 on two more image recognition datasets (SVHN and STL-10). The results from these new datasets are presented in supplementary Figure S7. These results are in agreement with the ones from CIFAR-100 shown in Figure 4.", " We thank the reviewers for their comments and positive feedback.\n\nSince the original submission, we have noticed that another degeneracy that could potentially be significant in training deep networks is linear dependence between hidden units. The paper is updated with a discussion and some new results regarding this degeneracy. Results in Figure 4 are replicated for two more datasets (SVHN and STL-10, supplementary Figure S7). Several more minor changes are also made in the revision to improve the presentation.", "Yes, the orthogonal skip connectivity proposed in the preprint mentioned is the same as the one we propose in our submission. We note that this preprint appeared several months after our preprint.", "This paper presents a good analysis of skip connections for training deep NNs. I noticed that the paper, \"Orthogonal and Idempotent Transformations for Learning Deep Neural Networks\" by Jingdong Wang, Yajie Xing, Kexin Zhang, Cha Zhang, provided two methods for designing skip connections, which might be related to \"orthogonality\" mentioned in this paper. Is there any possible discussion?" ]
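To make the overlap-singularity discussion in this record concrete: below is a tiny numpy sketch of how two units with identical incoming weights become indistinguishable in a plain layer, and how an identity skip connection breaks that permutation symmetry. This is an illustrative toy assuming a single ReLU layer; it is not the paper's experimental code.

```python
# Toy illustration of an overlap singularity and how an identity skip
# connection breaks it. All names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(4, 4))
W[1] = W[0]  # two units with identical incoming weights (overlap)

relu = lambda z: np.maximum(z, 0.0)

h_plain = relu(W @ x)
h_skip = relu(W @ x) + x  # identity skip connection

# Without the skip, units 0 and 1 are interchangeable (permutable);
# with the skip, each unit also carries its own input coordinate.
print(h_plain[0] == h_plain[1])  # True: degenerate, overlapping units
print(h_skip[0] == h_skip[1])    # False (generically): symmetry broken
```

The same construction shows why the "ghosts" of these singularities matter at initialization: once units overlap, gradient descent on the plain layer cannot tell them apart, while the skip term keeps their outputs distinct.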
[ 8, 8, 6, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HkwBEMWCZ", "iclr_2018_HkwBEMWCZ", "iclr_2018_HkwBEMWCZ", "HJiCEsseG", "iclr_2018_HkwBEMWCZ", "H1JlpxZZf", "iclr_2018_HkwBEMWCZ" ]
iclr_2018_H1cWzoxA-
Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling
Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN.
accepted-poster-papers
The proposed Bi-BloSAN is a two-level block SAN, which has both parallelization efficiency and memory efficiency. The study is thoroughly conducted and well presented.
train
[ "SJz6VRFlG", "rkcETx9lf", "ryOYfeaef", "BJjyKg-mM", "SyboUpHzz", "rkzLzZpbM", "ryi_ebTWM", "r1Fr-WT-f", "rk9hKgTZz", "S1Y3_lTZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "Pros: \nThe paper proposes a “bi-directional block self-attention network (Bi-BloSAN)” for sequence encoding, which inherits the advantages of multi-head (Vaswani et al., 2017) and DiSAN (Shen et al., 2017) network but is claimed to be more memory-efficient. The paper is written clearly and is easy to follow. The source code is released for duplicability. The main originality is using block (or hierarchical) structures; i.e., the proposed models split the an entire sequence into blocks, apply an intra-block SAN to each block for modeling local context, and then apply an inter-block SAN to the output for all blocks to capture long-range dependency. The proposed model was tested on nine benchmarks and achieve good efficiency-memory trade-off. \n\nCons:\n- Methodology of the paper is very incremental compared with previous models. \n- Many of the baselines listed in the paper are not competitive; e.g., for SNLI, state-of-the-art results are not included in the paper. \n- The paper argues advantages of the proposed models over CNN by assuming the latter only captures local dependency, which, however, is not supported by discussion on or comparison with hierarchical CNN.\n- The block splitting (as detailed in appendix) is rather arbitrary in terms of that it potentially divides coherent language segments apart. This is unnatural, e.g., compared with alternatives such as using linguistic segments as blocks.\n- The main originality of paper is the block style. However, the paper doesn’t analyze how and why the block brings improvement. \n-If we remove intra-block self-attention (but only keep token-level self-attention), whether the performance will be significantly worse?\n", "This high-quality paper tackles the quadratic dependency of memory on sequence length in attention-based models, and presents strong empirical results across multiple evaluation tasks. The approach is basically to apply self-attention at two levels, such that each level only has a small, fixed number of items, thereby limiting the memory requirement while having negligible impact on speed. It captures local information into so-called blocks using self-attention, and then applies a second level of self-attention over the blocks themselves.\n\nThe paper is well organized and clearly written, modulo minor language mistakes that should be easy to fix with further proof-reading. The contextualization of the method relative to CNNs/RNNs/Transformers is good, and the beneficial trade-offs between memory, runtime and accuracy are thoroughly investigated, and they're compelling.\n\nI am curious how the story would look if one tried to push beyond two levels...? For example, how effective might a further inter-sentence attention level be for obtaining representations for long documents? \n\nMinor points:\n- Text between Eq 4 & 5: W^{(1)} appears twice; one instance should probably be W^{(2)}.\n- Multiple locations, e.g. S4.1: for NLI, the word is *premise*, not *promise*.\n- Missing word in first sentence of S4.1: ... reason __ the ...", "This paper introduces bi-directional block self-attention model (Bi-BioSAN) as a general-purpose encoder for sequence modeling tasks in NLP. The experiments include tasks like natural language inference, reading comprehension (SquAD), semantic relatedness and sentence classifications. 
The new model shows decent performance when compared with Bi-LSTM, CNN and other baselines while running at a reasonably fast speed.\n\nThe advantage of this model is that we can use little memory (as in RNNs) and enjoy parallelizable computation (as in SANs), and achieve similar (or better) performance.\n\nWhile I do appreciate the solid experiment section, I don't think the model itself is a sufficient contribution for a publication at ICLR. First, there is not much innovation in the model architecture. The idea of the Bi-BloSAN model is simply to split the sentence into blocks and compute self-attention for each of them, and then use the same mechanism as a pooling operation followed by a fusion level. I think this counts more as careful engineering of the SAN model than as a main innovation. Second, the model introduces many more parameters. In the experiments, it can easily use 2 times as many parameters as the commonly used encoders. What if we use the same amount of parameters for Bi-LSTM encoders? Will the gap between the new model and the commonly used ones be smaller?\n\n====\n\nI appreciate the answers the authors added and I change the score to 6.", "Dear all reviewers, we upload a revision of this paper that differs from the previous one in that\n1) As suggested by AnonReviewer3, we implemented the Hierarchical CNN (called Hrchy-CNN in the paper) as a baseline, and we then applied this model to the SNLI and SICK datasets, which showed that the proposed model, Bi-BloSAN, still outperforms the Hierarchical CNN by a large margin; \n2) We fixed some typos. ", "To test the performance of hierarchical CNN for context fusion, we implemented it on the SNLI dataset. In particular, we used 3-layer 300D CNNs with kernel length 5 (i.e., using n-grams of n=5). Following [1], we also applied \"Gated Linear Units (GLU)\" [2] and residual connections [3] to the hierarchical CNN. We tuned the keep probability of dropout between 0.65 and 0.85 with step-size 0.05. The code for this hierarchical CNN can be found at https://github.com/code4review/BiBloSA/blob/master/context_fusion/hierarchical_cnn.py\n\nThis model has 3.4M parameters. It spends 343s per training epoch and 2.9s for inference on the dev set. Its test accuracy is 83.92% (with dev accuracy 84.15% and train accuracy 91.28%), which slightly outperforms the CNNs with multi-window [4] shown in our paper, but is still worse than other baselines and Bi-BloSAN. We will add these results to the revision.\n\n[1] Gehring, Jonas, et al. \"Convolutional Sequence to Sequence Learning.\" arXiv preprint arXiv:1705.03122 (2017).\n[2] Dauphin, Yann N., et al. \"Language modeling with gated convolutional networks.\" arXiv preprint arXiv:1612.08083 (2016).\n[3] He, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\n[4] Kim, Yoon. \"Convolutional neural networks for sentence classification.\" arXiv preprint arXiv:1408.5882 (2014).", "Dear all reviewers, we upload a revision of this paper that differs from the previous one in that\n1) We found the multi-head attention is very sensitive to the keep probability of dropout due to \"Attention Dropout\", so we tuned it in the interval [0.70:0.05:0.90], resulting in test accuracy on SNLI increasing from 83.3% to 84.2%.\n2) As suggested by AnonReviewer2, we decreased the number of hidden units of Bi-BloSAN from 600 to 480 on SNLI, which leads to the parameter count dropping from 4.1M to 2.8M. 
The test accuracy of this 480D Bi-BloSAN is 85.66% with dev accuracy 86.08% and train accuracy 91.68%.\n3) As suggested by AnonReviewer3, we added the discussion on hierarchical CNNs to the introduction.\n4) We corrected typos and mistakes partly pointed out by AnonReviewer4.", "==Following above==\n- Q4. The block splitting (as detailed in the appendix) is rather arbitrary, in that it potentially splits coherent language segments apart. This is unnatural, e.g., compared with alternatives such as using linguistic segments as blocks.\n\nHere are two reasons for not using linguistic segments as blocks in our model. Firstly, the property of significantly reducing memory cannot be guaranteed if using linguistic segments, because either too long or too short segments will lead to expensive memory consumption, and we cannot easily control the length of linguistic segments provided by other tools. For example, in Eq.(19), either a large or a small block length r is likely to result in large memory. Secondly, the process of obtaining linguistic segments potentially increases computation/memory cost, introduces overhead and requires a more complex implementation. In addition, although we do not use linguistic segments for block splitting, our model can still capture the dependencies between tokens from different blocks by using the block-level context fusion and feature fusion gate developed in this paper. \n\n\n- Q5. The main originality of the paper is the block style. However, the paper doesn’t analyze how and why the block brings improvement. \n\nThe block or two-layer self-attention substantially reduces the memory and computational costs required by previous self-attention mechanisms, which are proportional to the square of the sequence length. Meanwhile, it achieves competitive or better accuracy than RNNs/CNNs. We give a formal explanation of how this block idea can reduce the memory in Appendix A.\n\n\n- Q6. If we remove intra-block self-attention (but only keep token-level self-attention), will the performance be significantly worse?\n\nCompared to the 85.7% test accuracy of Bi-BloSAN on SNLI, the accuracy decreases to 85.2% if we remove the intra-block attention (keep block-level attention), whereas the accuracy decreases to 85.3% if we remove inter-block self-attention (keep token-level self-attention in blocks). Moreover, if we only use token-level self-attention, the model will be identical to the directional self-attention [2]. You can refer to the ablation study at the end of Section 4.1 for more details.\n\n\n\nReferences\n[1] Vaswani, Ashish, et al. \"Attention is all you need. CoRR abs/1706.03762.\" (2017).\n[2] Shen, Tao, et al. \"Disan: Directional self-attention network for rnn/cnn-free language understanding.\" arXiv preprint arXiv:1709.04696 (2017).\n[3] Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. \"Highway networks.\" arXiv preprint arXiv:1505.00387 (2015).\n[4] Nie, Yixin, and Mohit Bansal. \"Shortcut-stacked sentence encoders for multi-domain inference.\" arXiv preprint arXiv:1708.02312 (2017).\n[5] Jihun Choi, Kang Min Yoo and Sang-goo Lee. \"Learning to compose task-specific tree structures.\" arXiv preprint arXiv:1707.02786 (2017). \n[6] Kim, Yoon. \"Convolutional neural networks for sentence classification.\" arXiv preprint arXiv:1408.5882 (2014).\n[7] Kaiser, Łukasz, and Samy Bengio. \"Can Active Memory Replace Attention?.\" Advances in Neural Information Processing Systems. 2016.\n[8] Kalchbrenner, Nal, et al. 
\"Neural machine translation in linear time.\" arXiv preprint arXiv:1610.10099 (2016).\n[9] Gehring, Jonas, et al. \"Convolutional Sequence to Sequence Learning.\" arXiv preprint arXiv:1705.03122 (2017).", "Thank you for your elaborative comments! We discuss the Cons you pointed out one by one as follows.\n\n- Q1. Methodology of the paper is very incremental compared with previous models.\n\nYes, the idea of using block or two-level attention is simple. In fact, it is similar to the idea behind almost all the hierarchical models. However, it has never been studied on self-attentions based models, especially on attention-only models (as much as we know, Transformer [1] and DiSAN [2] are the merely two published attention-only models), for context fusion. Moreover, it solves a critical problem of previous self-attention mechanisms, i.e., expensive memory consumption, which was a burden of applying attention to long sequences and an inevitable weakness compared to popular RNN models. Hence, it is a simple idea, which leads to a simple model, but effectively solves an important problem.\n\nIn addition, given this idea, it is non-trivial to design a neural net architecture for context fusion, we still need to figure out: 1) How to split the sequence so the memory can be effectively reduced? 2) How to capture the dependency between two elements from different blocks? 3) How to produce contextual-aware representation for each element on each level? 4) How to combine the output of different levels so the information from lower level does not fade out? For example, on top of Figure 3, we duplicate the block features e_i to each element as its high-level representation, use skip (highway [3]) connections to achieve its lower level representations x_i and h_i, and then design a fusion gate to combine the three representations. This design assigns each element with both high-level and low-level representations and combine them on top of the model to produce a contextual-aware representation per input element. Without it, the two-level attention can only give us e_i, which cannot explicitly model the dependency between elements from different blocks, and cannot be used for context fusion. This method has not been used in construction of attention-based models because multi-level self-attention had not been studied before.\n\n\n- Q2. Many of the baselines listed in the paper are not competitive; e.g., for SNLI, state-of-the-art results are not included in the paper. \n\nIn the experiment on SNLI, Bi-BloSAN is only used to produce sentence encoding. For a fair comparison, we only compare it with the sentence-encoding based models listed separately on the leaderboard of SNLI. Up to ICLR submission deadline, Bi-BloSAN achieves the best test accuracy among all of them. \n\nAfter ICLR submission deadline, the leaderboard has been updated with several new methods. We copy the results of the new methods in the following.\nThe Proposed Model) 480D Bi-BloSAN\t2.8M\t85.7%\n1) 300D Residual stacked encoders[4]\t9.7M\t85.7%\n2) 600D Gumbel TreeLSTM encoders[5]\t10.0M\t86.0%\n3) 600D Residual stacked encoders[4]\t29.0M\t86.0%\nThese results show that compared to the newly updated methods, Bi-BloSAN uses significantly less parameters but achieves competitive test accuracy.\n\n\n- Q3. 
The paper argues advantages of the proposed models over CNN by assuming the latter only captures local dependency, which, however, is not supported by discussion on or comparison with hierarchical CNN.\n\nThe discussion about CNN in the current version mainly focuses on single-layer CNNs with multi-window [6], which are widely used in the NLP community, and does not say much about recent studies on hierarchical CNNs. The hierarchical CNNs in NLP, such as Extended Neural GPU [7], ByteNet [8], and ConvS2S [9], are able to model relatively long-range dependency by stacking CNNs, which can increase the number of input elements represented in a state. Nonetheless, as mentioned in [1], the number of operations (i.e. CNNs) required to relate signals between two arbitrary input positions grows with the distance between them, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions. However, a self-attention based method only requires a constant number of operations, no matter how far apart two elements are. We will add the discussion on hierarchical CNNs in the revision.
Second, the model introduces many more parameters. In the experiments, it can easily use 2 times as many parameters as the commonly used encoders. What if we use the same amount of parameters for Bi-LSTM encoders? Will the gap between the new model and the commonly used ones be smaller?\n\nAs suggested by you, we studied two cases in which Bi-LSTM and Bi-BloSAN have a similar number of parameters. The gap does not change in either case. We will add these new results to our revision. \n\n1) We increase the number of hidden units in Bi-LSTM encoders from 600 to 800. This increases the number of parameters from 2.9M to 4.8M, which is more than the 4.1M of Bi-BloSAN. We implement this 800D Bi-LSTM encoder on the SNLI dataset, which is the largest benchmark dataset used in this paper. After tuning of the hyperparameters (e.g., the dropout keep probability is increased from 0.65 to 0.80 with step 0.05 in case of overfitting), the best test accuracy is 84.95% (with dev accuracy of 85.67%).\n\n2) We decrease the number of hidden units in Bi-BloSAN from 600 to 480. This reduces the number of parameters from 4.1M to 2.8M, which is similar to that of the commonly used encoders. Interestingly, without tuning the keep probability of dropout, the test accuracy of this 480D Bi-BloSAN is 85.66% (with dev accuracy 86.08% and train accuracy 91.68%). \n\nAdditionally, a recent NLP paper [4] shows that increasing the dimension of an RNN encoder from 128D to 2048D does not result in a substantial improvement in performance (BLEU score from 21.50 to 21.86 on newstest2013 for machine translation). This is consistent with the results above. \n\n\n\nReferences\n[1] Vaswani, Ashish, et al. \"Attention is all you need. CoRR abs/1706.03762.\" (2017).\n[2] Shen, Tao, et al. \"Disan: Directional self-attention network for rnn/cnn-free language understanding.\" arXiv preprint arXiv:1709.04696 (2017).\n[3] Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. \"Highway networks.\" arXiv preprint arXiv:1505.00387 (2015).\n[4] Britz, Denny, et al. \"Massive exploration of neural machine translation architectures.\" arXiv preprint arXiv:1703.03906 (2017).", "Thank you for your strong support of our work! We will carefully fix the typos you pointed out.\n\n- Q1. I am curious how the story would look if one tried to push beyond two levels...? For example, how effective might a further inter-sentence attention level be for obtaining representations for long documents? \n\nWe have different answers to this question for sequences with different lengths.\n\nFor context fusion or embedding of single sentences (which is the main focus of this paper), a two-level self-attention is usually sufficient to reduce the memory consumption while inheriting most of the power of the original SAN in modeling contextual dependencies. Compared to multi-level attention, it preserves the local dependencies in longer subsequences and directly controls the memory utility rate, using fewer parameters and computations than a multi-level one. \n\nFor the context fusion of a document or a passage, which already has a multi-level structure (document-passages-sentences-phrases), it is worth considering multi-level self-attention to model the contextual relationship when the memory consumption needs to be small. Recently, self-attention has been applied to long text as a popular context fusion strategy in machine comprehension tasks [1,2]. 
In this task, the original self-attention requires lots of memory, and cannot be applied on its own due to the difficulty of context fusion for a long passage/document. It is more practical to use LSTM or GRU as context fusion layers and use self-attention as a complementary module capturing the distance-irrelevant dependency. But the recurrent structure of LSTM/GRU leads to inefficiency in computation. Therefore, multi-level self-attention could provide a both memory- and time-efficient solution. For example, we can design a three-level self-attention structure, which consists of intra-block intra-sentence, inter-block intra-sentence and inter-sentence self-attentions, to produce context-aware representations of tokens from a passage. Such a model can overcome the weaknesses of both RNN/CNN-based SANs (only used as a complementary module to context fusion layers) and the RNN/CNN-free SANs (whose memory consumption explodes as the text length grows).\n\n\n\nReferences\n[1] Hu, Minghao, Yuxing Peng, and Xipeng Qiu. \"Reinforced mnemonic reader for machine comprehension.\" CoRR, abs/1705.02798 (2017).\n[2] Huang, Hsin-Yuan, et al. \"FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension.\" arXiv preprint arXiv:1711.07341 (2017)." ]
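For readers trying to picture the two-level attention discussed throughout this record, here is a minimal numpy sketch of the memory argument: with block length r, intra-block attention touches O(n*r) score entries and inter-block attention O((n/r)^2), instead of the O(n^2) of full self-attention. The masks, fusion gate, and directional encodings from the paper are all omitted, so the shapes and names below are illustrative assumptions, not the released Bi-BloSAN code.

```python
# Minimal shape-level sketch of two-level (block) self-attention.
# Assumes r divides n; mean-pooling stands in for the paper's
# source2token summary of each block.
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # score matrices are (..., L, L); with blocks, L is r or n/r, never n
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

n, r, d = 512, 32, 64                   # sequence length, block length, dim
x = np.random.randn(n, d)
blocks = x.reshape(n // r, r, d)        # (n/r, r, d)

intra = attend(blocks, blocks, blocks)  # O(n*r) attention entries
summaries = intra.mean(axis=1)          # one vector per block: (n/r, d)
inter = attend(summaries, summaries, summaries)  # O((n/r)^2) entries
```

With n = 512 and r = 32, the sketch allocates 16*32*32 + 16*16 score entries rather than 512*512, which is the memory saving the authors quantify in their Eq.(19).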
[ 6, 9, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1cWzoxA-", "iclr_2018_H1cWzoxA-", "iclr_2018_H1cWzoxA-", "iclr_2018_H1cWzoxA-", "SJz6VRFlG", "iclr_2018_H1cWzoxA-", "SJz6VRFlG", "SJz6VRFlG", "ryOYfeaef", "rkcETx9lf" ]
iclr_2018_ry8dvM-R-
Routing Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning
Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time.
accepted-poster-papers
The proposed routing networks, which use RL to automatically learn the optimal network architecture, are very interesting. Solid experimental justification and comparisons. The authors also addressed reviewers' concerns on presentation clarity in their revisions.
train
[ "HyNnyzceG", "r1AHkVdgf", "Hk65p-5lf", "SJyFVJhXf", "SJkOLCs7G", "HyAwoOF7M", "HJ2i7kjzf", "BJIwfksMG", "rJgSZyozG", "ByzMZ1ozM", "BkHpeJoGf", "ry3NDBHGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper introduces a routing network for multi-task learning. The routing network consists of a router and a set of function blocks. Router makes a routing decision by either passing the input to a function block or back to the router. This network paradigm is tested on multi-task settings of MNIST, mini-imagenet and CIFAR-100 datasets.\n\nThe paper is well-organized and the goal of the paper is valuable. However, I am not very clear about how this paper improves the previous work on multi-task learning by reading the Related Work and Results sections.\n\nThe Related Work section includes many recent work, however, the comparison of this work and previous work is not clear. For example:\n\"Routing Networks share a common goal with techniques for automated selective transfer learning\nusing attention (Rajendran et al., 2017) and learning gating mechanisms between representations\n(Stollenga et al., 2014), (Misra et al., 2016), (Ruder et al., 2017). However, these techniques have\nnot been shown to scale to large numbers of routing decisions and task.\" Why couldn't these techniques scale to large numbers of routing decisions and task? How could the proposed network in this paper scale?\n\nThe result section also has no comparison with the previously published work. Is it possible to set similar experiments with the previously published material on this topic and compare the results?\n\n\n-- REVISED\n\nThank you for adding the comparisons with other work and re-writing of the paper for clarity.\nI increase my rating to 7.\n\n", "Summary:\nThe paper suggests to use a modular network with a controller which makes decisions, at each time step, regarding the next nodule to apply. This network is suggested a tool for solving multi-task scenarios, where certain modules may be shared and others may be trained independently for each task. It is proposed to learn the modules with standard back propagation and the controller with reinforcement learning techniques, mostly tabular. \n\n-\tpage 4: \nIn algorithm 2, line 6, I do not understand the reward computation. It seems that either a _{k+1} subscript index is missing for the right hand side R, or an exponent of n-k is missing on \\gamma. In the current formula, the final reward affects all decisions without a decay based on the distance between action and reward gain. This issue should be corrected or explicitly stated.\n\nThe ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again. This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing. Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage.\n\n-\tPage 5: It is not described clearly (and better: defined formally) what exactly is the state representation. It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods. If I understand correctly, the state representation used in the tabular algorithms includes only the current depth. If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output. 
\n-\tThe functional approximation versions are even less clear: again it is not clear what information is contained in the state and how it is represented. In addition, it is not clear in this case what network architecture is used for computation of the policy (PG) or value (Q-learning), and how exactly they are optimized.\n-\tThe WPL algorithm is not clear to me\no\t In algorithm box 3, what is R_k? I do not see it defined anywhere. Is it related to \hat{R}? How?\no\tIs it assumed that the actions are binary? \no\tI do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability. What is the source of asymmetry between positive and negative gradients? \n\n-\tPage 6:\no\tIt is not clear why MNIST is tested over 200 examples, when there is a much larger test set available\no\tIn MIN-MTL I do not understand the motivation for creating superclasses composed of 5 random classes each: why do we need such arbitrary and unnatural class definitions? \n\n-\tPage 7: \nThe results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: solve the multi-class classification problem using a softmax loss and a unified, non-routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer. If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization.\n\n-\tPage 8: The authors report that when training the controller with single-agent methods the policy collapses into choosing a single module for most tasks. However, this is not surprising, given that the action-based reward (whose strength is unclear) seems to promote such winner-takes-all behavior.\n\nOverall:\n-\tThe paper is highly unclear in its method presentation\no\tThere is no unified clear notation. The essential symbols (states, actions, rewards) are not formally defined, and often it is not clear even if they are integers, scalars, or vectors. In the notation that does exist, there are occasional errors. \no\tThe reward is a) not clear, b) not well motivated when it is explained, and c) not explicitly stated anywhere: it is said that the action-specific reward may be up to 10 times larger than the final reward, but the actual tradeoff parameter between them is not stated. Note that this parameter is important, as using a 10-times larger action-related reward means that the classification-related reward becomes insignificant.\no\tThe state representation used is not clear, and if I understand correctly, it includes only the current depth. This is a severely limited state representation, which does not enable learning actions based on intermediate results\no\tThe continuous versions of the RL algorithms are not explained at all: neither the state representation nor the optimization is described.\no\tThe presentation suffers from severe over-generalization and lack of clarity, which prevented me from understanding the network and algorithms for a specific case. Instead, I would recommend that in future versions of this document a single network, with a specific router and set of decisions, and with a single algorithm, be explained with clear notation end-to-end\n\nBeyond the clarity issues, I suspect also that the novelty is minor (if the state does not include any information about the current output) and that the empirical baseline is lacking. 
However, it is hard to judge these due to lack of clarity.\n\n\nAfter revision:\n- Most of the clarity issues were handled well, and the paper now read nicely\n- It is now clear that routing is not done based on the current input (an example is not dynamically routed based on its current representation). Instead routing depends on the task and depth only. This is still interesting, but is far from reaching context-dependent routing.\n- The results presented are nice and show that task-dependent routing may be better than plain baseline or the stiching alternative. However, since this is a task transfer issue, I believe several data size points should be tested. For example, as data size rises, the task-specific-all-fc alternative is expected to get stronger (as with more data, related task are less required for good performance).\n \n\n- ", "In this paper, the authors present a novel formulation for learning the optimal architecture of a neural network in a multi-task learning framework. Using multi-agent reinforcement learning to find a policy — a mapping from the input layer (and task indicator) to the function block that must be used at the next layer, the paper shows improvement over hard-coded architectures with shared layers. \n\nThe idea is very interesting and the paper is well-written. The extensive review of the existing literature, along with systematically presented qualitative and quantitative results make for a clear and communicative read. I do think that further scrutiny and research would benefit the work since there are instances when the presented approach enjoys some benefits when solving the problems considered. For example, the more general formulation that uses the dispatcher does not appear to work ver well (Fig. 5), and the situation is only improved by using a task specific router. \n\nOverall, I think the idea is worthy of wide dissemination and has great potential for furthering the applications of multi-task learning. ", "Thanks for the suggestion. We will include a chart on this in the final draft.", "As requested, we ran the experiment with a single fully shared model (no task-specific fc layers) on CIFAR. It achieves 37% accuracy (averaged over 2 runs) after 100 epochs of training. This is below the lowest baseline (task-specific-all-fc) which achieves 42%.", "Hi Chrisantha -- thanks for the nice comment and making this interesting connection to your work on pathnets! \n\nWe think there is a clear overlap in the way we are both thinking about this. In particular having a task-specific \"agent\" (whether in the sense of the evolutionary approach or the RL approach) seems to give benefit over the shared (single agent) approach. We too noted during our experiments that diversity was not an issue for the multi-agent case (though it was for the single agent) and originally introduced the collaboration reward to see if we could encourage the multi-agent approach to use fewer paths without sacrificing performance. The hyper-parameter rho is a knob that allows us to experiment with this. Since we put up the first version of the paper we have conducted additional experiments on rho (see the new version Appendix, Sec. 7.1) which show that it is marginally helpful for this task. We found that it wasn't helpful to use rho for the CIFAR and MIN experiments so there set it to 0, but it was useful for MNIST and there we set it to 0.3. 
\n\nOn the point about using the previous modules activation (output) for the router decision-making we've added a table to the appendix (Table 3) which shows which of the RL algorithms use which parts of the state. The state for us consists of three values (v, t, d), where v is the previous layer's activation (or the input), t is an integer task label, and d is the current recursion depth. The most successful approach for us was the WPL (multi-agent) trainer which uses just t and d. That approach is tabular which means that it is not possible to add v to the input directly (though we are looking at approximator versions that could do it...). We did try a few single agent RL algorithm variations which do use v -- for example the PG and Q (approximator) versions -- but these didn't do as well. Appendix table 5 also compares against the multi-agent case (dispatched-routing-all-fc) and the single agent case (single-agent-routing-all-fc) but these also were not able to match the tabular WPL. We think there is clearly potential for approaches which use the previous activation to do better than WPL but doing so greatly increases the size of the search space and we think this does more harm than good in the current training approach. We're working on it though...\n", "-Is it assumed that the actions are binary? \n\nActions are in the set {1, ..., k, PASS} where k is the number of function blocks and PASS is a special action which just causes the iteration to be skipped (the state remains the same).\n\n-I do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability. What is the source of a-symmetry between positive and negative gradients? \n\nRepeated from Section 7.6: The WPL algorithm is a multi-agent policy gradient algorithm designed to help dampen policy oscillation and encourage convergence. It does this by slowly scaling down the learning rate for an agent after a gradient change in that agent’s policy. It determines when there has been a gradient change by using the difference between the immediate reward and historical average reward for the action taken. Depending on the sign of the gradient the algorithm is in one of two scenarios. If the gradient is positive then it is scaled by 1-pi(a_i). Over time if the gradient remains positive it will cause pi(a_i) to increase and so 1-pi(a_i) will go to 0, slowing the learning. If the gradient is negative then it is scaled by pi(a_i). Here again if the gradient remains negative over time it will cause pi(a_i) to decrease eventually to 0, slowing the learning again. Each such slowing helps dampen the variations or oscillations in the policies and eventually to help them converge.\n\n- Page 6:\n- It is not clear why MNist is tested over 200 examples, where there is a much larger test set available\nMNIST is actually tested on 200 samples *per task*, which for 10 tasks is 2,000 examples in total. We can increase the number of test samples and re-run the experiments if you feel that this is insufficient.\n\n- In MIN-MTL I do not understand the motivation from creating superclasses composed of 5 random classes each: why do we need such arbitrary and un-natural class definitions? \n\nFirst, as a practical matter, mini-ImageNet does not have superclasses to use as a natural tasks (e.g. Fish -> {goldfish, flounder, carp}) the way that CIFAR-100 does. 
Second, in real-world problems what constitutes a \"task\" may be quite far from what we would consider \"natural\" taxonomic classification. Imagine, for example, we collect data every hour from a website for several weeks. We can treat each hour in the day as a separate task. Here there will likely be some intra-task similarities and some inter-task differences but not in as coherent or natural a way as in image classification on CIFAR. And we would like to understand if MTL will be helpful in this regime as well. A task with a randomly chosen label set tests our ability to handle tasks with less coherence. It's a logical extreme in which we might expect relatively less positive transfer and potentially more negative transfer, so worth examining.\n\n-Page 7: \nThe results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: Solve the multi-class classification problem using a softmax loss and a unified, non routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer. If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization.\n\nWe performed this experiment early on many times, but since the results were so much worse than either of the baselines we chose, omitted it from the final results. We will run it again with the current hyper-parameters and post the results before the end of the review period.\n\n -Page 8: The author report that when training the controller with single agent methods the policy collapses into choosing a single module for most tasks. However, this is not-surprising, given that the action-based reward (whos strength is unclear) seems to promote such winner-takes-all behavior.\n\nWe have tried this with no collaboration reward (rho=0.0) on and see exactly the same behavior. We believe that it is more a function of a shared policy for all tasks than any particular reward structure.", "Thanks for pointing out many places where we were less than clear. We have re-written or edited many sections of the paper (mostly the earlier sections) to address both the specific typos you point out as well as more systemic issues in clarity. In particular, we have taken your advice and made the presentation less general, focusing on a clear description of what we implemented and employing a consistent notation. We think this has improved the paper. \n\nResponses to questions and comments below:\n\n-The ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again. This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing. Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage.\n\nAs we have clarified in the new version, the collaboration reward was intended to test the ability of the network to learn a more compressed routing scheme while not sacrificing performance. Because we don't know ahead of time how many function blocks per layer are really required to solve the problem (we use #function blocks = #tasks) we wanted to see if we could encourage the network to use fewer. 
We have added a comparison of the effect of the collaboration reward using the hyper-parameter multiplier rho which ranges from 0 (no collaboration reward) to 10 (large collaboration reward). Figure 12 in the appendix shows that to a large extent the performance is robust to changes in rho in the [0, 1] range on CIFAR-MTL using the WPL algorithm. There is the potential for this reward to produce a lack of diversity in the function block choices but this doesn’t happen empirically in the multi-agent approaches. And we see a lack of diversity in the choices made by the single agent approach even with no collaboration reward (rho = 0.0).\n\n-Page 5: It is not described clearly (and better: defined formally) what exactly is the state representation. It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods. If I understand correctly, the state representation used in the tabular algorithms includes only the current depth. If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output. \n\nWe hope we’ve made this clearer in the paper. First, a state is as you say a triple of (v, t, i) where v is in R^d, t is an integer task label, and i is the current recursion depth. Let us take the WPL-router as an example of how this works for a tabular approach. WPL has one agent per task. Each agent has a table of size max depth x num function blocks which holds the action probabilities for each layer. So the routing function depends on both the task label and depth not just the depth. The individual policy decision function of each task-agent uses only the depth to decide its route. We have also experimented with a \"dispatched\" approach which adds an extra dispatching agent which learns pick the agent used to route the instance rather than using the task label to index it. This approach (called \"routing-all-fc dispatched\" in Figure 5) uses v and t to pick the router agent, which itself uses depth to make its decision. We have added a table to the appendix which states explicitly which parts of the state are used for each approach used in Figures 4 and 5.\n\n- The functional approximation versions are even less clear: Again it is not clear what information is contained in the state and how it is represented. In addition it is not clear in this case what network architecture is used for computation of the policy (PG) or valkue (Q-learning), and how exactly they are optimized.\n\nWe have clarified this in the paper. The network architecture used for approximation is a 2 layer MLP with hidden dimension of size 64. The state is encoded by converting the task and depth components to one-hot vectors and concatenating them to the input vector representation (if used). All training is performed with SGD (hyperparameters described in the text). We also tried with Adam but it performed marginally less well.\n\n-The WPL algorithm is not clear to me\n\nWe have added a more detailed explanation to the appendix in Section 7.6.\n\n-In algorithm box 3, what is R_k? I do not see it defined anywhere. Is it related to \\hat{R}? how?\n\nThe algorithms have been completely reworked to make clear their inputs and outputs. R_i is the return for action a_i (the sum of the future rewards + a final reward). 
\\hat{R}_i is the running average return for action a_i over the samples in the dataset.\n\n\n\ncontinued in next reply", "-The idea is very interesting and the paper is well-written. The extensive review of the existing literature, along with systematically presented qualitative and quantitative results make for a clear and communicative read.\n\nThanks!\n\n-I do think that further scrutiny and research would benefit the work since there are instances when the presented approach enjoys some benefits when solving the problems considered. For example, the more general formulation that uses the dispatcher does not appear to work ver well (Fig. 5), and the situation is only improved by using a task specific router. \n\nWe agree. The dispatched router which tries to learn which agent to send the instance to doesn't perform as well. It would be even more beneficial (but harder still) to do dispatched routing without using the task label at all. This would greatly expand the applicability of the approach (it could then be used potentially to improve any neural network on any dataset). We are actively investigating ways to improve this...\n\n", "The Related Work section includes many recent work, however, the comparison of this work and previous work is not clear. For example: \"Routing Networks share a common goal with techniques for automated selective transfer learning using attention (Rajendran et al., 2017) and learning gating mechanisms between representations (Stollenga et al., 2014), (Misra et al., 2016), (Ruder et al., 2017). However, these techniques have not been shown to scale to large numbers of routing decisions and task.\" \n\n-Why couldn't these techniques scale to large numbers of routing decisions and task? How could the proposed network in this paper scale?\n\nThe papers by Misra and Ruder experiment with only 2 tasks at a time so it was not clear if they would scale to more tasks. These approaches make 2 copies of a convnet and connect them at a small number of junctures (e.g. 5) with cross-stitch or sluice connections to allow inter-task sharing. This is a soft-attention approach which computes the input for each function block at layer i as a linear combination of the activations of the function blocks at layer i-1. If there are k such function blocks at each layer, this introduces O(k^2) additional parameters for each connection. By comparison for routing using the WPL (tabular) approach we need no additional parameters and only O(num tasks x max depth x num function blocks) additional memory. \n\nWe have now re-implemented Misra and applied it to 2, 3, 5, 10, and 20 task problems for CIFAR-100. In all cases routing networks are uniformly better (on a separate test set) over the entire course of training by a large margin. We have conducted scaling experiments (see the Appendix) which show that the per-task training cost is effectively constant for routing networks but grows linearly for cross-stitch networks. In addition, routing networks are significantly faster for all numbers of tasks, achieving per-task time improvements of approximately 2-6X on the 2, 3, 5, and 10 task experiments. These per-task improvements translate to real gains in training time. For example, on CIFAR-100 (20 task), we see an average (over 3 runs) training time improvement of 85% (5.6 hours for routing nets to reach cross-stitch final performance levels achieved after 38 hours of training).\n\n-The result section also has no comparison with the previously published work. 
Is it possible to set similar experiments with the previously published material on this topic and compare the results?\n\nYes, see the new comparison with Misra's cross-stitch networks.\n", "We want to thank all the reviewers for their careful reading and thoughtful comments. We have tried to address all the issues and questions raised in a new revision of the paper. \n\nTL;DR:\n1. Comparison to Cross-stitch networks (CVPR 2016, oral) for multi-task learning over which we show significant gains in accuracy for 2, 3, 5, 10, and 20 tasks; a constant vs. linear time scaling of per-function-block training cost and a roughly 85% average reduction in actual training time for routing nets to achieve cross-stitch accuracy levels (38 hours for cross-stitch vs 5.6 hours for routing nets on CIFAR-MTL)\n\n2. Extensive re-writing of the paper for clarity and consistency.\n\n3. Additional experiments showing the performance effect of the collaboration reward rho as well as comparison to cross-stitch and the baselines for 2, 3, 5, and 10 task problems.\n\nSpecific responses in separate replies to each reviewer.", "Having experimented with learned routing in order to extend our pathNet work, I was very interested in the authors insight that an RL algorithm which explicitly rewards convergence to the same pathway was apparently critical. This makes sense. Previous methods of gating such as \"Outrageously large nets\" have the opposite diversity cost, but in RL where it is necessary to train a pathway hard for some time, the opposite prior is clearly beneficial. I think this is an important insight of the authors. In pathNet this was not required as evolutionary dynamics automatically achieved this convergence to a single pathway per task. \n\nAlso, much like our pathNet work the authors found it was important to reset the RL agent for each task, just as we reset the population of pathways at each task. We also found such a resetting to be critical. Finally, whereas we focus on transfer learning the authors focus on multi-task learning which has the additional constraint that it is not possible to fix modules learned in previous tasks. Also, I was interested in the output of the previous module being important for the router. I would like to see a test of whether this is critical, or whether it is enough to see the observation and task label. Another difference is that gating is not combinatorial in this paper, only using one module per layer instead of K per layer. \n\nPathNet: https://arxiv.org/abs/1701.08734\n\nMany thanks for a fascinating paper. " ]
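The WPL update debated in the record above (an immediate-minus-average-return gradient, scaled by 1-pi(a_i) when positive and by pi(a_i) when negative) is easy to make concrete. The sketch below is an illustrative reconstruction from the thread's prose only, not the authors' code; the function name, the learning rate, and the clip-and-renormalize step are our assumptions.

```python
import numpy as np

def wpl_update(pi, action, ret, avg_ret, lr=0.1):
    """One Weighted Policy Learner step for a single tabular agent.

    pi      -- action-probability vector for one (task, depth) cell
    ret     -- return observed for the chosen action (R_i)
    avg_ret -- running-average return for that action (\\hat{R}_i)
    """
    grad = ret - avg_ret
    # Asymmetric damping: a persistently positive gradient pushes
    # pi[action] toward 1, so its multiplier (1 - pi[action]) shrinks;
    # a persistently negative gradient pushes pi[action] toward 0,
    # so its multiplier pi[action] shrinks. Both slow the learning,
    # damping policy oscillation as described in the reply above.
    scale = (1.0 - pi[action]) if grad > 0 else pi[action]
    pi = pi.copy()
    pi[action] += lr * grad * scale
    pi = np.clip(pi, 1e-6, None)
    return pi / pi.sum()  # renormalize to keep a valid distribution
```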
[ 7, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry8dvM-R-", "iclr_2018_ry8dvM-R-", "iclr_2018_ry8dvM-R-", "r1AHkVdgf", "r1AHkVdgf", "ry3NDBHGf", "BJIwfksMG", "r1AHkVdgf", "Hk65p-5lf", "HyNnyzceG", "iclr_2018_ry8dvM-R-", "iclr_2018_ry8dvM-R-" ]
iclr_2018_rkhlb8lCZ
Wavelet Pooling for Convolutional Neural Networks
Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification. The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress. Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options. We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling. This method performs a second-level wavelet decomposition of the features and discards the first-level subbands to reduce feature dimensions. This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions. Experimental results on four benchmark classification datasets demonstrate that our proposed method outperforms or performs comparably to methods like max, mean, mixed, and stochastic pooling.
accepted-poster-papers
The idea of using wavelet pooling is novel and will motivate much interesting research in this direction, but more thorough experimental justification, such as that recommended by the reviewers, would make the paper better. Overall, the committee feels this paper will bring value to the conference.
train
[ "SJgEVADSz", "Sk8IoTPSf", "rJJWFNNef", "B1zf5Uvxf", "B1Q6kMqgM", "r1JTSdEzf", "SJv6OO4Mf", "B1C-Nximz" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Short answer is yes. We would love to give access to the code. The longer answer is that it needs to be made more efficient so that the implementation time is reduced. When it was written it wasn't written for CUDA, or MEX, and thus doesn't have the speedups afforded by precompiling, etc. When that happens we will make it available.", "I was just wondering if your code will be available to other researchers at some point? The idea is very interesting", "The paper proposes \"wavelet pooling\" as an alternative for traditional subsampling methods, e.g. max/average/global pooling, etc., within convolutional neural networks. \nExperiments on the MNIST, CIFAR-10, SHVN and KDEF datasets, shows the proposed wavelet-based method has\ncompetitive performance with existing methods while still being able to address the overfitting behavior of max pooling.\n\nStrong points\n- The method is sound and well motivated.\n- The proposes method achieves competitive performance.\n\nWeak points\n- No information about added computational costs is given.\n- Experiments are conducted in relatively low-scale datasets.\n\n\nOverall the method is well presented and properly motivated. The paper as a good flow and is easy to follow. The authors effectively demonstrate with few toy examples the weaknesses of traditional methods, i.e max pooling and average pooling. Moreover, their extended evaluation on several datasets show the performance of the proposed method in different scenarios.\n\nMy main concerns with the manuscript are the following.\n\nCompared to traditional methods, the proposed methods seems to require higher computation costs. In a deep neural network setting where operations are conducted a large number of times, this is a of importance. However, no indication is given on what are the added computation costs of the proposed method and how that compares to existing methods. A comparison on that regard would strengthen the paper.\n\nIn many of the experiments, the manuscript stresses the overfitting behavior of max pooling. This makes me wonder whether this is caused by the fact that experiments are conducted or relatively smaller datasets. While the currently tested datasets are a good indication of the performance of the proposed method, an evaluation on a large scale scenario, e.g. ILSVRC'12, could solidify the message sent by this manuscript. Moreover, it would increase the relevance of this work in the computer vision community.\n\nFinally, related to the presentation, I would recommend presenting the plots, i.e. Fig. 8,10,12,14, for the training and validation image subsets in two separate plots. Currently, results for training and validation sets are mixed in the same plot, and due to the clutter it is not possible to see the trends clearly.\nSimilarly, I would recommend referring to the Tables added in the paper when discussing the performance of the proposed method w.r.t. traditional alternatives.\n\nI encourage the authors to address my concerns in their rebuttal", "I think this paper presents an interesting take on feature pooling. In particular, the idea is to look at pooling as some form of a lossy process, and try to find such a process such that it discards less information given some decimation criterion. 
Once formulating the problem like this, it becomes obvious that wavelets are a very good candidate.\n\nPros:\n- The nice thing about this method is that average pooling is in some sense a special case of this method, so we can see a clear connection.\n- Lots of experiments, and results, which show the method both performing the best in some cases, and not the best in others. I applaud the authors for publishing all the experiments they ran because some may have been tempted to \"forget\" about the experiments in which the proposed method did not perform the best.\n\nCons:\n- No comparison to non-wavelet methods. For example, one obvious comparison would have been to look at using a DCT or FFT transform where the output would discard high frequency components (this can get very close to the wavelet idea!).\n- This method has the potential to show its potential on larger pooling windows than 2x2. I would have loved to see some experiments that prove/disprove this.\n\nOther comments:\n- Given that this method's flexibility, I could imagine this generate a new class of pooling methods based on lossy transforms. For example, given a MxNxK input, the wavelet idea can be made to output (M/D)x(N/D)x(K/D) (where D is decimation factor). Of interest is the fact that channels can be treated just like any other dimension, since information will be preserved!\n\nFinal comments:\n- I like the idea and it seems novel it may lead to some promising research directions related to lossy pooling methods/channel aggregation. As such, I think it will be a nice addition to ICLR, especially if the authors decide to run some of the experiments I was suggesting, namely: show what happens when larger pooling windows are used (say 4x4 instead of 2x2), and compare to other lossy techniques (such as Fourier or cosine-transforms).", "The paper proposes to use discrete wavelet transforms combined with downsampling to achieve arguably better pooling output compared to average pooling or max pooling. The idea is tested on small-scale datasets such as MNIST and CIFAR.\n\nOverall, a major issue of the paper is the linear nature of DWT. Unless I misunderstood the paper, linear DWT is being adopted in the paper, and combined with the downsampling and iDWT stage, the transform is linear with respect to the input: DWT and iDWT are by definition linear, and the downsampling can be viewed as multiplying by 0. As a result, if my understanding is correct, this explains why the wavelet pooling is almost the same as average pooling in the experiments (other than MNIST). See figures 10, 12 and 14.\n\nThe rest of the paper reads reasonable, but I am not sure if they offset the issue above.\n\nOther minor comments:\n\n- I am not sure if the issue in Figure 2 applies in general to image classification issues. It surely is a mathematical adversarial to the max pooling principle, but note that this only applies to the first input layer, as such behavior in later layers could be offset by switching the sign bits of the previous layer's filter.\n\n- The experiments are largely conducted with very small scale datasets. As a result I am not sure if they are representative enough to show the performance difference between different pooling methods.", "** Update ** I have not been able to run experiments or add computational costs, etc due to varying life factors (i.e. graduation & packing/moving for my postdoc in such a short window of time).\n\nThank you for your review. 
I agree with everything you mentioned in your pros, cons, and other commentary. I tried to show the whole spectrum of our results so that we could show integrity, and a lane for improvement into this initial idea. I will briefly address some of the points you made as well.\n\nNon-wavelet methods:\n- I wholeheartedly agree with comparing the DWT method to DCT, FFT, etc. \n- I didn't implement these for this paper because I wanted to focus on the effect of the DWT\n- I can see a pathway for a journal or another paper comparing DWT, DCT, FFT, etc style methods to traditional methods\n\nLarger pooling windows:\n- I agree, the experiments should have included another window size\n- Initially I wanted to see the performance on the 1st level (2x2) before reevaluating.\n- I will run a comparison on 4x4!\n\nI also can see a lane for further research into lossy pooling/channel aggregation, and I hope to be a contributor. I will try to compare to the other lossy techniques. However, I may still reserve those results for another work for the purpose of explaining deeper the reasoning, pros, cons, etc. of this type of approach vs. the traditional approach.\n", "** Update ** I have not been able to run experiments or add computational costs, etc due to varying life factors (i.e. graduation & packing/moving for my postdoc in such a short window of time).\n\nThank you for your review. I will address your main concerns below with our paper.\n\nComputation costs:\n- You are correct, this method does require higher computational costs\n- In our initial implementation of this method, we didn't employ advanced programming methods and pre-compiling that would speed up the computations and reduce the number of operations. In the future these will be integrated to ensure usability.\n- Our method uses FWT (Fast Wavelet transform) which is much faster than DWT. O(N) versus O(k*2^n)\n- I will add a comparison based on mathematical operations and an explanation on how to lessen these costs\n\nMax pooling overfitting:\n- It is a strong possibility that the nature of the datasets contributes to max pooling overfitting faster.\n- However, in various papers we surveyed, this conclusion was reached because of the nature of the algorithm.\n- I do suspect therefore that in a larger dataset this trend would still be true, but perhaps not as fast.\n\nLarger Datasets:\n- I agree that testing on a larger dataset would remove all doubt.\n- I will apply this method to a larger dataset, but I am not sure it will make it into this paper, or be reserved for another manuscript.\n\nPresentation:\n- I will redo the plots to fit the manner you described\n- I will reference the tables when discussing the performance of the proposed wrt traditional alternatives", "Linear DWT concerns:\n- Haar wavelet is linear in nature\n- Wavelet are linear, but there also are nonlinear wavelets\n- The linearity of the wavelet doesn't impact its ability to constructively be applied to linear or nonlinear data\n- Average pooling and our method have some overlap in their approach, but differ greatly in execution.\n\nLarger Datasets:\n- I agree that testing on a larger dataset would remove all doubt.\n- I will apply this method to a larger dataset, but I am not sure it will make it into this paper, or be reserved for another manuscript.\n\nWe used the Haar wavelet basis as a starting point, a prototype to prove that wavelets could be a viable alternative to the traditional methods. 
Although Haar is linear as a basis, there are others that are not, and are more advanced in nature. We believe that such a comparison to these bases would be best suitable for another paper or journal article where more depth and discussion could be given on the wavelets themselves, versus the viability of the method." ]
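To make the pooling operation under discussion concrete: the abstract describes decomposing a feature map to a second level and discarding the first-level subbands. Below is a minimal single-channel sketch using PyWavelets; it is our reading of that description (the paper's boundary handling and exact implementation may differ), and, as Reviewer 3's linearity point suggests, with the Haar basis the result coincides with the level-1 approximation and is therefore close to scaled average pooling.

```python
import numpy as np
import pywt

def wavelet_pool2d(x, wavelet="haar"):
    """Second-level DWT pooling of one feature-map channel.

    Decompose twice, keep only the second-level subbands, and
    reconstruct one level up, halving each spatial dimension.
    """
    cA1, _details1 = pywt.dwt2(x, wavelet)   # level 1; details discarded
    cA2, details2 = pywt.dwt2(cA1, wavelet)  # level 2; all subbands kept
    return pywt.idwt2((cA2, details2), wavelet)

feat = np.random.rand(8, 8)
print(wavelet_pool2d(feat).shape)            # (4, 4): half resolution
```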
[ -1, -1, 7, 9, 4, -1, -1, -1 ]
[ -1, -1, 4, 3, 4, -1, -1, -1 ]
[ "Sk8IoTPSf", "iclr_2018_rkhlb8lCZ", "iclr_2018_rkhlb8lCZ", "iclr_2018_rkhlb8lCZ", "iclr_2018_rkhlb8lCZ", "B1zf5Uvxf", "rJJWFNNef", "B1Q6kMqgM" ]
iclr_2018_SJ1Xmf-Rb
FearNet: Brain-Inspired Model for Incremental Learning
Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. FearNet is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks.
accepted-poster-papers
A novel brain-inspired dual-memory system for the important problem of incremental learning, with very good results.
test
[ "rJ96Jgclf", "rkvqPjUVz", "HyirgqDgM", "HkvoWK6ef", "SkTJreuzM", "HkmO7eOGf", "HyFfVxOzG", "Bk1Cfe_MM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I quite liked the revival of the dual memory system ideas and the cognitive (neuro) science inspiration. The paper is overall well written and tackles serious modern datasets, which was impressive, even though it relies on a pre-trained, fixed ResNet (see point below).\n\nMy only complaint is that I felt I couldn’t understand why the model worked so well. A better motivation for some of the modelling decisions would be helpful. For instance, how much the existence (and training) of a BLA network really help — which is a central new part of the paper, and wasn’t in my view well motivated. It would be nice to compare with a simpler baseline, such as a HC classifier network with reject option. I also don’t really understand why the proposed pseudorehearsal works so well. Some formal reasoning, even if approximate, would be appreciated.\n\nSome additional comments below:\n\n- Although the paper is in general well written, it falls on the lengthy side and I found it difficult at first to understand the flow of the algorithm. I think it would be helpful to have a high-level pseudocode presentation of the main steps.\n\n- It was somewhat buried in the details that the model actually starts with a fixed, advanced feature pre-processing stage (the ResNet, trained on a distinct dataset, as it should). I’m fine with that, but this should be discussed. Note that there is evidence that the neuronal responses in areas as early as V1 change as monkeys learn to solve discrimination tasks. It should be stressed that the model does not yet model end-to-end learning in the incremental setting.\n\n- p. 4, Eq. 4, is it really necessary to add a loss for the intermediate layers, and not only for the input layer? I think it would be clearer to define the \\mathcal{L} explictily somewhere. Also, shouldn’t the sum start at j=0?", "I am happy with the revision! My concerns regarding the FearNet mechanism have been properly addressed. The issue of class imbalance, computational hurdles in storage and the issue of multiple data modalities have been addressed appropriately.", "Quality: The paper presents a novel solution to an incremental classification problem based on a dual memory system. The proposed solution is inspired by the memory storage mechanism in brain.\n\nClarity: The problem has been clearly described and the proposed solution is described in detail. The results of numerical experiments and the real data analysis are satisfactory and clearly shows the superior performance of the method compared to the existing ones.\n\nOriginality: The solution proposed is a novel one based on a dual memory system inspired by the memory storage mechanism in brain. The memory consolidation is inspired by the mechanisms that occur during sleep. The numerical experiments showing the FearNet performance with sleep frequency also validate the comparison with the brain memory system.\n\nSignificance: The work discusses a significant problem of incremental classification. Many of the shelf deep neural net methods require storage of previous training samples too and that slows up the application to larger dataset. Further the traditional deep neural net also suffers from the catastrophic forgetting. 
Hence, the proposed work provides a novel and scalable solution to the existing problem.\n\npros: (a) a scalable solution to the incremental classification problem using a brain inspired dual memory system\n (b) mitigates the catastrophic forgetting problem using a memory consolidation by pseudorehearsal.\n (c) introduction of a subsystem that allows which memory system to use for the classification\n\ncons: (a) How FearNet would perform if imbalanced classes are seen in more than one study sessions?\n (b) Storage of class statistics during pseudo rehearsal could be computationally expensive. How to cope with that?\n (c) How FearNet would handle if there are multiple data sources?", "\nThis paper addresses the problem of incremental class learning with brain inspired memory system. This relies on 1/ hippocampus like system relying on a temporary memory storage and probabilistic neural network classifier, 2/ a prefrontal cortex-like ladder network architecture, performing joint autoencoding and classification, 3/ an amygdala-like classifier that combines the decision of both structures. The experiments suggests that the approach performs better than state-of-the-art incremental learning approaches, and approaches offline learning.\nThe paper is well written. The main issue I have with the approach is the role of the number of examples stored in hippocampus and its implication for the comparison to state-of-the art approaches.\nComments:\nIt seems surprising to me that the network manages to outperform other approaches using such a simplistic network for hippocampus (essentially a Euclidian distance based classifier). I assume that the great performance is due to the fact that a lot of examples per classes are stored in hippocampus. I could not find an investigation of the effect of this number on the performance. I assume this number corresponds to the mini-batch size (450). I would like that the authors elaborate on how fair is the comparison to methods such as iCaRL, which store very little examples per classes according to Fig. 2. I assume the comparison must take into account the fact that FearNet stores permanently relatively large covariance matrices for each classes.\nOverall, the hippocampus structure is the weakness of the approach, as it is so simple that I would assume it cannot adapt well to increasingly complex tasks. Also, making an analogy with hippocampus for such architecture seems a bit exaggerated.\n", "\nReviewer #3: How FearNet would perform if imbalanced classes are seen in more than one study sessions?\n\nAuthors: FearNet generates a balanced number of pseudoexamples during its sleep phase and when updating BLA, so class imbalance is not an issue. To test this, we did an experiment with CIFAR-100 where we selected a random number of samples from each class (20-500) so that the class distribution was imbalanced. We would expect a slight degradation in performance because we aren’t using as many samples to train FearNet as we did in the paper (i.e., the model doesn’t generalize as well for the test set.) The results (\\omega_{base} = 0.884, \\omega_{new} = 0.729, and \\omega_{all} = 0.897) indicate that FearNet is robust to imbalanced class distributions.\n\nReviewer #3: Storage of class statistics during pseudo rehearsal could be computationally expensive. How to cope with that?\n\nAuthors: We agree that storing class statistics is a major bottleneck, but FearNet still manages to be less memory intensive than other models. 
In Table 5, we show that the storage cost for FearNet is still lower than previous methods. In Table 6, we show that FearNet still outperforms other methods when only a diagonal covariance matrix is stored for each class, while decreasing storage costs by 65%. \n\nReviewer #3: How FearNet would handle if there are multiple data sources?\n\nAuthors: We assume that the reviewer is referring to FearNet’s ability to handle multiple data modalities. In Table 4, we explored FearNet’s ability to simultaneously learn audio and visual information. The results showed that FearNet was able to simultaneously learn datasets with very different data representations. \n", "\nReviewer #1: It seems surprising to me that the network manages to outperform other approaches using such a simplistic network for hippocampus (essentially a Euclidian distance based classifier). I assume that the great performance is due to the fact that a lot of examples per classes are stored in hippocampus.\n\nAuthors: Our comparison to the state-of-the-art is valid, and we have thoroughly checked our results. The HC network is simple, and we are comparing not only it, but the entire network to the state-of-the-art. We chose to implement HC using nearest neighbor density estimation because it is able to make inferences from new data immediately without expensive loops through the training data and often works well when data is scarce (low-shot learning), and it has the perfect properties for enabling us to use pseudorehearsal for consolidating information from HC to mPFC. Psuedorehearsal requires mixing the raw recently observed data with generated examples of data observed long ago. Our HC model can be thought of as a simple buffer that can enable inference to be made until the information is transferred to mPFC, at which point the HC model is erased. After “sleeping,” FearNet does not store old values in HC because those memories now reside in the mPFC network. FearNet uses its BLA module to determine where the memory resides. Since FearNet could erroneously predict the network where the memory resides, we don’t believe that HC artificially inflates FearNet performance. We tested this by performing the incremental learning experiment for the 1-nearest neighbor (1-NN) for all three datasets (see Table 2 in the revised manuscript). FearNet outperformed 1-NN because 1-NN was unable to generalize to the test data as well as FearNet. Additionally, compared to FearNet, 1-NN is significantly less memory efficient (Table 5) and very slow at making predictions.\n\nReviewer #1: I could not find an investigation of the effect of this number on the performance. I assume this number corresponds to the mini-batch size (450).\n\nAuthors: We did investigate FearNet performance as a function of how many classes are learned (stored in HC) before its sleep phase is performed (see Fig. 5 in the discussion). The mini-batch is only for 1) the sleep phase and 2) updating BLA. To make this clearer, we added “We investigate FearNet’s performance as a function of how much data is stored in HC in Section 6.2.” to the end of Section 4.1.\n\nReviewer #1: I would like that the authors elaborate on how fair is the comparison to methods such as iCaRL, which store very little examples per classes according to Fig. 2. I assume the comparison must take into account the fact that FearNet stores permanently relatively large covariance matrices for each classes.\n\nAuthors: For CIFAR-100, iCaRL stores 2,000 exemplars for replay. 
At the beginning, they are able to store most/all of the exemplars (there are 500 per class available) since the buffer maxes out at 2,000. As time moves on, that number decreases as it has to make room for new classes. By the end, there are 20 exemplars per class. We have re-written the last paragraph in Section 2 to clarify this point. In comparison, our model stores the mean/covariance matrix for each class, and then generates new “exemplars” (pseudoexamples) during sleep. Note that our method still outperforms iCaRL and other methods when only a diagonal covariance is stored, as discussed in Section 6.2 (see Table 6). Using MLP type architectures for iCaRL and FearNet, we showed that storing class statistics is still more memory efficient than storing these exemplars (see Table 5). Our future work will focus on using generative models that don’t require class statistics for pseudorehearsal. \n\nReviewer #1: Overall, the hippocampus structure is the weakness of the approach, as it is so simple that I would assume it cannot adapt well to increasingly complex tasks. Also, making an analogy with hippocampus for such architecture seems a bit exaggerated.\n\nAuthors: We agree that HC could be improved, and we included in our future work that we want to replace HC with a semi-parametric model, instead of an entirely non-parametric model. We also agree that the low-level operations that occur in the individual FearNet modules (e.g., HC) are not entirely analogous to operations that occur in the brain; and to be fair, we don’t make that claim. Our main inspiration for FearNet is 1) the brain’s dual-memory architecture for rapid acquisition of new information and long-term storage of old information, 2) how mammalian brains consolidate recent memories to long term storage during sleep, and 3) the recent and remote recall pathways that BLA uses. \n", "\nReviewer #2: My only complaint is that I felt I couldn’t understand why the model worked so well. A better motivation for some of the modelling decisions would be helpful. For instance, how much the existence (and training) of a BLA network really help — which is a central new part of the paper, and wasn’t in my view well motivated. It would be nice to compare with a simpler baseline, such as a HC classifier network with reject option. \n\nAuthors: We do explore how the BLA effects FearNet performance in an ablation study shown in Table 3. We actually tried different variants for BLA before settling on the model that we used in the paper. We have included the results of the other variants in the supplemental material to help justify our decisions. \n\nReviewer #2: I also don’t really understand why the proposed pseudorehearsal works so well. Some formal reasoning, even if approximate, would be appreciated.\n\nAuthors: Rehearsal and psuedorehearsal are old ideas from the 1990s. We have added more justification for why they help alleviate catastrophic forgetting in Section 2 and in the discussion.\n\nReviewer #2: Although the paper is in general well written, it falls on the lengthy side and I found it difficult at first to understand the flow of the algorithm. I think it would be helpful to have a high-level pseudocode presentation of the main steps.\n\nAuthors: The high-level pseudocode for FearNet’s train and predict functionality is a great idea. We have included this in the supplemental material of our revised version. 
\n\nReviewer #2: It was somewhat buried in the details that the model actually starts with a fixed, advanced feature pre-processing stage (the ResNet, trained on a distinct dataset, as it should). I’m fine with that, but this should be discussed. Note that there is evidence that the neuronal responses in areas as early as V1 change as monkeys learn to solve discrimination tasks. It should be stressed that the model does not yet model end-to-end learning in the incremental setting.\n\nAuthors: We have included the following sentence in the beginning of Section 4, “In this paper, we use pre-trained embeddings of the input (e.g., ResNet).” We think representation learning is an important next step, and it will be incorporated into FearNet 2.0, which is currently in its early planning stages.\n\nReviewer #2: p. 4, Eq. 4, is it really necessary to add a loss for the intermediate layers, and not only for the input layer? I think it would be clearer to define the \\mathcal{L} explictily somewhere. Also, shouldn’t the sum start at j=0?\n\nAuthors: Thank you for pointing that out. We have fixed Eq. 4 and defined the \\mathcal{L} term to make it clear that we are computing the MSE loss between the output of each hidden layer and the input/output of the mPFC autoencoder. The rationale for using MSE losses at the intermediate layers stem from Valpola (2015), where he showed that errors in deeper layers had a harder time being corrected because they were further away from the training signal (i.e., data layer). Adding the multi-layer loss forces the autoencoder to correct errors at every layer. A good autoencoder fit is important for our framework because it is directly related to the fidelity of the pseudoexamples being generated for sleep phases.\n", "First, we would like to thank the reviewers for their valuable feedback. Their comments have helped us to improve the original manuscript. All three reviewers expressed that they liked our new brain-inspired algorithm for incremental class learning. We identified two main issues that they raised: 1) the reviewers wanted more justification for architectural decisions (with a focus on HC and BLA); and 2) the reviewers wanted more explanation for why pseudorehearsal works for mitigating catastrophic forgetting during incremental class learning. Additionally, the reviewers suggested a number of minor changes that will make the paper clearer and enable others to better reproduce our work, although we will make all of our code available once the paper is accepted. We address each reviewer comment individually. Please let us know if there are any other questions/concerns regarding our revised manuscript. Thank you!" ]
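The pseudorehearsal mechanism the authors describe (store a mean and covariance per class, then generate balanced pseudoexamples in the embedding space during sleep) reduces, in its simplest form, to Gaussian sampling. A hedged sketch follows; all names are ours, and the diagonal-covariance variant mentioned in the responses would store only the per-dimension variances.

```python
import numpy as np

def make_pseudoexamples(class_stats, per_class, seed=0):
    """Draw a balanced set of pseudoexamples from stored class statistics.

    class_stats -- dict: label -> (mean, cov) in the embedding space
    per_class   -- number of pseudoexamples generated for every class
    """
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for label, (mu, cov) in class_stats.items():
        xs.append(rng.multivariate_normal(mu, cov, size=per_class))
        ys.append(np.full(per_class, label))
    # The balanced pseudoexamples are mixed with the raw recently
    # observed data during the consolidation ("sleep") phase.
    return np.concatenate(xs), np.concatenate(ys)
```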
[ 7, -1, 7, 6, -1, -1, -1, -1 ]
[ 2, -1, 4, 2, -1, -1, -1, -1 ]
[ "iclr_2018_SJ1Xmf-Rb", "SkTJreuzM", "iclr_2018_SJ1Xmf-Rb", "iclr_2018_SJ1Xmf-Rb", "HyirgqDgM", "HkvoWK6ef", "rJ96Jgclf", "iclr_2018_SJ1Xmf-Rb" ]
iclr_2018_BJehNfW0-
Do GANs learn the distribution? Some Theory and Empirics
Do GANs (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of Goodfellow et al. (2014) suggested they do, if they were given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al. (2017) raised doubts about whether the same holds when the discriminator has bounded size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support. In other words, the training objective is unable to prevent mode collapse. The current paper makes two contributions. (1) It proposes a novel test for estimating support size using the birthday paradox of discrete probability. Using this test, evidence is presented that well-known GAN approaches do learn distributions of fairly low support. (2) It theoretically studies encoder-decoder GAN architectures (e.g., BiGAN/ALI), which were proposed to learn more meaningful features via GANs, and consequently to also solve the mode-collapse issue. Our result shows that such encoder-decoder training objectives also cannot guarantee learning of the full distribution, because they cannot prevent serious mode collapse. More seriously, they cannot prevent the learning of meaningless codes for data, contrary to the usual intuition.
accepted-poster-papers
* presents a novel way of analyzing GANs using the birthday paradox and provides a theoretical construction that shows bidirectional GANs cannot escape specific cases of mode collapse * significant contribution to the discussion of whether GANs learn the target distribution * thorough justifications
train
[ "rkhhruYgM", "B1jWee9eM", "B1g5pBTxz", "ByMIdi0mz", "H1O1IXeff", "BJImH7xfz", "SJaoN7gfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper adds to the discussion on the question whether Generative Adversarial Nets (GANs) learn the target distribution. Recent theoretical analysis of GANs by Arora et al. show that of the discriminator capacity of is bounded, then there is a solution the closely meets the objective but the output distribution has a small support. The paper attempts to estimate the size of the support for solutions produced by typical GANs experimentally. The main idea used to estimate the support is the Birthday theorem that says that with probability at least 1/2, a uniform sample (with replacement) of size S from a set of N elements will have a duplicate given S > \\sqrt{N}. The suggested plan is to manually check for duplicates in a sample of size s and if duplicate exists, then estimate the size of the support to be s^2. One should note that the birthday theorem assumes uniform sampling. In the revised versions, it has been clarified that the tested distribution is not assumed to be uniform but the distribution has \"effectively\" small support size using an indistinguishability notion. Given this method to estimate the size of the support, the paper also tries to study the behaviour of estimated support size with the discriminator capacity. Arora et al. showed that the output support size has nearly linear dependence on the discriminator capacity. Experiments are conducted in this paper to study this behaviour by varying the discriminator capacity and then estimating the support size using the idea described above. A result similar to that of Arora et al. is also given for the special case of Encoder-Decoder GAN.\n\nEvaluation: \nSignificance: The question whether GANs learn the target distribution is important and any significant contribution to this discussion is of value. \n\nClarity: The paper is written well and the issues raised are well motivated and proper background is given. \n\nOriginality: The main idea of trying to estimate the size of the support using a few samples by using birthday theorem seems new. \n\nQuality: The main idea of this work is to give a estimation technique for the support size for the output distribution of GANs. \n", "This paper proposes a clever new test based on the birthday paradox for measuring diversity in generated samples. The main goal is to quantify mode collapse in state-of-the-art generative models. The authors also provide a specific theoretical construction that shows bidirectional GANs cannot escape specific cases of mode collapse.\nUsing the birthday paradox test, the experiments show that GANs can learn and consistently reproduce the same examples, which are not necessarily exactly the same as training data (eg. the triplets in Figure 1).\nThe results are interpreted to mean that mode collapse is strong in a number of state-of-the-art generative models.\nBidirectional models (ALI, BiGANs) however demonstrate significantly higher diversity that DCGANs and MIX+DCGANs.\nFinally, the authors verify empirically the hypothesis that diversity grows linearly with the size of the discriminator.\n\nThis is a very interesting area and exciting work. The main idea behind the proposed test is very insightful. The main theoretical contribution stimulates and motivates much needed further research in the area. In my opinion both contributions suffer from some significant limitations. However, given how little we know about the behavior of modern generative models, it is a good step in the right direction.\n\n\n1. 
The biggest issue with the proposed test is that it conflates mode collapse with non-uniformity. The authors do mention this issue, but do not put much effort into evaluating its implications in practice, or parsing Theorems 1 and 2. My current understanding is that, in practice, when the birthday paradox test gives a collision I have no way of knowing whether it happened because my data distribution is modal, or because my generative model has bad diversity. Anecdotally, real-life distributions are far from uniform, so this should be a common issue. I would still use the test as a part of a suite of measurements, but I would not solely rely on it. I feel that the authors should give a more prominent disclaimer to potential users of the test.\n\n2. Also, given how mode collapse is the main concern, it seems to me that a discussion on coverage is missing. The proposed test is a measure of diversity, not coverage, so it does not discriminate between a generator that produces all of its samples near some mode and another that draws samples from all modes of the true data distribution. As long as they yield collisions at the same rate, these two generative models are ‘equally diverse’. Isn’t coverage of equal importance?\n\n3. The other main contribution of the paper is Theorem 3, which shows—via a very particular construction on the generator and encoder—that bidirectional GANs can also suffer from serious mode collapse. I welcome and are grateful for any theory in the area. This theorem might very well capture the underlying behavior of bidirectional GANs, however, being constructive, it guarantees nothing in practice. In light of this, the statement in the introduction that “encoder-decoder training objectives cannot avoid mode collapse” might need to be qualified. In particular, the current statement seems to obfuscate the understanding that training such an objective would typically not result into the construction of Theorem 3.", "The article \"Do GANs Learn the Distribution? Some Theory and Empirics\" considers the important problem of quantifying whether the distributions obtained from generative adversarial networks come close to the actual distribution of images. The authors argue that GANs in fact generate the distributions with fairly low support.\n\nThe proposed approach relies on so-called birthday paradox which allows to estimate the number of objects in the support by counting number of matching (or very similar) pairs in the generated sample. This test is expected to experimentally support the previous theoretical analysis by Arora et al. (2017). The further theoretical analysis is also performed showing that for encoder-decoder GAN architectures the distributions with low support can be very close to the optimum of the specific (BiGAN) objective.\n\nThe experimental part of the paper considers the CelebA and CIFAR-10 datasets. We definitely see many very similar images in fairly small sample generated. So, the general claim is supported. However, if you look closely at some pictures, you can see that they are very different though reported as similar. For example, some deer or truck pictures. That's why I would recommend to reevaluate the results visually, which may lead to some change in the number of near duplicates and consequently the final support estimates.\n\nTo sum up, I think that the general idea looks very natural and the results are supportive. 
On the theoretical side, the results seem fair (though I didn't check the proofs) and, being partly based on the previous results of Arora et al. (2017), clearly take a step forward.", "We've made a few minor revisions to the manuscript, mostly for clarity and brevity.", "It is important to note that Theorems 1 and 2 do *not* assume that the tested distribution is uniform. (The birthday paradox holds even if human birthdays are distributed in a highly nonuniform way.) This confusion possibly underlies the reviewer’s score.\n\nTheorem 2 clarifies that if one consistently sees collisions in batches, then the distribution has a major component that has *limited* support size but is almost *indistinguishable* from the full distribution given a small number of samples. (For example, it could assign very tiny probability to a lot of other images.) Thus the distribution *effectively* has a small support size, which is what one should care about when sampling from it. We will try other phrasings of that section to clarify this issue further. \n\nIt may help to point out (as proven in paper [1] below) that to correctly estimate the support size of a distribution with n modes, at least n / log n samples need to be seen by the human examiner. Since the support size is ~10^6 for some GANs studied here, examining n / log n samples is infeasible for a human, though conceivably some follow-up work could do this via a giant Mechanical Turk experiment. We will be sure to add these notes to the final version so other readers are not confused. \n(Possibly the reviewer is also alluding to the possibility that the CelebA dataset is a highly nonuniform distribution of faces. This is possible, but the constructors [2] tried hard to make it unbiased: it contains ten thousand identities, each of which has twenty images. Also, we report results on it because it was used in many GANs papers.)\n\nTo the best of our knowledge, our birthday paradox test, which is of course related to classical ideas in statistics, is more rigorous and quantitative than the past tests for mode collapse we are aware of.\n\nFinally, the reviewer appears to have missed the important theoretical contribution showing how encoder-decoder GANs may learn un-informative codes.\n\n[1] Gregory Valiant and Paul Valiant. Estimating the Unseen: An n/log(n)-sample Estimator for Entropy and Support Size, Shown Optimal via New CLTs. STOC 2011.\n[2] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep Learning Face Attributes in the Wild. ICCV 2015.\n", "Thank you for your positive and detailed comment! We’ll address your concerns point by point.\n\n“Conflates mode collapse with non-uniformity”, “coverage”: \nIt is important to clarify, though very likely the reviewer understood this, that Theorems 1 and 2 hold without the assumption of uniformity. (The birthday paradox holds even though human birthdays are not uniformly distributed.) That said, the reviewer is correct that our test does not test for coverage, and we will add a disclaimer to this effect. We note that testing coverage of n modes in general requires at least n/log n samples. (See our response to the 3rd reviewer.) \n\nIndeed, we are assuming that the CelebA dataset is reasonably well-balanced (it contains ten thousand identities, each of which has twenty images [2]), and therefore a GAN that produces a highly non-uniform distribution of faces represents some kind of failure mode. 
It is conceivable that CelebA is not well-constructed for the reasons mentioned by the reviewer, but it has been used in most previous GANs papers, so it was natural to report our findings on it. In the final version we’ll put a suitable disclaimer about this issue. \n\n“Practical implication of Theorem 3?”\nThe reviewer is correct that we have only shown the *existence* of a bad equilibrium, not proved that SGD or other algorithms *find* it. (Analyzing SGD’s behavior for deep learning is of course an open problem.) But note that some of the problems raised by Theorem 3 are observed in practice too; see, e.g., the empirical studies ([1], [2]), which suggest that BiGANs/ALI can learn un-informative codes. We’ll rewrite to make these issues clearer. \n\n[1] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching. NIPS 2017.\n[2] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. ICCV 2017.\n", "Thank you for the positive and careful review! We agree that a judgement call needs to be made when assessing whether two images are “essentially” the same. For the final version of the paper we will use a second human examiner and report collisions only if both examiners judge the images to be the same. It is correct that this may slightly affect the estimate of the support size, though we expect the conclusions not to change much. " ]
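For concreteness, here is a minimal, hypothetical sketch of the birthday paradox test described in the reviews above; it is not the authors' code. The names `sample_fn` and `is_duplicate` are assumed placeholders: `sample_fn(s)` draws s images from the generator under test, and `is_duplicate` stands in for the human near-duplicate judgement (a practical run would likely pre-filter candidate pairs automatically, e.g. by distance in pixel or feature space, before human inspection).

```python
def birthday_test(sample_fn, is_duplicate, batch_size, num_batches=50):
    """Fraction of generated batches containing a near-duplicate pair.

    By the birthday paradox, if batches of size s collide roughly half
    the time, the effective support size is on the order of s**2
    (Theorems 1 and 2 of the paper make this precise without assuming
    a uniform output distribution).
    """
    hits = 0
    for _ in range(num_batches):
        batch = sample_fn(batch_size)  # list of generated images
        has_collision = any(
            is_duplicate(batch[i], batch[j])
            for i in range(len(batch))
            for j in range(i + 1, len(batch))
        )
        hits += int(has_collision)
    return hits / num_batches


def estimate_support_size(sample_fn, is_duplicate,
                          sizes=(10, 20, 50, 100, 200, 400)):
    """Return s**2 for the smallest batch size s whose collision rate
    reaches ~1/2, or None if no tested size collides that often."""
    for s in sizes:
        if birthday_test(sample_fn, is_duplicate, s) >= 0.5:
            return s * s
    return None
```

Note that, as the reviews point out, this procedure measures diversity rather than coverage: a collision rate near 1/2 bounds the effective support size, but says nothing about whether that support spans all modes of the true distribution.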
[ 7, 6, 7, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1 ]
[ "iclr_2018_BJehNfW0-", "iclr_2018_BJehNfW0-", "iclr_2018_BJehNfW0-", "iclr_2018_BJehNfW0-", "rkhhruYgM", "B1jWee9eM", "B1g5pBTxz" ]