paper_id: string (19–21 chars)
paper_title: string (8–170 chars)
paper_abstract: string (8–5.01k chars)
paper_acceptance: string (18 classes)
meta_review: string (29–10k chars)
label: string (3 classes)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
iclr_2018_BydjJte0-
Towards Reverse-Engineering Black-Box Neural Networks
Many deployed learned models are black boxes: given an input, they return an output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black box model. On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white box and black box models.
accepted-poster-papers
Novel way of analyzing neural networks to predict NN attributes such as architecture, training method, batch size, etc. The method works surprisingly well on MNIST and ImageNet.
test
[ "HJ3gesPSf", "B1h4qp9xz", "rJGK3urgz", "Hyqnu-clf", "BJpZGOoQM", "Sk7ej-KQM", "SyGyjWKmz", "HkfR9ZtmM", "r1GocWtmf", "H1W59-F7z", "rye_qZtXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "Thanks to the authors for the extensive response with further results and analysis. Table 7 was very helpful in understanding how similar/different various architectures are, and the expanded table 3 was helpful in evaluating kennen-o in the extrapolation setting. I didn't find the results in section 4.3 to be convincing or to provide further insight into why it's possible at all to predict hyperparameters with better than chance accuracy, except for perhaps the act hyperparameter. \n\nI still find it mysterious that, even though the relationship between the hyperparameters and the query input-output pairs is obviously highly nonlinear (an iterative training process to produce a nonlinear function, followed by applying that nonlinear function to the query input to produce the corresponding output), that relationship can be inverted to the accuracy level shown in the paper. Nevertheless I think this work can benefit from broader attention from the community, which may motivate others to try out the proposed approaches on more datasets to get more insights. So I'm changing my rating to 7 despite my skepticism. ", "The basic idea is to train a neural network to predict various hyperparameters of a classifier from input-output pairs for that classifier (kennen-o approach). It is surprising that some of these hyperparameters can even be predicted with more than chance accuracy. As a simple example, it's possible that there are values of batch size for which the classifiers may become indistinguishable, yet Table 2 shows that batch size can be predicted with much higher accuracy than chance. It would be good to provide insights into under what conditions and why hyperparameters can be predicted accurately. That would make the results much more interesting, and may even turn out to be useful for other problems, such as hyperparameter optimization.\n\nThe selection of the queries for kennen-o is not explained. What is the procedure for selecting the queries? How sensitive is the performance of kennen-o to the choice of the queries? One would expect that there is significant sensitivity, in which case it may even make sense to consider learning to select a sequence of queries to maximize accuracy.\n\nIn table 3, it would be useful to show the results for kennen-o as well, because Split-E seems to be the more realistic problem setting and kennen-o seems to be a more realistic attack than kennen-i or kennen-io.\n\nIn the ImageNet classifier family prediction, how different are the various families from each other? Without going through all the references, it is difficult to get a sense of the difficulty of the prediction task for a non-computer-vision reader.\n\nOverall the results seem interesting, but without more insights it's difficult to judge how generally useful they are.", "\n-----UPDATE------\n\nHaving read the responses from the authors, and the other reviews, I am happy with my rating and maintain that this paper should be accepted.\n\n----------------------\n\n\n\nIn this paper, the authors trains a large number of MNIST classifier networks with differing attributes (batch-size, activation function, no. layers etc.) and then utilises the inputs and outputs of these networks to predict said attributes successfully. They then show that they are able to use the methods developed to predict the family of Imagenet-trained networks and use this information to improve adversarial attack.\n\nI enjoyed reading this paper. 
It is a very interesting set up, and a novel idea.\n\nA few comments:\n\nThe paper is easy to read, and largely written well. The article is missing from the nouns quite often though so this is something that should be amended. There are a few spelling slip ups (\"to a certain extend\" --> \"to a certain extent\", \"as will see\" --> \"as we will see\")\n\nIt appears that the output for kennen-o is a discrete probability vector for each attribute, where each entry corresponds to a possibility (for example, for \"batch-size\" it is a length 3 vector where the first entry corresponds to 64, the second 128, and the third 256). What happens if you instead treat it as a regression task, would it then be able to hint at intermediates (a batch size of 96) or extremes (say, 512).\n\nA flaw of this paper is that kennen-i and io appear to require gradients from the network being probed (you do mention this in passing), which realistically you would never have access to. (Please do correct me if I have misunderstood this)\n\nIt would be helpful if Section 4 had a paragraph as to your thoughts regarding why certain attributes are easier/harder to predict. Also, the caption for Table 2 could contain more information regarding the network outputs.\n\nYou have jumped from predicting 12 attributes on MNIST to 1 attribute on Imagenet. It could be beneficial to do an intermediate experiment (a handful of attributes on a middling task).\n\nI think this paper should be accepted as it is interesting and novel.\n\nPros\n------\n- Interesting idea\n- Reads well\n- Fairly good experimental results\n\nCons\n------\n- kennen-i seems like it couldn't be realistically deployed\n- lack of an intermediate difficulty task\n", "The paper attempts to study model meta parameter inference e.g. model architecture, optimization, etc using a supervised learning approach. They take three approaches one whereby the target models are evaluated on a fixed set of inputs, one where the access to the gradients is assumed and using that an input is crafted that can be used to infer the target quantities and one where both approaches are combined. The authors also show that these inferred quantities can be used to generate more effective attacks against the targets.\n\nThe paper is generally well written and most details for reproducibility are seem enough. I also find the question interesting and the fact that it works on this relatively broad set of meta parameters and under a rigorous train/test split intriguing. It is of course not entirely surprising that the system can be trained but that there is some form of generalization happening. \n\nAside that I think most system in practical use will be much more different than any a priori enumeration/brute force search for model parameters. I suspect in most cases practical systems will be adapted with many subsequent levels of preprocessing, ensembling, non-standard data and a number of optimization and architectural tricks that are developer dependent. It is really hard to say what a supervised learning meta-model approach such as the one presented in this work have to say about that case. \n\nI have found it hard to understand what table 3 in section 4.2 actually means. It seems to say for instance that a model is trained on 2 and 3 layers then queried with 4 and the accuracy only slightly drops. Accuracy of what ? Is it the other attributes ? Is it somehow that attribute ? if so how can that possibly ? 
\n\nMy main main concern is extrapolation out of the training set which is particularly important here. I don't find enough evidence in 4.2 for that point. One experiment that i would find compelling is to train for instance a meta model on S,V,B,R but not D on imagenet, predict all the attributes except architecture and see how that changes when D is added. If these are better than random and the perturbations are more successful it would be a much more compelling story. ", "The paper has been updated during the rebuttal period, following suggestions from the reviewers. We only briefly list the updates here; for more detailed information, please see the \"Response k/2\" (k=1 or 2) comments to each reviewer.\n\n1. Section 3.2 (metamodel method description)\nUpdated description for kennen-i and kennen-io -- they do not require gradients from black box. (AR1,2)\n\n2. Table 2 (summary table for metamodel results on MNIST classifiers)\nDefinition of \"prob\", \"ranking\", and \"1 label\" added. (AR2)\n\n3. Section 4 (metamodel results on MNIST classifiers)\nMore detailed rationales for the results. (AR2)\n\n4. Section 4.2 (extrapolation experiments for metamodels)\nDetailed description of experimental setup and evaluation metric. (AR1)\n\n5. Table 3 (extrapolation experiments for metamodels)\nkennen-o results added. (AR3)\n\n6. Section 4.3 & D, figures 9-12 (more insights on why and how metamodel works)\nNewly added section & figures containing t-SNE visualisation of neural net outputs and confusion matrix analysis for metamodel outputs. (AR3)\n\n7. Section C, figure 6 (finding optimal set of queries for kennen-o)\nNew section containing experiments and discussion on finding the optimal set of queries for kennen-o. (AR3)\n\n8. Table 7 (architecture profiles for 19 ImageNet classifiers used in Section 5)\nNew table showing intra-family diversity and inter-family similarity of considered ImageNet classifiers. (AR3)\n\n9. Entire paper - spelling & grammar errors. (AR2)", "<<“More insights are needed” (AR3)>>\n We agree that more clearly determining patterns in the neural network outputs that correlate with the model hyperparameters will shed more light on the interpretability and further applicability of the metamodels -- e.g. “hyperparameter optimization” (AR3). However, we would like to point out that such an analysis deserves a separate paper on its own. The main point of our paper is the *existence* of correlation between model hyperparameters and the outputs and *novel methods* for amplifying and scraping such correlation.\n\nTo give a glimpse of possible patterns in the model outputs and inner workings of the metamodels, we have updated the paper with two more analyses - one studying patterns in the model outputs (t-SNE visualisation) and the other studying the confusion patterns in the metamodel predictions. See the new section 4.3 for the complete results and discussion. Summarising key observations, t-SNE visualisation of the model outputs show the existence of clusters for some hyperparameters that could explain the good prediction ability of metamodels. In the confusion matrix analysis, we show that the metamodel consistently confuses between similar attributes more often than between dissimilar ones (e.g. confuses more between batch size 64 and 128 than between 64 and 256), for most attribute types. 
This is a strong indication that the metamodel indeed learns semantically meaningful features rather than mere artifacts.\n\n<<On the importance of choice of inputs for kennen-o (AR3)>>\nFor MNIST and ImageNet, the queries were selected (uniform) randomly from the respective validation sets. We have updated the paper with (1) analysis on the importance of the choice of queries for kennen-o and (2) conceptual and empirical comparison between kennen-io and the suggested “optimised choice of queries from validation images” in Appendix section C and figure 6. We provide a brief summary of the experiments and discussion here.\n\nThe random choice of queries within the validation set turns out to be not critical towards the final performance -- kennen-o trained over 100 independent samples of single queries have the mean average accuracy 42.3%, with standard deviation only 1.2 pp. For more number of queries (10 and 100), the standard deviation of performance among different samples is further reduced (0.7 and 0.5 pp).\n\nAs AR3 has suggested, one could also search for information-maximising set of queries -- this is an interesting idea left as future work. Instead of solving this potentially difficult combinatorial problem, we have proposed kennen-i/io metamodels that search for the query inputs from the *entire input space*. Empirically, this approach envelops the performance of the 100 kennen-o performances trained with 100 different random query sets at different numbers of queries (figure 6). However, as kennen-i/io submit unnatural input to the model, they could be more detectable. We thus observe a trade-off between performance and detectability of the attack; the proposed idea of maximising query within the natural image domain could potentially relax the trade-off, again an interesting future research direction.\n\n<<Lack of intermediate size experiment (AR2)>>\nWe agree that this would be nice to have, but we chose to allocate resources (time and page limit) on (1) extensive analysis on small (therefore efficient) dataset (MNIST) and (2) small number of representative experiments on a more realistic dataset (ImageNet). \n\n<<Solving as regression task (AR2)>>\nIndeed some attributes (e.g. batch size or number of layers) are endowed with natural structures (e.g. batch sizes ‘64’ and ‘128’ are closer than ‘64’ and ‘256’), and sometimes solving the problem as a regression task is more natural. This is an interesting future work. This paper is focused on showing that such attributes can be detected at all, rather than maximising the performance.\n\nFinally thank you for the suggestions for improving the paper clarity:\nAugmenting table 3 with kennen-o (AR3)\nExplaining ImageNet arch (AR3)\nCaption for table 2 (AR2)\nRationale for the results (AR2)\nSpelling & grammar (AR2)\nWe have updated the paper according to your suggestions.\n", "<<“More insights are needed” (AR3)>>\n We agree that more clearly determining patterns in the neural network outputs that correlate with the model hyperparameters will shed more light on the interpretability and further applicability of the metamodels -- e.g. “hyperparameter optimization” (AR3). However, we would like to point out that such an analysis deserves a separate paper on its own. 
The main point of our paper is the *existence* of correlation between model hyperparameters and the outputs and *novel methods* for amplifying and scraping such correlation.\n\nTo give a glimpse of possible patterns in the model outputs and inner workings of the metamodels, we have updated the paper with two more analyses - one studying patterns in the model outputs (t-SNE visualisation) and the other studying the confusion patterns in the metamodel predictions. See the new section 4.3 for the complete results and discussion. Summarising key observations, t-SNE visualisation of the model outputs show the existence of clusters for some hyperparameters that could explain the good prediction ability of metamodels. In the confusion matrix analysis, we show that the metamodel consistently confuses between similar attributes more often than between dissimilar ones (e.g. confuses more between batch size 64 and 128 than between 64 and 256), for most attribute types. This is a strong indication that the metamodel indeed learns semantically meaningful features rather than mere artifacts.\n\n<<On the importance of choice of inputs for kennen-o (AR3)>>\nFor MNIST and ImageNet, the queries were selected (uniform) randomly from the respective validation sets. We have updated the paper with (1) analysis on the importance of the choice of queries for kennen-o and (2) conceptual and empirical comparison between kennen-io and the suggested “optimised choice of queries from validation images” in Appendix section C and figure 6. We provide a brief summary of the experiments and discussion here.\n\nThe random choice of queries within the validation set turns out to be not critical towards the final performance -- kennen-o trained over 100 independent samples of single queries have the mean average accuracy 42.3%, with standard deviation only 1.2 pp. For more number of queries (10 and 100), the standard deviation of performance among different samples is further reduced (0.7 and 0.5 pp).\n\nAs AR3 has suggested, one could also search for information-maximising set of queries -- this is an interesting idea left as future work. Instead of solving this potentially difficult combinatorial problem, we have proposed kennen-i/io metamodels that search for the query inputs from the *entire input space*. Empirically, this approach envelops the performance of the 100 kennen-o performances trained with 100 different random query sets at different numbers of queries (figure 6). However, as kennen-i/io submit unnatural input to the model, they could be more detectable. We thus observe a trade-off between performance and detectability of the attack; the proposed idea of maximising query within the natural image domain could potentially relax the trade-off, again an interesting future research direction.\n\n<<Lack of intermediate size experiment (AR2)>>\nWe agree that this would be nice to have, but we chose to allocate resources (time and page limit) on (1) extensive analysis on small (therefore efficient) dataset (MNIST) and (2) small number of representative experiments on a more realistic dataset (ImageNet). \n\n<<Solving as regression task (AR2)>>\nIndeed some attributes (e.g. batch size or number of layers) are endowed with natural structures (e.g. batch sizes ‘64’ and ‘128’ are closer than ‘64’ and ‘256’), and sometimes solving the problem as a regression task is more natural. This is an interesting future work. 
This paper is focused on showing that such attributes can be detected at all, rather than maximising the performance.\n\nFinally thank you for the suggestions for improving the paper clarity:\nAugmenting table 3 with kennen-o (AR3)\nExplaining ImageNet arch (AR3)\nCaption for table 2 (AR2)\nRationale for the results (AR2)\nSpelling & grammar (AR2)\nWe have updated the paper according to your suggestions.\n", "<<“More insights are needed” (AR3)>>\n We agree that more clearly determining patterns in the neural network outputs that correlate with the model hyperparameters will shed more light on the interpretability and further applicability of the metamodels -- e.g. “hyperparameter optimization” (AR3). However, we would like to point out that such an analysis deserves a separate paper on its own. The main point of our paper is the *existence* of correlation between model hyperparameters and the outputs and *novel methods* for amplifying and scraping such correlation.\n\nTo give a glimpse of possible patterns in the model outputs and inner workings of the metamodels, we have updated the paper with two more analyses - one studying patterns in the model outputs (t-SNE visualisation) and the other studying the confusion patterns in the metamodel predictions. See the new section 4.3 for the complete results and discussion. Summarising key observations, t-SNE visualisation of the model outputs show the existence of clusters for some hyperparameters that could explain the good prediction ability of metamodels. In the confusion matrix analysis, we show that the metamodel consistently confuses between similar attributes more often than between dissimilar ones (e.g. confuses more between batch size 64 and 128 than between 64 and 256), for most attribute types. This is a strong indication that the metamodel indeed learns semantically meaningful features rather than mere artifacts.\n\n<<On the importance of choice of inputs for kennen-o (AR3)>>\nFor MNIST and ImageNet, the queries were selected (uniform) randomly from the respective validation sets. We have updated the paper with (1) analysis on the importance of the choice of queries for kennen-o and (2) conceptual and empirical comparison between kennen-io and the suggested “optimised choice of queries from validation images” in Appendix section C and figure 6. We provide a brief summary of the experiments and discussion here.\n\nThe random choice of queries within the validation set turns out to be not critical towards the final performance -- kennen-o trained over 100 independent samples of single queries have the mean average accuracy 42.3%, with standard deviation only 1.2 pp. For more number of queries (10 and 100), the standard deviation of performance among different samples is further reduced (0.7 and 0.5 pp).\n\nAs AR3 has suggested, one could also search for information-maximising set of queries -- this is an interesting idea left as future work. Instead of solving this potentially difficult combinatorial problem, we have proposed kennen-i/io metamodels that search for the query inputs from the *entire input space*. Empirically, this approach envelops the performance of the 100 kennen-o performances trained with 100 different random query sets at different numbers of queries (figure 6). However, as kennen-i/io submit unnatural input to the model, they could be more detectable. 
We thus observe a trade-off between performance and detectability of the attack; the proposed idea of maximising query within the natural image domain could potentially relax the trade-off, again an interesting future research direction.\n\n<<Lack of intermediate size experiment (AR2)>>\nWe agree that this would be nice to have, but we chose to allocate resources (time and page limit) on (1) extensive analysis on small (therefore efficient) dataset (MNIST) and (2) small number of representative experiments on a more realistic dataset (ImageNet). \n\n<<Solving as regression task (AR2)>>\nIndeed some attributes (e.g. batch size or number of layers) are endowed with natural structures (e.g. batch sizes ‘64’ and ‘128’ are closer than ‘64’ and ‘256’), and sometimes solving the problem as a regression task is more natural. This is an interesting future work. This paper is focused on showing that such attributes can be detected at all, rather than maximising the performance.\n\nFinally thank you for the suggestions for improving the paper clarity:\nAugmenting table 3 with kennen-o (AR3)\nExplaining ImageNet arch (AR3)\nCaption for table 2 (AR2)\nRationale for the results (AR2)\nSpelling & grammar (AR2)\nWe have updated the paper according to your suggestions.\n", "AR{n} = AnonReviewer{n}\n\nWe thank all the reviewers for their recognition of the task of whitening black box to be “interesting” and “novel”, and finding the experimental results “interesting” and even “surprising”. In particular, AR2 has commented that “[the task] is a very interesting set up, and a novel idea” and that the paper “should be accepted”. Yet, we find some misunderstandings from the reviewers that could potentially have led to less recognition of the importance & novelty of our work -- we have clarified them here and have updated the paper accordingly. The update is substantial -- 6 more figures & tables, 10s of paragraphs added & updated.\n\nStressing our contribution again, our results have crucial implications to privacy and security of deep neural network models -- the paper opens an avenue for enhancing the effectiveness of attacks on black boxes. The paper investigates for the first time an important observation that a “relatively broad set of meta parameters” (AR1) can be reliably predicted only from black-box access. We also empirically test our methods in challenging, realistic conditions: What if outputs are single labels? What if there is a big generalisation gap between training models and the test black box? (We will answer AR1’s questions regarding this point.) In addition, kennen-i/io are novel methods that not only learns to interpret the black-box output, but also actively searches for effective query inputs. In particular, contrary to the misunderstanding of AR1 and AR2, kennen-i/io still only requires black-box access to the model at test time. (We will describe in greater detail later.)\n\nWe will now answer issues raised by the reviewers one by one. \n\n<<Generalisability of metamodel beyond the training models (AR1)>>\nWe also find generalisation problem absolutely important, since in practice DNNs can be trained with diverse “preprocessing, ensembling, [... and] architectural tricks” (AR1). The reviewer has suggested an experiment where the metamodel is trained to predict, say, optimization hyperparameters for a black box model with architecture “D”, when it is only trained on architectures “S,V,B,R” (i.e. generalisation across architecture). 
\n\nSection 4.2 and table 3 are exactly doing this analysis. In the updated paper, we have added quite some more details of experimental procedure and evaluation details to make them more understandable (experiments themselves are unchanged). We have defined the term “splitting attribute” that denotes the attribute that separates the training and testing models (e.g. architecture family for the example given by AR1). For evaluation, we measure the performance only over non-splitting attributes (e.g. optimization hyperparameters in AR1’s example).\n\nWe have shown in the paper that the metamodels do indeed generalise across domain gaps of 1-2 attributes. For example, row 3 of table 3 shows that even when metamodels are trained only on models with #conv<=3 and #fc<=3 (shallower), they can still predict hyperparameters of models with #conv=4 and #fc=4 (deeper) at 80.7% level of the random-split accuracy. The set of experiments in table 3 gives strong evidence that the metamodels do generalise, and this certainly makes the story “more compelling” (AR1). We have also updated table 3 with kennen-o results, following AR3’s suggestion.\n\nAR1 is also wondering about the generalisability of adversarial image perturbation (AIP) attacks across attributes. Our AIP results in section 5.3 are already exhibiting some form of generalisation - we do “leave-one-out cross validation” (section 5.3) evaluation within each family for every AIP evaluation. As the newly added table 7 shows, there exists high intra-family diversity of models in terms of #parameters and #layers. High fooling rates across such a diversity, again, gives evidence that AIPs also do generalise.\n\n<<kennen-i/io requires gradient from the black box (AR2,AR1)>>\nNo. All metamodels, including kennen-i/io, *only queries* the test black-box. The requirement for model gradient arises during the training time for kennen-i/io (which is legitimate). We were stressing the fact that kennen-o does not require gradients even from the training models. \n\nBriefly re-describing the procedure of kennen-i/io, at training time they treat the query input as a set of learnable parameters (as for MLP model parameters for kennen-o), and then at test time they feed the *learned* query input to the test black-box model, and read off the outputs.\n\nKennen-i/io are therefore novel and distinctive methods - they treat *inputs* to a network as a generalisable model parameter. It is indeed quite surprising that the learned input generalises very well to unseen models -- analysing this intriguing phenomenon would be a good future research direction. We have updated the method description in section 3.2 to make our novelty much clearer. \n", "AR{n} = AnonReviewer{n}\n\nWe thank all the reviewers for their recognition of the task of whitening black box to be “interesting” and “novel”, and finding the experimental results “interesting” and even “surprising”. In particular, AR2 has commented that “[the task] is a very interesting set up, and a novel idea” and that the paper “should be accepted”. Yet, we find some misunderstandings from the reviewers that could potentially have led to less recognition of the importance & novelty of our work -- we have clarified them here and have updated the paper accordingly. 
The update is substantial -- 6 more figures & tables, 10s of paragraphs added & updated.\n\nStressing our contribution again, our results have crucial implications to privacy and security of deep neural network models -- the paper opens an avenue for enhancing the effectiveness of attacks on black boxes. The paper investigates for the first time an important observation that a “relatively broad set of meta parameters” (AR1) can be reliably predicted only from black-box access. We also empirically test our methods in challenging, realistic conditions: What if outputs are single labels? What if there is a big generalisation gap between training models and the test black box? (We will answer AR1’s questions regarding this point.) In addition, kennen-i/io are novel methods that not only learns to interpret the black-box output, but also actively searches for effective query inputs. In particular, contrary to the misunderstanding of AR1 and AR2, kennen-i/io still only requires black-box access to the model at test time. (We will describe in greater detail later.)\n\nWe will now answer issues raised by the reviewers one by one. \n\n<<Generalisability of metamodel beyond the training models (AR1)>>\nWe also find generalisation problem absolutely important, since in practice DNNs can be trained with diverse “preprocessing, ensembling, [... and] architectural tricks” (AR1). The reviewer has suggested an experiment where the metamodel is trained to predict, say, optimization hyperparameters for a black box model with architecture “D”, when it is only trained on architectures “S,V,B,R” (i.e. generalisation across architecture). \n\nSection 4.2 and table 3 are exactly doing this analysis. In the updated paper, we have added quite some more details of experimental procedure and evaluation details to make them more understandable (experiments themselves are unchanged). We have defined the term “splitting attribute” that denotes the attribute that separates the training and testing models (e.g. architecture family for the example given by AR1). For evaluation, we measure the performance only over non-splitting attributes (e.g. optimization hyperparameters in AR1’s example).\n\nWe have shown in the paper that the metamodels do indeed generalise across domain gaps of 1-2 attributes. For example, row 3 of table 3 shows that even when metamodels are trained only on models with #conv<=3 and #fc<=3 (shallower), they can still predict hyperparameters of models with #conv=4 and #fc=4 (deeper) at 80.7% level of the random-split accuracy. The set of experiments in table 3 gives strong evidence that the metamodels do generalise, and this certainly makes the story “more compelling” (AR1). We have also updated table 3 with kennen-o results, following AR3’s suggestion.\n\nAR1 is also wondering about the generalisability of adversarial image perturbation (AIP) attacks across attributes. Our AIP results in section 5.3 are already exhibiting some form of generalisation - we do “leave-one-out cross validation” (section 5.3) evaluation within each family for every AIP evaluation. As the newly added table 7 shows, there exists high intra-family diversity of models in terms of #parameters and #layers. High fooling rates across such a diversity, again, gives evidence that AIPs also do generalise.\n\n<<kennen-i/io requires gradient from the black box (AR2,AR1)>>\nNo. All metamodels, including kennen-i/io, *only queries* the test black-box. 
The requirement for model gradient arises during the training time for kennen-i/io (which is legitimate). We were stressing the fact that kennen-o does not require gradients even from the training models. \n\nBriefly re-describing the procedure of kennen-i/io, at training time they treat the query input as a set of learnable parameters (as for MLP model parameters for kennen-o), and then at test time they feed the *learned* query input to the test black-box model, and read off the outputs.\n\nKennen-i/io are therefore novel and distinctive methods - they treat *inputs* to a network as a generalisable model parameter. It is indeed quite surprising that the learned input generalises very well to unseen models -- analysing this intriguing phenomenon would be a good future research direction. We have updated the method description in section 3.2 to make our novelty much clearer. \n", "AR{n} = AnonReviewer{n}\n\nWe thank all the reviewers for their recognition of the task of whitening black box to be “interesting” and “novel”, and finding the experimental results “interesting” and even “surprising”. In particular, AR2 has commented that “[the task] is a very interesting set up, and a novel idea” and that the paper “should be accepted”. Yet, we find some misunderstandings from the reviewers that could potentially have led to less recognition of the importance & novelty of our work -- we have clarified them here and have updated the paper accordingly. The update is substantial -- 6 more figures & tables, 10s of paragraphs added & updated.\n\nStressing our contribution again, our results have crucial implications to privacy and security of deep neural network models -- the paper opens an avenue for enhancing the effectiveness of attacks on black boxes. The paper investigates for the first time an important observation that a “relatively broad set of meta parameters” (AR1) can be reliably predicted only from black-box access. We also empirically test our methods in challenging, realistic conditions: What if outputs are single labels? What if there is a big generalisation gap between training models and the test black box? (We will answer AR1’s questions regarding this point.) In addition, kennen-i/io are novel methods that not only learns to interpret the black-box output, but also actively searches for effective query inputs. In particular, contrary to the misunderstanding of AR1 and AR2, kennen-i/io still only requires black-box access to the model at test time. (We will describe in greater detail later.)\n\nWe will now answer issues raised by the reviewers one by one. \n\n<<Generalisability of metamodel beyond the training models (AR1)>>\nWe also find generalisation problem absolutely important, since in practice DNNs can be trained with diverse “preprocessing, ensembling, [... and] architectural tricks” (AR1). The reviewer has suggested an experiment where the metamodel is trained to predict, say, optimization hyperparameters for a black box model with architecture “D”, when it is only trained on architectures “S,V,B,R” (i.e. generalisation across architecture). \n\nSection 4.2 and table 3 are exactly doing this analysis. In the updated paper, we have added quite some more details of experimental procedure and evaluation details to make them more understandable (experiments themselves are unchanged). We have defined the term “splitting attribute” that denotes the attribute that separates the training and testing models (e.g. architecture family for the example given by AR1). 
For evaluation, we measure the performance only over non-splitting attributes (e.g. optimization hyperparameters in AR1’s example).\n\nWe have shown in the paper that the metamodels do indeed generalise across domain gaps of 1-2 attributes. For example, row 3 of table 3 shows that even when metamodels are trained only on models with #conv<=3 and #fc<=3 (shallower), they can still predict hyperparameters of models with #conv=4 and #fc=4 (deeper) at 80.7% level of the random-split accuracy. The set of experiments in table 3 gives strong evidence that the metamodels do generalise, and this certainly makes the story “more compelling” (AR1). We have also updated table 3 with kennen-o results, following AR3’s suggestion.\n\nAR1 is also wondering about the generalisability of adversarial image perturbation (AIP) attacks across attributes. Our AIP results in section 5.3 are already exhibiting some form of generalisation - we do “leave-one-out cross validation” (section 5.3) evaluation within each family for every AIP evaluation. As the newly added table 7 shows, there exists high intra-family diversity of models in terms of #parameters and #layers. High fooling rates across such a diversity, again, gives evidence that AIPs also do generalise.\n\n<<kennen-i/io requires gradient from the black box (AR2,AR1)>>\nNo. All metamodels, including kennen-i/io, *only queries* the test black-box. The requirement for model gradient arises during the training time for kennen-i/io (which is legitimate). We were stressing the fact that kennen-o does not require gradients even from the training models. \n\nBriefly re-describing the procedure of kennen-i/io, at training time they treat the query input as a set of learnable parameters (as for MLP model parameters for kennen-o), and then at test time they feed the *learned* query input to the test black-box model, and read off the outputs.\n\nKennen-i/io are therefore novel and distinctive methods - they treat *inputs* to a network as a generalisable model parameter. It is indeed quite surprising that the learned input generalises very well to unseen models -- analysing this intriguing phenomenon would be a good future research direction. We have updated the method description in section 3.2 to make our novelty much clearer. \n" ]
[ -1, 7, 7, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "HkfR9ZtmM", "iclr_2018_BydjJte0-", "iclr_2018_BydjJte0-", "iclr_2018_BydjJte0-", "iclr_2018_BydjJte0-", "r1GocWtmf", "H1W59-F7z", "rye_qZtXG", "rJGK3urgz", "Hyqnu-clf", "B1h4qp9xz" ]
iclr_2018_B1J_rgWRW
Understanding Deep Neural Networks with Rectified Linear Units
In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to {\em global optimality} with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) for approximating a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly parametrized families of ``hard'' functions, contrary to countable, discrete families known in the literature. An example consequence of our gap theorems is the following: for every natural number $k$ there exists a function representable by a ReLU DNN with $k^2$ hidden layers and total size $k^3$, such that any ReLU DNN with at most $k$ hidden layers will require at least $\frac{1}{2}k^{k+1}-1$ total nodes. Finally, for the family of $\mathbb{R}^n \to \mathbb{R}$ DNNs with ReLU activations, we show a new lower bound on the number of affine pieces, which is larger than previous constructions in certain regimes of the network architecture and most distinctively our lower bound is demonstrated by an explicit construction of a \emph{smoothly parameterized} family of functions attaining this scaling. Our construction utilizes the theory of zonotopes from polyhedral theory.
accepted-poster-papers
Theoretical analysis and understanding of DNNs is a crucial area for the ML community. This paper studies characteristics of ReLU DNNs and makes several important contributions.
train
[ "rJUiN3DeM", "BkQ3IWcxM", "Sy66Z9sgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents several theoretical results regarding the expressiveness and learnability of ReLU-activated deep neural networks. I summarize the main results as below:\n\n(1) Any piece-wise linear function can be represented by a ReLU-acteivated DNN. Any smooth function can be approximated by such networks.\n\n(2) The expressiveness of 3-layer DNN is stronger than any 2-layer DNN.\n\n(3) Using a polynomial number of neurons, the ReLU-acteivated DNN can represent a piece-wise linear function with exponentially many pieces\n\n(4) The ReLU-activated DNN can be learnt to global optimum with an exponential-time algorithm.\n\nAmong these results (1), (2), (4) are sort of known in the literature. This paper extends the existing results in some subtle ways. For (1), the authors show that the DNN has a tighter bound on the depth. For (2), the \"hard\" functions has a better parameterization, and the gap between 3-layer and 2-layer is proved bigger. For (4), although the algorithm is exponential-time, it guarantees to compute the global optimum.\n\nThe stronger results of (1), (2), (4) all rely on the specific piece-wise linear nature of ReLU. Other than that, I don't get much more insight from the theoretical result. When the input dimension is n, the representability result of (1) fails to show that a polynomial number of neurons is sufficient. Perhaps an exponential number of neurons is necessary in the worst case, but it will be more interesting if the authors show that under certain conditions a polynomial-size network is good enough.\n\nResult (3) is more interesting as it is a new result. The authors present a constructive proof to show that ReLU-activated DNN can represent many linear pieces. However, the construction seems artificial and these functions don't seem to be visually very complex.\n\nOverall, this is an incremental work in the direction of studying the representation power of neural networks. The results might be of theoretical interest, but I doubt if a pragmatic ReLU network user will learn anything by reading this paper.", "The paper presents a series of definitions and results elucidating details about the functions representable by ReLU networks, their parametrisation, and gaps between deep and shallower nets. \n\nThe paper is easy to read, although it does not seem to have a main focus (exponential gaps vs. optimisation vs. universal approximation). The paper makes a nice contribution to the details of deep neural networks with ReLUs, although I find the contributed results slightly overstated. The 1d results are not difficult to derive from previous results. The advertised new results on the asymptotic behaviour assume a first layer that dominates the size of the network. The optimisation method appears close to brute force and is limited to 2 layers. \n\nTheorem 3.1 appears to be easily deduced from the results from Montufar, Pascanu, Cho, Bengio, 2014. For 1d inputs, each layer will multiply the number of regions at most by the number of units in the layer, leading to the condition w’ \\geq w^{k/k’}. Theorem 3.2 is simply giving a parametrization of the functions, removing symmetries of the units in the layers. \n\nIn the list at the top of page 5. Note that, the function classes might be characterized in terms of countable properties, such as the number of linear regions as discussed in MPCB, but still they build a continuum of functions. Similarly, in page 5 ``Moreover, for fixed n,k,s, our functions are smoothly parameterized''. This should not be a surprise. 
\n\nIn the last paragraph of Section 3 ``m = w^k-1'' This is a very big first layer. This also seems to subsume the first condition, s\\geq w^k-1 +w(k-1) for the network discussed in Theorem 3.9. In the last paragraph of Section 3 ``To the best of our knowledge''. In the construction presented here, the network’s size is essentially in the layer of size m. Under such conditions, Corollary 6 of MPCB also reads as s^n. Here it is irrelevant whether one artificially increases the depth of the network by additional, very narrow, layers, which do not contribute to the asymptotic number of units. \n\nThe function class Zonotope is a composition of two parts. It would be interesting to consider also a single construction, instead of the composition of two constructions. \n\nTheorem 3.9 (ii) it would be nice to have a construction where the size becomes 2m + wk when k’=k. \n\nSection 4, while interesting, appears to be somewhat disconnected from the rest of the paper. \n\nIn Theorem 2.3. explain why the two layer case is limited to n=1. \n\nAt some point in the first 4 pages it would be good to explain what is meant by ``hard’’ functions (e.g. functions that are hard to represent, as opposed to step functions, etc.) \n", "The paper presents an analysis and characterization of ReLU networks (with a linear final layer) via the set of functions these networks can model, especially focusing on the set of “hard” functions that are not easily representable by shallower networks. It makes several important contributions, including extending the previously published bounds by Telgarsky et al. to tighter bounds for the special case of ReLU DNNs, giving a construction for a family of hard functions whose affine pieces scale exponentially with the dimensionality of the inputs, and giving a procedure for searching for globally optimal solution of a 1-hidden layer ReLU DNN with linear output layer and convex loss. I think these contributions warrant publishing the paper at ICLR 2018. The paper is also well written, a bit dense in places, but overall well organized and easy to follow. \n\nA key limitation of the paper in my opinion is that typically DNNs do not contain a linear final layer. It will be valuable to note what, if any, of the representation analysis and global convergence results carry over to networks with non-linear (Softmax, e.g.) final layer. I also think that the global convergence algorithm is practically unfeasible for all but trivial use cases due to terms like D^nw, would like hearing authors’ comments in case I’m missing some simplification.\n\nOne minor suggestion for improving readability is to explicitly state, whenever applicable, that functions under consideration are PWL. For example, adding PWL to Theorems and Corollaries in Section 3.1 will help. Similarly would be good to state, wherever applicable, the DNN being discussed is a ReLU DNN." ]
[ 6, 6, 7 ]
[ 4, 5, 4 ]
[ "iclr_2018_B1J_rgWRW", "iclr_2018_B1J_rgWRW", "iclr_2018_B1J_rgWRW" ]
iclr_2018_rytNfI1AZ
Training wide residual networks for deployment using a single bit for each weight
For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed. Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight. Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization. For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively. We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively. For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights. For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters. This applies to both full precision and 1-bit-per-weight networks. Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100. For full training code and trained models in MATLAB, Keras and PyTorch see https://github.com/McDonnell-Lab/1-bit-per-weight/ .
accepted-poster-papers
The paper presents a way of training a 1-bit wide ResNet to reduce the model footprint while maintaining good performance. The revisions added more comparisons and discussions, which make the paper much better. Overall, the committee feels this work will bring value to the conference.
train
[ "ByDm13EBG", "SJ2dCsEHz", "SkaoIrVHz", "HkvE8wI4G", "BJyxkbFxz", "SkGtH2Kxf", "HJ0pVRqxM", "rymXcljQz", "rkAgKpHQG", "rkzl-zvXG", "ryrbVpBmG" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Good suggestion - we will do that at the next opportunity for revision.", "Ah, that is an impressive speedup! Thanks! (I would suggest citing this in the paper since it serves nicely at making the motivation of using 1bit weights obvious. But I leave that decision up to you).", "Regarding speedup on GPUs: we did all our work using the standard approach of using 32-bit GPU implementations to simulate the 1-bit case, in which case there's no speedup. The reason is that custom GPU code needs to be written, presumably in cuda; we didn't need to do this to conduct our study.\n\nHowever, we found a paper submitted to this ICLR: \"Espresso: Efficient Forward Propagation for Binary Deep Neural Networks\" (https://openreview.net/forum?id=Sk6fD5yCb) that reports a 5x speed increase for optimized GPU code for binary networks applied to CIFAR. ", "Thankyou for your response and the updated manuscript, especially for detailing your motivation for using 1bit convolutions. Just out of curiosity: Do you happen to have rough numbers how large the speedup on regular GPUs is when implementing 1bit convolutions as you suggested instead of using standard GPU convolutions with 1bit numbers?\n\nRegarding the compression rate of 100 that I cited: I was in fact referring to VGGNet and I primarily tried to make the point that it would be useful to compare your method to completely different approaches. With respect to this, I appreciate the newly added section on SqueezeNet.", "This paper introduces several ideas: scaling, warm-restarting learning rate, cutout augmentation. \n\nI would like to see detailed ablation studies: how the performance is influenced by the warm-restarting learning rates, how the performance is influenced by cutout. Is the scaling scheme helpful for existing single-bit algorithms?\n\nQuestion for Table 3: 1-bit WRN 20-10 (this paper) outperforms WRN 22-10 with the same #parameters on C100. I would like to see more explanations. \n", "The authors propose to train neural networks with 1bit weights by storing and updating full precision weights in training, but using the reduced 1bit version of the network to compute predictions and gradients in training. They add a few tricks to keep the optimization numerically efficient. Since right now more and more neural networks are deployed to end users, the authors make an interesting contribution to a very relevant question.\n\nThe approach is precisely described although the text sometimes could be a bit clearer (for example, the text contains many important references to later sections).\n\nThe authors include a few other methods for comparision, but I think it would be very helpful to include also some methods that use a completely different approach to reduce the memory footprint. For example, weight pruning methods sometimes can give compression rates of around 100 while the 1bit methods by definition are limited to a compression rate of 32. 
Additionally, for practical applications, methods like weight pruning might be more promising since they reduce both the memory load and the computational load.\n\nSide mark: the manuscript has quite a few typos.\n", "The paper trains wide ResNets for 1-bit per weight deployment.\nThe experiments are conducted on CIFAR-10, CIFAR-100, SVHN and ImageNet32.\n\n+the paper reads well\n+the reported performance is compelling \n\nPerhaps the authors should make it clear in the abstract by replacing:\n\"Here, we report methodological innovations that result in large reductions in error rates across multiple datasets for deep convolutional neural networks deployed using a single bit for each weight\"\nwith\n\"Here, we report methodological innovations that result in large reductions in error rates across multiple datasets for wide ResNets deployed using a single bit for each weight\"\n\nI am curious how the proposed approach compares with SqueezeNet (Iandola et al.,2016) in performance and memory savings.\n\n", "1. Note first that changes made in response to specific comments from reviewers are written in our response to each reviewer. In summary, significant changes in this regard include:\n1a. the addition of a figure for an ablation study, as requested by a reviewer\n1b. the addition of a section comparing our work to SqueezeNet in the Discussion, as requested by a reviewer\n1c. more emphasis on our improvements to full-precision training, by not learning batch-norm parameters\n\n2. We also updated all results, finished experiments on ImageNet, updating the abstract accordingly.", "Thankyou for your comments and questions.\n\n***\n\nReviewer Comment: “I would like to see detailed ablation studies: how the performance is influenced by the warm-restarting learning rates, how the performance is influenced by cutout”\n\nAuthor Response: \n\nWe already separated out the influence of cutout in the original submission. We only used cutout for CIFAR 10/100, and Table 1 showed separate columns for results without cutout (indicated by superscript +) and those with cutout (indicated by superscript ++). Figure 5 (right panel) shows how the use of cutout influences convergence.\n\nWe did not originally provide comparisons with and without warm-restart, because the benefits of the warm-restart method (both faster convergence and better accuracy) for CIFAR 10/100 has already been established by Loshchilov and Hutter (2016) who compared the approach with a more typical schedule with a learning rate of 0.1/0.01/0.001 for 80/80/80 epochs. However, the reviewer has a point that the comparison has not previously been done for the case of single-bit-weights and hence we have now conducted some experiments.\n\nAuthor actions:\n\n1.\tWe have added a section in Results called “Ablation Studies” and included a new figure for CIFAR-100. The figure highlights that the warm-restart method does not provide a significant accuracy benefit for the full-precision case but does in the single-bit-weights case. The figure also shows a comparison of learning and not learning the batch-norm offsets and gains, in response to another question by this Reviewer, responded to below.\n\n***\n\nReviewer question: “Is the scaling scheme helpful for existing single-bit algorithms? \n\nAuthor Response: Our Section 2.1 describes how our approach builds on and enables the improvement of existing single-bit algorithms. 
Our new Section 4.1 shows how our use of warm-restart accelerates convergence, and provides best accuracy, especially for CIFAR-10. Our Section 5.2 discusses the specific case of how our method compares with Rastegari et al (2016).\n\nAuthor Action: we have added Section 4.1 and revised Section 5.2.\n\n***\n\nReviewer question: “Question for Table 3: 1-bit WRN 20-10 (this paper) outperforms WRN 22-10 with the same #parameters on C100. I would like to see more explanations.”\n\nAuthor response: In this initial submission, we only explained this in general terms in Section 5.3 as being a result of our approach described in 3.2.1. So we agree that the reviewer has a point. We did not highlight this aspect very much in the original submission, nor explain it in the specific cases tabulated, as we wanted the central emphasis to be on our single-bit-weight results. However, on reflection, improvements to the baseline approach are surely of interest to the community and worth emphasis.\n\nFor the specific case mentioned by the Reviewer, we remark that our 20-10 network is essentially the same as the 22-10 comparison network, where the extra 2 conv layers appear due to the use of learnt 1x1 convolutional projections in downsampling residual paths, whereas we use average pooling instead.\n\nTo directly answer the question, there is one single factor that enabled us to significantly lower the error rate for the width-10 wide ResNet architecture for CIFAR, which is that we turn off the learning of the batch norm parameters, as we found this reduces overfitting. \n\nAuthor actions:\n\n1.\tWe have now highlighted this contribution in the abstract.\n2.\tWe have now highlighted the specific case mentioned in the Discussion in Section 5.3.\n3.\tWe have added results in a new Section (“Ablation studies”) that show how the test error rate changes through training with and without learning of the batch-norm scale and offset.\n", "Thankyou for your comments.\n\n***\n\nReviewer Comment: “could be a bit clearer… the text contains many important references to later sections”\n\nAuthor Action: We have edited the text to improve this aspect.\n\n***\n\nReviewer Comment: “I think it would be very helpful to include also some methods that use a completely different approach to reduce the memory footprint…. for practical applications, methods like weight pruning might be more promising since they reduce both the memory load and the computational load”\n\nAuthor Response: \n\nAs well as reducing model size, our approach is strongly motivated by significantly reducing computational load by a different approach to reducing parameter number. The key point is that performing convolutions using 1-bit weights can be implemented using adders rather than multipliers. Removing the need for multipliers offers enormous benefits in terms of chip size, speed and power consumption in custom digital hardware implementations of trained networks, and also offers substantial speedups even if implemented on GPUs. This has been demonstrated by Rastegari et al in “ XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks” (arxiv: 1603.05279, 2016).\n\nExisting pruning methods do not automatically offer the opportunity to avoid use of multiplications, since the un-pruned parameters are learned using full precision. 
It remains an open question beyond the scope of the current submission to determine whether pruning can be successfully applied to 1-bit models like ours to in turn reduce the number of parameters.\n\nQuestion to Reviewer: we have been unable to find methods that reduce the size of all-convolutional networks by a factor of 100. This magnitude of reduction is, to our knowledge, only available in networks with very large fully-connected layers, such as AlexNet and VGGnet. For example, one submission to ICLR 2018 “To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression” uses pruning applied to an Inception network that reduces the number of non-zero parameters from 27M to 3M, which is a factor of 9x. Can you please clarify if you know of papers that achieve this scale of pruning in all-convolutional networks such as ResNets?\n\nAuthor Action: we have added comments strengthening our motivation of reducing the use of multiplications, and added to our discussion of pruning in the prior work and discussion.\n\n***\n\nReviewer Comment: “the manuscript has quite a few typos..”\n\nAuthor Action: we have carefully reviewed the entire manuscript and corrected typos.\n", "Thankyou for your comments. \n\n***\n\nReviewer comment: \"+the reported performance is compelling\": \n\nAuthor Response: To reinforce this aspect, since initial submission we have found the following ways to surpass the performance we initially reported: \n\n1. We now have conducted experiments on the full Imagenet dataset and have surpassed all previously published results for a single-bit per weight. Indeed, we provide the first report, to our knowledge, of a top-5 error rate under 10% for this case.\n2. For Imagenet32, we realised that the weight decay we used was set to the larger CIFAR value of 0.0005. We repeated our experiments with the usual Imagenet value of 0.0001 and achieved improved results.\n3. For experiments on CIFAR with cutout, we realised our previous experiments did not uniformly sample all pixels for cutout; after fixing we achieved further reduced error rates. \n4. We have also completed experiments with CIFAR 10/100 for ResNets with depth 26. We found the extra layers provided no benefit for the full-precision case, but a small advantage in the single-bit case.\n\nAuthor Actions: We have updated the results tables in the revised manuscript, modified our descriptions of the use of CutOut, clarified our weight-decay values, and added comments in the Discussion section comparing aspects of the enhanced results.\n\n***\n\nReviewer Comment: “Perhaps the authors should make it clear in the abstract…”\n\nAuthor Response: You have a point that our experiments in the main text were all on wide ResNets. This followed from our strategy to commence with a near state-of-the-art baseline. However, our training approach is general and not specific to ResNets. For example, we provided some results for all-conv-nets in the Appendix B on the final page.\n\nAuthor Actions: To improve clarity as suggested, we have added the phrase \"Using depth-20 wide residual networks as our main baseline\" to our revised manuscript, but have retained the term \"deep convolutional neural networks.\"\n\n***\n\nReviewer Comment: “I am curious how the proposed approach compares with SqueezeNet (Iandola et al.,2016) in performance and memory savings. “\n\nAuthor Response: The Squeezenet paper focuses on memory savings relative to AlexNet. 
It uses two strategies to produce a memory-saving smaller model than an AlexNet: (1) replacing many 3x3 kernels with 1x1 kernels; (2) deep compression.\n\nRegarding SqueezeNet memory-saving strategy (1), we note that SqueezeNet is an all-convolutional network. We tried our single-bit-weights approach in many all-convolutional variants (e.g. plain all-conv, SqueezeNet, MobileNet, ResNeXt) and found its effectiveness relative to full-precision baselines to be comparable for all variants. We also observed in many experiments that the total number of learnt parameters correlates very well with classification accuracy. When we applied a SqueezeNet variant to CIFAR-100, we found that to obtain the same accuracy as our ResNets, we had to increase the \"width\" until the SqueezeNet had approximately the same number of learnt parameters as the ResNet. We conclude that our method therefore reduces the model size of the baseline SqueezeNet architecture (i.e. when no deep compression is used) by a factor of 32, albeit with an accuracy gap.\n\nRegarding SqueezeNet memory-saving strategy (2), the SqueezeNet paper reports that Deep Compression reduces the model size by approximately a factor of 10 with no accuracy loss. Our method reduces the same model size by a factor of 32, but with a small accuracy loss that typically becomes larger as the full-precision accuracy gets smaller. It would certainly be interesting to explore whether Deep Compression might be applied to our 1-bit models, but our own focus is on methods that minimally alter training, and we leave investigation of more complex methods for future work.\n\nRegarding SqueezeNet performance, the best accuracy reported in the SqueezeNet paper is 39.6% top-1 error, requiring 4.8MB for the model’s weights. Our single-bit-weight models achieve better than 33% top-1 error, and require 8.3 MB for the model’s weights. \n\nAuthor Actions: We added these comments to a new subsection in the Discussion section of our paper." ]
[ -1, -1, -1, -1, 6, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "SJ2dCsEHz", "SkaoIrVHz", "HkvE8wI4G", "rkzl-zvXG", "iclr_2018_rytNfI1AZ", "iclr_2018_rytNfI1AZ", "iclr_2018_rytNfI1AZ", "iclr_2018_rytNfI1AZ", "BJyxkbFxz", "SkGtH2Kxf", "HJ0pVRqxM" ]
iclr_2018_HyzbhfWRW
Learn to Pay Attention
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.
accepted-poster-papers
quality: interesting idea to train an end-to-end attention together with CNNs and solid experiments to justify the benefits of using such attentions. clarity: the presentation has been updated according to review comments and improved a lot. significance: highly relevant topic, good improvements over other methods
train
[ "ryqX5FFlG", "rJ7NDl9xM", "ry-5adjxG", "rkGzeIaXz", "Sk9_cra7z", "Bkdd8avXM", "BJv3NaIzz", "BkG4XT8GM", "BJ0XfTUGG", "Skdm7jJ-G", "SkJcqL5lz", "r1-vztUlz", "SJD74cJxM", "Bk6Jkhnyf", "HyKA1qdkf", "rJN1a7CCZ", "B1eI20uCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public", "author", "public" ]
[ "This paper proposes a network with the standard soft-attention mechanism for classification tasks, where the global feature is used to attend on multiple feature maps of local features at different intermediate layers of CNN. The attended features at different feature maps are then used to predict the final classes by either concatenating features or ensembling results from individual attended features. The paper shows that the proposed model outperforms the baseline models in classification and weakly supervised segmentation.\n\nStrength:\n- It is interesting idea to use the global feature as a query in the attention mechanism while classification tasks do not naturally involve a query unlike other tasks such as visual question answering and image captioning.\n\n- The proposed model shows superior performances over GAP in multiple tasks.\n\nWeakness:\n- There are a lot of missing references. There have been a bunch of works using the soft-attention mechanism in many different applications including visual question answering [A-C], attribute prediction [D], image captioning [E,F] and image segmentation [G]. Only two previous works using the soft-attention (Bahdanau et al., 2014; Xu et al., 2015) are mentioned in Introduction but they are not discussed while other types of attention models (Mnih et al., 2014; Jaderberg et al., 2015) are discussed more.\n\n- Section 2 lacks discussions about related work but is more dedicated to emphasizing the contribution of the paper.\n\n- The global feature is used as the query vector for the attention calculation. Thus, if the global feature contains information for a wrong class, the attention quality should be poor too. Justification on this issue can improve the paper.\n\n- [H] reports the performance on the fine-grained bird classification using different type of attention mechanism. Comparison and justification with this method can improve the paper. The performance in [H] is almost 10 % point higher accuracy than the proposed model.\n\n- In the segmentation experiments, the models are trained on extremely small images, which is unnatural in segmentation scenarios. Experiments on realistic settings should be included. Moreover, [G] introduces a method of using an attention model for segmentation, while the paper does not contain any discussion about it.\n\n\nOverall, I am concerned that the proposed model is not well discussed with important previous works. I believe that the comparisons and discussions with these works can greatly improve the paper.\n\nI also have some questions about the experiments:\n- Is there any reasoning why we have to simplify the concatenation into an addition in Section 3.2? They are not equivalent.\n\n- When generating the fooling images of VGG-att, is the attention module involved, or do you use the same fooling images for both VGG and VGG-att?\n\nMinor comments:\n- Fig. 1 -> Fig. 2 in Section 3.1. If not, Fig. 2 is never referred.\n\nReferences\n[A] Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.\n[B] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. In CVPR, 2016.\n[C] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Deep compositional question answering with neural module networks. In CVPR, 2016.\n[D] Paul Hongsuck Seo, Zhe Lin, Scott Cohen, Xiaohui Shen, and Bohyung Han. Hierarchical attention networks. 
arXiv preprint arXiv:1606.02393, 2016.\n[E] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In CVPR, 2016.\n[F] Jonghwan Mun, Minsu Cho, and Bohyung Han. Text-Guided Attention Model for Image Captioning. AAAI, 2017.\n[G] Seunghoon Hong, Junhyuk Oh, Honglak Lee and Bohyung Han, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, In CVPR, 2016.\n[H] Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu, Spatial Transformer Networks, NIPS, 2015\n\n\n", "This paper proposed an end-to-end trainable hierarchical attention mechanism for CNN. The proposed method computes 2D spatial attention maps at multiple layers in CNN, where each attention map is obtained by computing compatibility scores between the intermediate features and the global feature. The proposed method demonstrated noticeable performance improvement on various discriminative tasks over existing approaches. \n\nOverall, the idea presented in the paper is simple yet solid, and showed good empirical performance. The following are several concerns and suggestions. \n\n1. The authors claimed that this is the first end-to-end trainable hierarchical attention model, but there is a previous work that also addressed a similar task:\nSeo et al, Progressive Attention Networks for Visual Attribute Prediction, in Arxiv preprint:1606.02393, 2016 \n\n2. The proposed attention mechanism seems to be fairly domain (or task) specific, and may not be beneficial for strong generalization (generalization over unseen categories). Since this could be a potential disadvantage, some discussion or an empirical study on cross-category generalization would be interesting.\n\n3. The proposed attention mechanism is mainly demonstrated for single-class classification tasks, but it would be interesting to see if it can also help multi-class classification (e.g. image classification on the MS-COCO or PASCAL VOC datasets)\n\n4. The localization performance of the proposed attention mechanism is evaluated by weakly-supervised semantic segmentation tasks. From that perspective, it would be interesting to see comparisons against other attention mechanisms (e.g. Zhou et al 2016) in terms of localization performance.\n", "This paper proposes an end-to-end trainable attention module, which takes as input the 2D feature vector map and outputs a 2D matrix of scores for each map. The goal is to make the learned attention maps highlight the regions of interest while suppressing background clutter. Experiments conducted on image classification and weakly supervised segmentation show the effectiveness of the proposed method.\n\nStrength of this paper:\n1) Most previous works are implemented as post-hoc additions to fully trained networks, while this work is end-to-end trainable. Not only are the newly added weights for attention learned, so are the original weights in the network.\n2) The generalization ability shown in Table 3 is very good, outperforming other existing networks by a large margin.\n3) Visualizations shown in the paper are convincing. \n\nSome weakness:\n1) Some of the notation is unclear in this paper: vectors should be bold, and it is hard to differentiate vectors and scalars.\n2) In equation (2), l_i and g should have different dimensionality, so how does the addition work? Same for equation (3)\n3) The choice of layers to add attention modules to is unclear to me. The authors just pick three layers from VGG to add attention, so why pick those 3 layers? 
Is it better to add attention to lower layers or higher layers? Why is it the case that having more layers with attention achieves worse performance?\n", "In the proposed framework, the global feature is indeed used as the query vector for the attention calculations. Thus, by changing the global feature vector, one could expect to affect the estimated attention patterns in a predictable manner. We have now included in the appendix section A.5., a brief discussion and a qualitative comparison of the extent to which the two different compatibility functions allow for a post-hoc control of the estimated attention scores by influencing the global image vector.", "Thank you for your question.\n\nWe use the ResNet implementation provided at the following link as our baseline ResNet model - https://github.com/szagoruyko/wide-residual-networks/tree/fp16/models. As specified earlier, we work with a 164-layered network. This should help to clarify the details regarding the location and specifications of batch normalisation, max-pooling, non-linearity and convolutional operations. This reference has now also been added to the appendix section A.2.\n\nAs discussed before, for the ResNet architecture, we incorporate our attention modules at the outputs of the last two levels, i.e. on local feature vectors of dimensionalities 128 and 256, respectively. We remove the spatial averaging step after the final convolutional layer in the original architecture. Instead, we obtain the global feature vector by processing the batch-normalised and ReLU-activated output of the final level using a convolutional layer with a kernel size of 3x3 and 256 output channels, a ReLU non-linearity, and a fully connected 256x256 layer. The convolutional layer is itself sandwiched between two max-pooling layers, that together downsample the input by a factor of 8 in each of the two spatial dimensions, to yield a single 256 dimensional vector. The global feature vector is used directly for estimating attention at the final level where the local features have a dimensionality of 256. For the lower level, with a dimensionality of 128, the global vector is downsized to a matching dimensionality by a single 256x128 fully connected layer. \n\nTo add to the above, we will make our implementation of the proposed method public, post an internal review.", "Once again, we thank the reviewers for their comments.\nWe have now added to the paper a complete experimental comparison against the progressive attention approach proposed by Seo et al. We incorporate the progressive attention mechanism at 2 levels in the baseline VGG architecture and evaluate it on the various tasks considered in the paper. The details of the implementation are provided in the appendix.\nThe results for image classification and fine-grained recognition can be found in Tables 1 and 2 respectively, those for domain shifted classification in Table 3, and those for weakly supervised segmentation in Figure 9. The proposed attention method consistently outperforms the former across the board.\nFurthermore, an indirect comparison against the spatial transformer networks of Jaderberg et al. on the task of fine-grained recognition is now included in the discussion in the results section. As we state there, we are unable to compare with the CUB result of Jaderberg et al. directly, due to a difference in dataset pre-processing. However, we improve over the progressive attention approach of Seo et al. 
by 4.5%, and note that progressive attention has itself been shown to perform better than spatial transformer networks at the similar task of attribute prediction using attention.", "We thank the reviewer for the comments.\n\nPlease find below our point-by-point response to the concerns raised in the weakness section.\n\n1 and 2 - missing references and discussion about related work: We thank the reviewer for pointing us to the most recent relevant literature regarding the proposed attention scheme. We have provided a brief discussion of the suggested works in the third paragraph of Sec. 1. A more thorough treatment is taken up in Sec. 2 (Related Work), which has now been reorganised to more exhaustively capture the variety of existing approaches in the area of attention in deep neural networks.\nWe have also produced an experimental comparison against the progressive attention mechanism of Seo et al., incorporated into the VGG model and trained using the global feature as the query, for the task of classification of CIFAR datasets. The details of the implementation are provided in appendix Sec. A.2. The results are compiled in the updated Table 1. A quantitative evaluation of the above mechanism for the task of fine-grained recognition on the CUB and SVHN datasets is forthcoming and will be made available in the next revision.\n\n3 - the global feature as a driver of attention: The global feature is indeed used as the query vector for our attention calculations. The global and local feature vectors don't always need to be obtained from the same input image. In fact, in our framework, we can extract the local features from a given image A: call that the target image. The global feature vector can be obtained from another image B, which we call the query image. Under the proposed attention scheme, it is expected that the attention maps will highlight objects in the target image that are 'similar' to the query image object. The precise notion of 'similarity' may be different for the two compatibility functions, where parameterised compatibility is likely to capture the concept of objectness while the dot-product compatibility is likely to learn a high-order appearance-based match between the query and target image objects. We are investigating the two different compatibility functions w.r.t. the above hypothesis. The experimental results in the form of visualisations will be made available in the next update.\n\n4 - performance comparison with [H]: We are unable to compare with [H] for the task of fine-grained recognition due to a difference in dataset preprocessing. The CUB dataset in [H] has not been tightly cropped to the birds in the images. Thus, the network has access to the background information which, in the case of birds, can offer useful information about their habitat and climate, something that is key to their classification.\nHowever, note that our experimental comparison with the progressive attention mechanism of Seo et al. is forthcoming. The progressive attention scheme has been shown to outperform [H] for the task of attribute prediction, see Table 1 in [D]. Hence, by presenting a comparison with the former, we would be able to indirectly compare the proposed approach against the spatial transformer network architecture of [H].\n\n5 - segmentation experiments and comparison: Our weakly supervised segmentation experiments make use of the Object Discovery dataset, a known benchmark in the community widely used for evaluating approaches developed for the said task [I, J]. 
We note that [G] presents an attention-based model for segmentation. Our work uses category labels for weak segmentation and is related to the soft-attention approach of [G]. However, unlike the aformentioned, we do not explicitly train our model for the task of segmentation using any kind of pixel-level annotations. We evaluate the binarised spatial attention maps, learned as a by-product of training for image classification, for their ability to segment objects. We have added this discussion to the paragraph on weakly supervised segmentation in Sec. 2 (Related Work).\n\n\nOur responses to the questions asked are below:\n\n1. concatenation vs addition: Given the existing free parameters between the local and the global image descriptors in a CNN pipeline, we can simplify the concatenation of the two descriptors to an addition operation, without loss of generality. This allows us to limit the parameters of the attention unit.\n\n2. generation of fooling images: When generating the fooling images of VGG-att, we do use the attention module. Thus, the fooling images for both VGG and VGG-att are conditioned on their respective architectures, and hence different.\n\n\nWe have incorporated the minor comment in the updated version.\n\n\nReferences\n[I] Dutt Jain, S., & Grauman, K. (2016). Active image segmentation propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2864-2873).\n[J] Rubinstein, M., Liu, C., & Freeman, W. T. (2016). Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence. International Journal of Computer Vision, 119(1), 23-45.\n", "We thank the reviewer for the comments.\nOur responses are provided below, numbered correspondingly:\n\n1. The missing comparison with the progressive attention mechanism of Seo et al. was unintentional, and the authors agree that their work is indeed closely related to the proposed work. We have now included a discussion on it in Sec. 2 (Related Work). We have also produced an experimental comparison against the progressive attention mechanism, incorporated into the VGG architecture and trained using the global feature as the query, for the task of classification of CIFAR datasets. The details of the implementation are provided in appendix Sec. A.2. The results are compiled in the updated Table 1. A quantitative evaluation of the above mechanism for the task of fine-grained recognition on the CUB and SVHN datasets is forthcoming and will be made available in the next revision.\n\n2. For our detailed investigation of cross-category generalisation of the image features learned using the proposed attention scheme, we would like to point the reviewer to Sec. 5.3. Here, we use the baseline and attention-enhanced models as off-the-shelf feature extractors. The models are trained for the tasks of classification on CIFAR datasets and are queried to obtain high-order representations of images from unseen datasets, such as the Action-40 and Scene-67 datasets. At train time, the training-set features are used to optimise a linear SVM as a classifier. At test time, we evaluate the quality of generalisation via the quality of classification of the test set based on the extracted features.\n\n3. The multi-class classification task can be posed as a set of single-class or one-hot-encoded classification tasks. In this regard, as confirmed experimentally, our proposed attention scheme would be able to offer performance benefits. 
However, a more direct multi-class classification, on datasets such as MS-COCO or PASCAL, can be covered in future versions given that the standard protocol to train and test such networks is typically lengthy. The models are usually pre-trained on ImageNet and fine-tuned on the above datasets, putting this a bit outside our scope for revision/addition at present.\n\n4. The comparison with the attention mechanism proposed by Zhou et al. was actually included in Fig. 9 (\"VGG-GAP\"): the reference was missing, which we have now added. The model has been trained for the CIFAR-10 dataset, as the classes of this dataset overlap with the Object Discovery dataset that includes car, horse and airplane as categories.\n", "We thank the reviewer for the comments.\n\n1) notation: We have updated the paper, in particular Section 3, to represent the vectors in bold to differentiate them from scalars.\n\n2) potential for differing dimensionalities of l_i and g: There is some discussion of this in the second paragraph of Sec. 3.1. We propose the use of one fully connected layer for each CNN layer s, which projects the local feature vectors of s to the dimensionality of g. These linear parameters are learned along with all other network parameters during end-to-end training. There is an implementation detail, though, which we had neglected to mention: in order to limit the network parameters at the classification stage, we actually project g to the lower-dimensional space of the local features l_i. A note of clarification on this has been added to the first paragraph of Sec. 4.\n\n3) selection of layers for attention: A brief discussion on the choice of adding attention to higher layers as opposed to the lower ones was included in Sec. 3.3. We have now augmented this discussion, in place, with further clarification on the specific layers that we choose for estimating the attention. \nFor l_i and g to be comparable using the proposed compatibility functions, they should be mapped to a common high-dimensional space. In other words, the effective filters operating over image patches in the layers s must represent relatively ‘mature’ features that are captured in g for the classification goal. We thus expect to see the greatest benefit in deploying attention relatively late in the pipeline to provide for the learning of these features in l_i. In fact, att2 architectures often outperform their att3 counterparts, as can be seen in Tables 1 and 2.\nFurther, different kinds of class details are more easily accessible at different scales. Thus, in order to facilitate the learning of diverse and complementary attention-weighted features, we propose the use of attention over different spatial resolutions. The combination of the two factors stated above results in our deploying the attention units after the convolutional blocks that are late in the pipeline, but before their corresponding max-pooling operations, i.e. before a reduction in the spatial resolution.", "(Not an author, but) \n\nFor equation (2), the authors have stated in the comments that the actual choice for which mapping method to use to have the dimensionalities of local features and the global descriptor align is an implementation detail. They've proposed two methods: \n\n1. Map the local features whose dimensionalities don't align with that of g to the correct dimensionality through densely connected layers.\n\n2. 
Map g to the dimensionalities of local features through densely connected layers.", "Does the input go through batch normalization before it is passed to the first convolutional layer?\nWhat number of filters, kernel size and strides are used in the two convolutional layers in level 1? \nIs the second convolutional layer in level 1 followed by ReLU too, or is the only activation used in level 1 the ReLU between the two convolutional layers? \nIs there a max-pooling layer at the end of each of the four levels in the implementation, including level 1 and level 4? If so, what pool size and strides are used in each? \nDoes the implementation use the bottleneck design for residual blocks from the original paper? If not, what design is used for the residual blocks? (So far, we've assumed the bottleneck design, with Conv(16,1)->BN->ReLu->Conv(16,3)->BN->ReLU->Conv(64,1)->BN->identity addition->ReLU in level 2, same in level 3 but using convolutions with parameters (32,1), (32,3), and (128,1), and same in level 4 but using convolution with parameters (64,1), (64,3), and (256,1))\n\nIs every convolutional layer in the model (except the ones used for dimensionality increases) followed by batch normalization, including convolutional layers in level 1?\nWhat number of filters, kernel size, and strides are used for the final convolutional layer? Is this layer followed by batch normalization and ReLU as well? \nWhat pool size and strides are used for the final pooling layer?\nIs the flattened output of the final pooling layer directly mapped to the dimensionalities of the local features through three separate linear fully-conected layers, or is it first passed through a ReLU-activated fully connected layer?\n\nThank you for the answers.", "Thank you for your questions.\nWhen training the models on CUBS-200-2011 with weights initialised from those learned on CIFAR-100, we continue to use 10^-7 as the learning rate decay. Only the schedule for the learning rates is modified as explained.\n\nWe use the ZCA whitening method[1,2] for mean, standard deviation and color normalisation widely adopted for the pre-processing of CIFAR datasets.\n\n1. Goodfellow, Ian J., et al. \"Maxout networks.\" arXiv preprint arXiv:1302.4389 (2013).\n2. Zagoruyko, Sergey, and Nikos Komodakis. \"Wide residual networks.\" arXiv preprint arXiv:1605.07146 (2016).", "When training the models on CUB-200-2011 with weights initialized from those learned on CIFAR-100, did you still use 10^-7 as learning rate decay, or did you set learning rate decay to 0 and strictly stick to the exact learning rates from the transfer learning schedule?\n\nExactly which color normalization algorithm did you use? Also, in what order were dataset-wide mean and standard deviation normalization and color normalization done?", "Thank you for your questions, answered sequentially below:\n\nThe weight vector u used in the parameterised compatibility score calculation corresponds to a given layer s; a different weight vector u_s is learned for each layer. As written, the expression in (2) assumes the context of a layer, and so the s index is implicit. We can make it explicit for better clarity.\n\nThe input image resolution of CIFAR-trained models is 32x32x3. This is the size to which the images of the test datasets are downsampled in the cross-domain classification experiments.\n\nThe local features are the ReLU-activated outputs of the convolutional layers before the corresponding max-pooling operation. 
This is done to keep the resolution of the attention maps as high as possible.\n\nThe global vector g is mapped to the dimensionality of the local features l (at a given layer s) by a linear layer if and only if their dimensionalities differ. Following from this, yes, the global vector g, once mapped to a given dimensionality, is then shared by the local features from different layers s as long as they are of that dimensionality.", "Is the weight vector used in parametrised compatibility score calculation ('u') the same for every c calculation, or should a separate u vector be trained for each of the three attention submodules?\nWhat target resolution was used for downsampling images in the cross-domain classification datasets?\nAre the local features fed to the attention submodules extracted from the outputs of the pooling layers or from the outputs of the convolutional layer preceding them? (For example, in VGG-att3, is L_1 extracted directly from the output of the third 256-filter convolutional layer or from the 2x2 max pooling layer right after it?)\nIn the comment below, you proposed that in order to align the dimensionality of global g and the local feature vectors for the compatibility calculation step, local features should first be passed through an additional fully connected layer; should this step be done even if the dimensionality of g and local feature vectors already line up? (For example, as in the vectors from L_3 in VGG-att3)\nAdditionally, you proposed that instead of the local features being mapped to the dimensionality of g, g can be mapped to the dimensionality of the local features; in this case, if there are two or more submodules with local feature vectors of equal dimensionality, should g be mapped to each one separately, or should g only be projected to any given dimensionality once for shared use in the submodules with local features of that dimensionality? ", "Thank you for your question. There is some discussion of this in paragraph 2, Sec. 3.1 though we should perhaps provide more details. We propose the use of one fully connected layer for each CNN layer s, that projects the local feature vectors of s to the dimensionality of g. These linear parameters are learned along with all other network parameters over the end-to-end training. There is an implementation detail, though, which we've neglected to mention: in order to limit the network parameters at the classification stage, we actually project g to the lower-dimensional space of the local features. We can update the section to reflect this in more detail.", "It's not totally clear what the ResNet model architecture is. Section A.2 provides some details, but specifically I don't see how you're getting the dimensions of g and the intermediate layers to match up. Are you able to provide some more detailed information about the architectures?" ]
[ 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyzbhfWRW", "iclr_2018_HyzbhfWRW", "iclr_2018_HyzbhfWRW", "iclr_2018_HyzbhfWRW", "SkJcqL5lz", "iclr_2018_HyzbhfWRW", "ryqX5FFlG", "rJ7NDl9xM", "ry-5adjxG", "ry-5adjxG", "iclr_2018_HyzbhfWRW", "SJD74cJxM", "iclr_2018_HyzbhfWRW", "HyKA1qdkf", "iclr_2018_HyzbhfWRW", "B1eI20uCb", "iclr_2018_HyzbhfWRW" ]
iclr_2018_Hko85plCW
Monotonic Chunkwise Attention
Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction. To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed. We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time. When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism. In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.
accepted-poster-papers
This clearly written paper describes a simple extension to hard monotonic attention -- the addition of a soft attention mechanism that operates over a fixed length window of inputs that ends at the point selected by the hard attention mechanism. Experiments on speech recognition (WSJ) and on a document summarization task demonstrate that the new attention mechanism improves significantly over the hard monotonic mechanism. About the only "con" the reviewers noted is that the paper is a minor extension over Raffel et al., 2017, but the authors successfully argue that the strong empirical results render this simplicity a "pro."
train
[ "HJS8P6Vgf", "ryr2L0FeG", "H1J9s-cef", "S1hbW4Pmz", "Byr5xd-mG", "rJ5me_-7G", "Hkx_jvZmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a small modification to the monotonic attention in [1] by adding a soft attention to the segment predicted by the monotonic attention. The paper is very well written and easy to follow. The experiments are also convincing. Here are a few suggestions and questions to make the paper stronger.\n\nThe first set of questions is about the monotonic attention. Training the monotonic attention with expected context vectors is intuitive, but can this be justified further? For example, how far does using the expected context vector deviate from marginalizing the monotonic attention? The greedy step, described in the first paragraph of page 4, also has an effect on the produced attention. How does the greedy step affect training and decoding? It is also unclear how tricks in the paragraph above section 2.4 affect training and decoding. These questions should really be answered in [1]. Since the authors are extending their work and since these issues might cause training difficulties, it might be useful to look into these design choices.\n\nThe second question is about the window size $w$. Instead of imposing a fixed window size, which might not make sense for tasks with varying length segments such as the two in the paper, why not attend to the entire segment, i.e., from the current boundary to the previous boundary?\n\nIt is pretty clear that the model is discovering the boundaries in the utterance shown in Figure 2. (The spectrogram can be made more visible by removing the delta and delta-delta in the last subplot.) How does the MoCha attention look like for words whose orthography is very nonphonemic, for example, AAA and WWW?\n\nFor the experiments, it is intriguing to see that $w=2$ works best for speech recognition. If that's the case, would it be easier to double the hidden layer size and use the vanilla monotonic attention? The latter should be a special case of the former, and in general you can always increase the size of the hidden layer to incorporate the windowed information. Would the special cases lead to worse performance and if so why is there a difference?\n\n[1] C Raffel, M Luong, P Liu, R Weiss, D Eck, Online and linear-time attention by enforcing monotonic alignments, 2017", "The paper proposes an extension to a previous monotonic attention model (Raffel et al 2017) to attend to a fixed-sized window up to the alignment position. Both the soft attention approximation used for training the monotonic attention model, and the online decoding algorithm is extended to the chunkwise model. In terms of the model this is a relatively small extention of Raffel et al 2017.\n\nResults show that for online speech recognition the model matches the performance of an offline soft attention baseline, doing significantly better than the monotonic attention model. Is the offline attention baseline unidirectional or bidirectional? In case it is unidirectional it cannot really be claimed that the proposed model's performance is competitive with an offline model.\n\nMy concern with the statement that all hyper-parameters are kept the same as the monotonic model is that the improvement might partly be due to the increase in total number of parameters in the model. Especially given that w=2 works best for speech recognition, it not clear that the model extension is actually helping. 
My other concern is that in speech recognition the time-scale of the encoding is somewhat arbitrary, so possibly a similar effect could be obtained by doubling the time frame through the convolutional layer. While the empirical result is strong it is not clear that the proposed model is the best way to obtain the improvement.\n\nFor document summarization the paper presents a strong result for an online model, but the fact that it is still less accurate than the soft attention baseline makes it hard to see the real significance of this. If the contribution is in terms of speed (as shown with the synthetic benchmark in appendix B), more emphasis should be placed on this in the paper. \nSentence summarization tasks do exhibit mostly monotonic alignment, and most previous models with monotonic structure were evaluated on that, so why not test that here?\n\nI like the fact that the model is truly online, but that contribution was made by Raffel et al 2017, and this paper at best proposes a slightly better way to train and apply that model.\n\n---\n The additional experiments in the new version give stronger support in favour of the proposed model architecture (vs the effect of hyperparameter choices). While I'm still on the fence about whether this paper is strong enough to be accepted for ICLR, this version certainly improves the quality of the paper. \n", "This paper extends a previously proposed monotonic alignment based attention mechanism by considering local soft alignment across features in a chunk (certain window). \n\nPros.\n- the paper is clearly written.\n- the proposed method is applied to several sequence-to-sequence benchmarks, and the paper shows the effectiveness of the proposed method (comparable to full attention and better than previous hard monotonic assignments).\nCons.\n- in terms of the originality, the methodology of this method is rather incremental from the prior study (Raffel et al), but it shows significant gains from it.\n- in terms of considering a monotonic alignment, Hori et al, \"Advances in Joint CTC-Attention based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM,\" in Interspeech'17, also tries to solve this issue by combining CTC and attention-based methods. The paper should also discuss this method in Section 4.\n\nComments:\n- Eq. (16): $j$ in the denominator should be $t_j$.\n", "Thanks for your detailed response and additional experiments and clarifications.", "Thank you for your thorough review and thoughtful questions! We're glad you found the paper easy to follow and the experiments convincing. We've updated the paper to address your questions with additional experiments, and we provide some additional context below.\n\n> how far does using the expected context vector deviate from marginalizing the monotonic attention? The greedy step, described in the first paragraph of page 4, also has an effect on the produced attention. How does the greedy step affect training and decoding?\nBecause we encourage the monotonic selection probabilities to be binary over the course of training by adding pre-sigmoid noise, these probabilities indeed tend to be 0 or 1 at convergence. As a result, the greedy process is effectively equivalent to completely marginalizing out the alignment. Note that we don't use the greedy step during training because we explicitly compute the probability distribution induced by the possible alignment paths. 
We have added some wording to the paper to clarify these points.\n\n> It is also unclear how tricks in the paragraph above section 2.4 affect training and decoding. These questions should really be answered in [1]. Since the authors are extending their work and since these issues might cause training difficulties, it might be useful to look into these design choices.\nIndeed, [1] includes a \"Practitioner's Guide\" in Appendix G, which has discussion of how the sigmoid noise, weight norm, etc. can affect results. We will add a reference to this practitioner's guide in the main text. If you think it would be helpful, we can provide similar recommendations based on our own experiences in an appendix.\n\n> The second question is about the window size $w$. Instead of imposing a fixed window size, which might not make sense for tasks with varying length segments such as the two in the paper, why not attend to the entire segment, i.e., from the current boundary to the previous boundary?\nIn fact, we also tried exactly what you suggested in early experiments! We had planned to call this approach \"MAtChA\" for Monotonic Adaptive Chunkwise Attention. As it turns out, it is possible to train this type of attention mechanism with an efficient dynamic program (analogous to the one used for monotonic attention and MoChA). However, ultimately MAtChA did not outperform MoChA in any of our experiments. In addition, the dynamic program used for training takes O(T^2U) memory instead of O(TU) memory (because you must marginalize out both the chunk start and end point, instead of just the end point), so we decided not to include it in the paper. Prompted by your question, we've decided to put a discussion of MAtChA with a derivation of the dynamic program into the appendix.\n\n> The spectrogram can be made more visible by removing the delta and delta-delta in the last subplot.\nGreat idea, we changed the figure to remove the delta and delta-delta features.\n\n> How does the MoCha attention look like for words whose orthography is very nonphonemic, for example, AAA and WWW?\nThat's a very interesting point to discuss, so we added a note about this in the paper. However, we were unable to find any such examples in the development set of the Wall Street Journal corpus, so we weren't able to study this directly. Note that even for nonphonemic utterances, the attention alignment still tends to be monotonic - see for example Appendix A of \"Listen, Attend and Spell\" where a softmax attention model gives a monotonic alignment for your \"AAA\" example.\n\n> If that's the case, would it be easier to double the hidden layer size and use the vanilla monotonic attention?\nThanks for this suggestion - indeed, using MoChA incurs a modest parameter increase (about 1% in our speech recognition experiments) because of the second independent energy function. To address this difference, we ran an experiment where we doubled the attention energy function's hidden dimension in a monotonic attention model (similar in terms of parameter count to adding a second attention energy function) and halved this hidden dimension in a MoChA model. In both cases, the change in performance was not significant over eight trials, implying that large gains achieved by MoChA were not caused by this change. We added information about this experiment to the main text.", "Thanks for your thorough review! 
We updated the paper to address your comments, and provide some additional discussion below.\n\n> Is the offline attention baseline unidirectional or bidirectional? In case it is unidirectional it cannot really be claimed that the proposed model's performance is competitive with an offline model.\nThank you for pointing out this important distinction. The encoder in the softmax attention baseline is indeed unidirectional. We made this choice because using a unidirectional encoder is a prerequisite for an online model. We are interested in answering the question \"how much performance is lost when using MoChA compared to using an offline attention mechanism?\" so changing the encoder could conflate the difference in performance between the two models. The question \"how much performance is lost when switching from a bidirectional encoder to a unidirectional encoder?\" is interesting and important, but is orthogonal to what we are studying and has also been thoroughly considered in the past (e.g. in Graves et al. 2013). We have updated our wording to reflect exactly what we are studying and claiming.\n\n> My concern with the statement that all hyper-parameters are kept the same as the monotonic model is that the improvement might partly be due to the increase in total number of parameters in the model.\nThis is also an important concern; however, the additional parameters required by MoChA compared to monotonic attention is tiny compared to the total number of parameters in the model because switching to MoChA amounts solely to adding a second attention energy function. For example, in our speech experiments, using MoChA increases the number of parameters by only 1.1%. To fully address this question, we ran experiments where we doubled the attention energy function's hidden dimension in a monotonic attention model and halved this hidden dimension in a MoChA model. This reconciles the difference in parameters in a natural way. In both cases, the change in performance was not significant over eight trials, implying that large gains achieved by MoChA were not caused by the change in parameter count. We added this information to the main text so that the comparison is clearer.\n\n> My other concern is that in speech recognition the time-scale of the encoding is somewhat arbitrary, so possibly a similar effect could be obtained by doubling the time frame through the convolutional layer. \nWhile we agree that increasing the receptive field of the convolutional layers could be helpful, we note that the recurrent layers in the encoder can in principle provide an arbitrarily long temporal context on their own. In addition, Bahdanau et al. 2014 implied that attention provides a more efficient way give the decoder greater long-term context. To test this empirically, we ran the suggested experiment where we doubled the convolutional filter size along the time axis in a monotonic attention-based model and found that it did not significantly change performance over eight trials. We added this experiment to the main text.\n\n> For document summarization the paper presents a strong result for an online model, but the fact that it is still less accurate than the soft attention baseline makes it hard to see the real significance of this.\nOur main rationale for including the document summarization experiment was to test MoChA in a setting where the input-output alignment was not monotonic. 
In terms of practicality, using MoChA would result in both a more efficient model (as you suggested) but could also allow for new applications such as online summarization. We added some additional clarification to the text as to our intentions behind this experiment.\n\n> Sentence summarization tasks do exhibit mostly monotonic alignment, and most previous models with monotonic structure were evaluated on that, so why not test that here?\nWe avoided sentence summarization for the simple reason that it is an easy enough task that monotonic attention already matches the performance of softmax attention (see results in Raffel et al. 2017). We expect that MoChA would also match softmax attention's performance. Instead, we chose to try it on the more difficult (and more realistic) setting of CNN/daily mail. We included this discussion in the text of our paper to further motivate our experiment.\n\n> this paper at best proposes a slightly better way to train and apply that model.\nWe consider MoChA to be a conceptually simple but remarkably effective improvement to monotonic attention. This is backed up by our experimental results, showing that we are able to significantly beat monotonic attention in settings where the alignment is monotonic (speech) and nonmonotonic (summarization). We see the simplicity of implementing MoChA on top of monotonic attention as a strength of our approach, in that it allows researchers and practitioners to easily leverage it.", "Thank you for your review! We are glad you found the paper clearly written, and that you were convinced by our experimental evaluation. Addressing your specific comments:\n\n> in terms of the originality, this method is rather incremental from the prior study (Raffel et al)\nWe would argue that the strength of our model demonstrates that this is not an incremental result; specifically, we saw a roughly 20% relative improvement compared to monotonic attention in terms of both the word error rate on speech recognition and ROUGE-2 on document summarization. Further, on speech recognition, we showed for the first time that an online attention mechanism could match the performance of an (offline) softmax attention mechanism, which opens up the possibilities of using this framework in online settings. While MoChA can be seen as a conceptually straightforward extension of Monotonic Attention, we actually see that as a benefit of the approach - it would potentially be less impactful if achieving these benefits required a complicated modification to the seq2seq framework. We have added some language to emphasize this at the end of Section 1.\n\n> - in terms of considering a monotonic alignment, Hori et al, \"Advances in Joint CTC-Attention based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM,\" in Interspeech'17, also tries to solve this issue by combining CTC and attention-based methods. The paper should also discuss this method in Section 4.\nThank you for bringing this paper to our attention. The primary difference between that paper and ours it that it still uses an offline softmax attention mechanism, so could not be used in online settings. However, it provides promising evidence that our approach could be combined with CTC to achieve further gains in online settings. We've added this reference and some discussion of it to our related work section.\n\n> Eq. (16): $j$ in the denominator should be $t_j$.\nExcellent catch, thank you! We have fixed this." ]
[ 7, 6, 8, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Hko85plCW", "iclr_2018_Hko85plCW", "iclr_2018_Hko85plCW", "rJ5me_-7G", "HJS8P6Vgf", "ryr2L0FeG", "H1J9s-cef" ]
iclr_2018_BJ_UL-k0b
Recasting Gradient-Based Meta-Learning as Hierarchical Bayes
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation.
accepted-poster-papers
Pros: + The paper introduces a non-trivial interpretation of MAML as hierarchical Bayesian learning and uses this perspective to develop a new variation of MAML that accounts for curvature information. Cons: - Relatively small gains over MAML on mini-Imagenet. - No direct comparison against the state-of-the-art on mini-Imagenet. The reviewers agree that the interpretation of MAML as a form of hierarchical Bayesian learning is novel, non-trivial, and opens up an interesting direction for future research. The only concerns are that the empirical results on mini-Imagenet do not show a particularly large improvement over MAML, and there is no direct comparison to the state-of-the-art results on the task. However, the value of the new perspective on meta-learning outweighs these concerns.
train
[ "SycLo2FxG", "SJPGTRSEz", "HJiZYRT7z", "Hkv54AYeM", "HJBzB65xf", "rJBEH-fVf", "H1hUKApmG", "rJ1GD0T7f", "SkauLRaXz", "HkYWBCTQz", "r1aR9l5lG", "rJOy7ZYxG" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Summary\nThe paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al). The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors, c) based on these insights, the paper proposes a variant of MAML using a Laplace approximation (with additional approximations for the covariance matrix). The paper finally provides an evaluation on the mini ImageNet problem without significantly improving on the MAML results on the same task.\n\nPro:\n- The topic is timely and of relevance to the ICLR community continuing a current trend in building meta-learning systems for few-shot learning.\n- Provides valuable insight into the MAML objective and its relation to probabilistic models\n\nCon:\n- The paper is generally well-written but I find (as a non-meta-learner expert) that certain fundamental aspects could have been explained better or in more detail (see below for details).\n- The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the convergence of the proposed method (compared to e.g. MAML)\n- I do not think the empirical results provide enough evidence that it is a useful/robust method. Especially it does not provide insight into which types of problems (small/large, linear/ non-linear) the method is applicable to. \n\n\nDetailed comments/questions:\n- The use of Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point-of-view. It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization as the truncation does not result in the approximation being created at a mode. Could the authors perhaps comment on:\na) whether it is even meaningful to talk about the approximations as probabilistic distribution during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence? \nb) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate)\n\n- Sec 4.1, p5: Last equation: Perhaps useful to explain the term $log(\\phi_j^* | \\theta)$ and why it is not in subroutine 4. Should $\\phi^*$ be $\\hat \\phi$?\n- Sec 4.2: “A straightforward…”: I think it would improve readability to refer back to the previous equation (i.e. H) such that it is clear what is meant by “straightforward”.\n- Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably proof) that this leads to consistent and robust estimator of \\theta.\n- It is not clear from the text or experiment how the learning parameters are set.\n- Sec 5.1: It took some effort to understand exactly what was going on in the example and in particular figure 5.1; e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5, the blue points are not defined in the caption, the terminology e.g. “pre-update density” is new at this point. 
I think it would benefit the readability to provide the reader with a bit more guidance.\n- Sec 5.1: While the qualitative example is useful (with a bit more text), I believe it would have been more convincing with a quantitative example to demonstrate e.g. the convergence of the proposal compared to std MAML and possibly compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case)\n- Sec 5.2: The abstract claims increased performance over MAML but the empirical results do not seem to be significantly better than MAML? I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance.\n- Sec 5.2: The authors have left out “Mishra et al” from the comparison due to the model being significantly larger than others. Could the authors provide insight into why they did not use the ResNet structure from the TCML paper in their L-MAML scheme?\n- Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate the MAML as a HB. Given the advancement in gradient based inference for HB the last couple of years (e.g. variational, nested Laplace, expectation propagation etc) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly to the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area?\n\nMinor:\n- Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product\n", "Thank you for clarifying your request regarding the footnote on deeper comparison methods; we have modified the draft to include this footnote and relocated the Munkhdalai (2017) reference there.", "We thank R1 for thorough and constructive comments! We have attempted to address all concerns to the best of our ability.\n\n> “The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the convergence of the proposed method (compared to e.g. MAML)…I do not think the empirical results provide enough evidence that it is a useful/robust method. Especially it does not provide insight into which types of problems (small/large, linear/ non-linear) the method is applicable to. ”\n\nWe have substantially revised the toy example in Figure 5 and its explanation in the text in Section 5.1 to better demonstrate the proposed novel algorithm. In Figure 5, we show various samples from the posterior of a model that is meta-trained on different sinusoids, when presented with a few datapoints (in red) from a new, previously unseen sinusoid. This sampling procedure is motivated by the connection we have made between MAML and HB inference. We emphasize that the quantified uncertainty evident in Figure 5 is indeed a desirable quality in a model that learns from a small amount of data.\n\n> “The use of Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point-of-view. 
It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization as the truncation does not result in the approximation being created at a mode.\n> Could the authors perhaps comment on:\n> a) whether it is even meaningful to talk about the approximations as probabilistic distribution during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence?\n> b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate)”\n\nWe have revised the paper in Section 3.1 to better convey the following: The exact equivalence between early stopping and a Gaussian prior on the weights in the linear case, as well as the implicit regularization to the parameter initialization in the nonlinear case, tells us that *every iterate of truncated gradient descent is a mode of an implicit posterior.* Therefore, in making this approximation, we are not required to take the gradient descent procedure of fast adaptation to convergence. We thus emphasize that the PSD approximation to the curvature provided by KFAC is indeed justifiable even before convergence.\n\n> Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably proof) that this leads to consistent and robust estimator of \\theta.\n\nRegarding the justification of using KFAC and the Laplace approximation to estimate \\theta: We employ the insight from Martens (2014). In summary, for the Laplace approximation, we require a curvature matrix that would ideally be the Hessian. However, it is infeasible to compute the Hessian for all but the simplest models. In its place, we use the KFAC approximation to the Fisher; the Fisher itself can be seen as an approximation to the Hessian as: 1) It corresponds to the expected Hessian under the model's own predictive distribution; and 2) It is equivalent under common loss functions (such as cross-entropy and squared error, which we employ) to the Generalized Gauss Newton (GGN) Matrix (Pascanu & Bengio 2014, Martens 2014). We would note that we are not the first to use the GGN as an approximation to the Hessian (see, for example, Martens 2010, Vinyals & Popev, 2012).\n\nRegarding the computation of the Laplacian loss in practice: In subroutine 4, we replace H-hat with the approximation to the Fisher found in Eq. (2) of Ba et al. (2017).\n\n> It is not clear from the text or experiment how the learning parameters are set.\n\nWe have clarified this in Section 5.2: We chose the regularization weight of 10^-6 via cross-validation; all other parameters are set to the values reported in Finn et al. (2017).\n\n> Comments re: Section 5.1\n\nWe apologize for the unclear diagram in the previous version of the paper. The toy example has been substantially revised in Figure 5 and elaborated in Section 5.1, as per our comment above.\n\n> Sec 5.2: The abstract clams increased performance over MAML but the empirical results do not seem to be significantly better than MAML ?\n\nWe note that Triantafillou et al. 
(2017) in NIPS 2017 reported a similar improvement after MAML was published in ICML 2017, and so the standard seems to be that an improvement of about 1% is publishable.", "The paper reformulates the model-agnostic meta-learning algorithm (MAML) in terms of inference for parameters of a prior distribution in a hierarchical Bayesian model. This provides an interesting and, as far as I can tell, novel view on MAML. The paper uses this view to improve the MAML algorithm. The writing of the paper is excellent. Experimental evaluation is well done against a number of recently developed alternative methods in favor of the presented method, except for TCML which has been excluded using a not so convincing argument. The overview of the literature is also very well done. ", "MAML (Finn+ 2017) is recast as a hierarchical Bayesian learning procedure. In particular the inner (task) training is initially cast as point-wise max likelihood estimation, and then (sec4) improved upon by making use of the Laplace approximation. Experimental evidence of the relevance of the method is provided on a toy task involving a NIW prior of Gaussians, and the (benchmark) MiniImageNet task.\n\nCasting MAML as HB seems a good idea. The paper does a good job of explaining the connection, but I think the presentation could be clarified. The role of the task prior and how it emerges from early stopping (ie a finite number of gradient descent steps) (sec 3.2) is original and technically non-trivial, and is a contribution of this paper. \nThe synthetic data experiment sec5.1 and fig5 is clearly explained and serves to additionally clarify the proposed method. \nRegarding the MiniImageNet experiments, I read the exchange on TCML and agree with the authors of the paper under review. However, I recommend including the references to Munkhdalai 2017 and Sung 2017 in the footnote on TCML to strengthen the point more generically, and show that not just TCML but other non-shallow architectures are not considered for comparison here. In addition, the point made by the TCML authors is fair (\"nothing prevented you from...\") and I would also recommend mentioning the reviewed paper's authors' decision (not to test deeper architectures) in the footnote. This decision is in order but needs to be stated in order for the reader to form a balanced view of methods at her disposal.\nThe experimental performance reported Table 1 remains small and largely within one standard deviation of competitor methods.\n\nI am assessing this paper as \"7\" because despite the merit of the paper, the relevance of the reformulation of MAML, and the technical steps involved in the reformulation, the paper does not eg address other forms (than L-MAML) of the task-specific subroutine ML-..., and the benchmark improvements are quite small. I think the approach is good and fruitful. \n\n\n# Suggestions on readability\n\n* I have the feeling the paper inverts $\\alpha, \\beta$ from their use in Finn 2017 (step size for meta- vs task-training). This is unfortunate and will certainly confuse readers; I advise carefully changing this throughout the entire paper (eg Algo 2,3,4, eq 1, last eq in sec3.1, eq in text below eq3, etc)\n\n* I advise avoiding the use of the symbol f, which appears in only two places in Algo 2 and the end of sec 3.1. This is in part because f is given another meaning in Finn 2017, but also out of general parsimony in symbol use. (could leave the output of ML-... 
implicit by writing ML-...(\\theta, T)_j in the $sum_j$; if absolutely needed, use another symbol than f)\n\n* Maybe sec3 can be clarified in its structure by re-ordering points on the quadratic error function and early stopping (eg avoiding to split them between end of 3.1 and 3.2).\n\n* sec6 \"Machine learning and deep learning\": I would definitely avoid this formulation, seems to tail in with all the media nonsense on \"what's the difference between ML and DL ?\". In addition the formulation seems to contrast ML with hierarchical Bayesian modeling, which does not make sense/ is wrong and confusing.\n\n# Typos\n\n* sec1 second parag: did you really mean \"in the architecture or loss function\"? unclear.\n* sec2: over a family\n* \"common structure, so that\" (not such that)\n* orthgonal\n* sec2.1 suggestion: clarify that \\theta and \\phi are in the same space\n* sec2.2 suggestion: task-specific parameter $\\phi_j$ is distinct from ... parameters $\\phi_{j'}, j' \\neq j}\n* \"unless an approximate ... is provided\" (the use of the subjunctive here is definitely dated :-) )\n* sec3.1 task-specific parameters $\\phi_j$ (I would avoid writing just \\phi altogether to distinguish in usage from \\theta)\n* Gaussian-noised\n* approximation of the it objective\n* before eq9: \"that solves\": well, it doesn't really \"solve\" the minimisation, in that it is not a minimum; reformulate this?\n* sec4.1 innaccurate\n* well approximated\n* sec4.2 an curvature\n* (Amari 1989)\n* For the the Laplace\n* O(n^3) : what is n ?\n* sec5.2 (Ravi and L 2017)\n* for the the \n", "Thank you for your clarifications.\n\n> miniImageNet: I maintain the suggestions put forward in my review. I note that you cite Munkhdalai 2017 in your latest draft https://openreview.net/references/pdf?id=Bye6mda7z\n\n> performance: you write \"We note that Triantafillou et al. (2017) in NIPS 2017 reported a similar improvement after MAML was published in ICML 2017, and so the standard seems to be that an improvement of about 1% is publishable.\"\nI definitely disagree with the argument, which I think is specious. Certainly the absolute value of a performance improvement needs to be considered in realation with the standard deviation on the task. In addition, the fact that a paper with a weak performance improvement was published does not create a \"judicial precedent\" that would validate any further weak improvement as significant.", "> “Sec 5.2: The authors have left out “Mishral et al” from the comparison due to the model being significantly larger than others. Could the authors provide insight into why they did not use the ResNet structure from the tcml paper in their L-MLMA scheme ?”\n\nPlease see our detailed discussion with an author of TCML in the OpenReview comment thread (https://openreview.net/forum?id=BJ_UL-k0b&noteId=r1aR9l5lG). In summary, our contribution is to reinterpret MAML as approximate inference in a hierarchical Bayesian model, rather than to provide an exhaustive empirical comparison over neural network architectures (as the choice of architecture is largely orthogonal to the training loss or algorithm). Furthermore, the majority of other prior few-shot learning methods used the smaller architecture, so we felt that standardizing the architecture would provide a more informative comparison. 
Since we were able to obtain a number for SNAIL/TCML using the same architecture, we believe that this adequately rounds out the comparisons.\n\n> “Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate the MAML as a HB. Given the advancement in gradient based inference for HB the last couple of years (e.g. variational, nested laplace , expectation propagation etc) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly to the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area ?”\n\nWe intend the connection between MAML and HB to provide an avenue to incorporate insights from gradient-based inference, not as an explicit alternative to established inference procedures. To clarify, with the Laplace approximation, we are making the assumption that the posterior over \\phi is a unimodal Gaussian with mean centered at the point estimate computed by a few steps of gradient descent during fast adaptation, and with covariance equal to the inverse Hessian evaluated at that point. However, we are not restricted to this assumption — we could potentially use another inference method (such as the nested Laplace approximation, variational Bayes, expectation propagation, or Hamiltonian Monte Carlo) to compute a more complex posterior distribution over \\phi and to potentially improve performance. We may also incorporate insights from the recent literature on interpreting gradient methods as forms of probabilistic inference (e.g., Zhang & Sun et al. 2017) due the gradient-based nature of our method. This is interesting future work!\n\nIf the reviewer has specific suggestions for related works that present generic inference methods (gradient-based or otherwise) that can reliably deal with high-dimensional stimuli and high-dimensional models (especially those that deal with raw images), we would be grateful to hear of them. \n\n> Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product\n\nFixed — thank you!\n\n\nWe encourage R2 to let us know of any additional questions or concerns about the clarity of the paper (especially things that could make the work clearer to a non-meta-learning audience).\n\n\n=========================================\nReferences\n\nBa (2017). “Distributed Second-Order Optimization using Kronecker-Factored Approximations.” In ICLR 2017.\nMartens (2010). \"Deep learning via Hessian-free optimization.\" In ICML 2010 http://www.cs.toronto.edu/~jmartens/docs/Deep_HessianFree.pdf\nMartens (2014). \"New insights and perspectives on the natural gradient method.\" arXiv preprint arXiv:1412.1193. https://arxiv.org/abs/1412.1193\nPascanu & Bengio (2013). \"Revisiting natural gradient for deep networks.\" arXiv preprint arXiv:1301.3584. https://arxiv.org/abs/1301.3584\nTriantafillou et al. (2017). “Few-Shot Learning Through an Information Retrieval Lens.” In NIPS 2017. https://arxiv.org/abs/1707.02610\nVinyals & Povey (2012). “Krylov Subspace Descent for Deep Learning.” In AISTATS 2012. https://arxiv.org/abs/1111.4259\nZhang & Sun et al. (2017). “Noisy Natural Gradient as Variational Inference.” https://arxiv.org/abs/1712.02390", "We thank R2 for feedback. Regarding R2’s comment on the exclusion of TCML from the miniImageNet results table: Our detailed discussion with an author of TCML is in the OpenReview comment thread (https://openreview.net/forum?id=BJ_UL-k0b&noteId=r1aR9l5lG). 
In summary, our contribution is to reinterpret MAML as approximate inference in a hierarchical Bayesian model, rather than to provide an exhaustive empirical comparison over neural network architectures (as the choice of architecture is largely orthogonal to the training loss or algorithm). Furthermore, the majority of other prior few-shot learning methods used the smaller architecture, so we felt that standardizing the architecture would provide a more informative comparison. Since we were able to obtain a number for SNAIL/TCML using the same architecture, we believe that this adequately rounds out the comparisons.", "We thank R3 for thorough and constructive comments! We have attempted to address them to the best of our ability.\n\nWe agree with R3’s characterization of the paper, but would like to clarify a small point for completeness:\n\n> “In particular the inner (task) training is initially cast as point-wise max likelihood estimation…”\n\nWe cast the task-specific training in the inner loop as maximum a posteriori estimation (instead of max likelihood), in which the induced prior is a result of gradient descent with early stopping (termed “fast adaptation”). In particular, the induced prior serves to regularize the task-specific parameters to initial conditions (the parameter initialization).\n\n> “Regarding the MiniImageNet experiments…”\n\nOur detailed discussion with an author of TCML is in the OpenReview comment thread (https://openreview.net/forum?id=BJ_UL-k0b&noteId=r1aR9l5lG). In summary, our contribution is to reinterpret MAML as approximate inference in a hierarchical Bayesian model, rather than to provide an exhaustive empirical comparison over neural network architectures (as the choice of architecture is largely orthogonal to the training loss or algorithm). Furthermore, the majority of other prior few-shot learning methods used the smaller architecture, so we felt that standardizing the architecture would provide a more informative comparison. Since we were able to obtain a number for SNAIL/TCML using the same architecture, we believe that this adequately rounds out the comparisons.\n\n> The experimental performance reported Table 1 remains small and largely within one standard deviation of competitor methods.\n\nWe note that Triantafillou et al. (2017) in NIPS 2017 reported a similar improvement after MAML was published in ICML 2017, and so the standard seems to be that an improvement of about 1% is publishable.\n\n> before eq9: \"that solves\": well, it doesn't really \"solve\" the minimisation, in that it is not a minimum; reformulate this?\n\nIn the linear regression case, the iterate indeed solves the *regularized* minimization problem (in particular, it is a solution that obtains the best trade-off (wrt the regularization parameter) between minimal objective and regularization costs). However, the iterate indeed does not solve the *unregularized* problem.\n\n> “…the paper does not eg address other forms (than L-MAML) of the task-specific subroutine ML-...,”\n\nWe could potentially use another inference method (such as the nested Laplace approximation, variational Bayes, expectation propagation, or Hamiltonian Monte Carlo) to compute a more complex posterior distribution over task-specific parameters \\phi. 
This is an interesting extension that we leave to future work.\n\n> “# Suggestions on readability” & “# Typos”\n\nMany thanks for catching all of these corrigenda — we’ve corrected them in the revised paper (as follows for the more major points):\n\n- \\alpha, \\beta → \\beta, \\alpha\n- replaced “f” with “E_{x from task} [-\\log p(x | \\theta)]\n- We kept the split of early stopping & the quadratic function between 3.1, 3.2 since 3.2 is “additional material” and 3.1 is already dense. But, thank you for the suggestion.\n- reformulated related work\n- clarified that \\theta and \\phi are in the same space\n- O(n^3) → O(d^3) for d-dimensional Kronecker factor", "We thank the reviewers for their constructive feedback! We have updated the paper as follows:\n\n- Section 3.1: We have clarified that every iterate of truncated gradient descent is a mode of an implicit posterior, and thus the gradient descent procedure during fast adaptation does not need to be taken to convergence.\n\n- Figure 5 & Section 5.1: We have substantially revised the toy example in Section 5. We show that the interpretation of the method as hierarchical Bayes makes it practical to directly sample model parameters in a sinusoid regression task, and that there is increased uncertainty over model parameters when the datapoints for the task are less informative.\n\n- Everywhere: We have incorporated minor reviewer comments regarding typos, clarifications, etc.\n\nWe respond to individual comments in direct replies to the reviewers’ comments.", "We thank the authors of TCML for their comment regarding a comparison to TCML.\n\nWe emphasize that the primary focus of our work is not to perform an exhaustive exploration of neural network architectures for few-shot classification, but instead to reinterpret and propose improvements to the MAML algorithm from a probabilistic perspective. This is the reason why we have chosen to work with the model architecture that, to the best of our knowledge, all prior work in this area that reports on the miniImageNet task (see below for a list) uses with the exceptions of TCML, MetaNetworks and Relation Networks. We thus consider the exploration of more expressive architectures not an omission but a standardization choice.\n\nCertainly, there is likely to be a better architecture for MAML and L-MAML, just as there is almost certainly a better architecture for matching networks, prototypical networks, and the various other meta-learning methods. But the focus of evaluating such meta-learning algorithms is to decouple the question of algorithm design from the question of architecture design. Nevertheless, we agree that scalability is an important criterion for evaluating meta-learning methods.\n\nLastly, we note that a similar method to TCML (https://openreview.net/forum?id=B1DmUzWAW) reports results with a shallow miniImageNet embedding: \"5-way mini-Imagenet: 45.1% and 55.2% (1-shot, 5-shot).\" In future revisions of this work, we will treat this as the comparison point for a temporal-convolution-based meta-learner that employs the standard shallower architecture.\n\n-----------------------------------------------------------------------------------------------------------------------------------------\nPrevious meta-learning methods applied to miniImageNet that employ the architecture of Vinyals et al. (2016):\n\n- Vinyals et al. (2016). \"Matching Networks for One Shot Learning.\" (https://arxiv.org/abs/1606.04080)\n- Ravi & Larochelle (2017). 
\"Optimization as a Model for Few-Shot Learning.\" (https://openreview.net/forum?id=rJY0-Kcll) \n- Finn et al. (2017). \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" (https://arxiv.org/abs/1703.03400)\n- Snell et al. (2017). \"Prototypical Networks for Few-shot Learning.\" (https://arxiv.org/abs/1703.05175)\n- Triantafillou et al. (2017). \"Few-Shot Learning Through an Information Retrieval Lens.\" (https://arxiv.org/abs/1707.02610)\n- Li et al. (2017). \"Meta-SGD: Learning to Learn Quickly for Few-Shot Learning.\" (https://arxiv.org/abs/1707.09835)\n\n-----------------------------------------------------------------------------------------------------------------------------------------\nPrevious meta-learning methods applied to miniImageNet that make use of an alternative architecture:\n\n- Munkhdalai & Yu (2017). \"Meta Networks.\" (https://arxiv.org/abs/1703.00837)\n - Use a \"CNN [with] 5 convolutional layers, each of which is a 3×3 convolution with 64 filters, followed by a ReLU non-linearity, a 2×2 max-pooling layer, a fully connected (FC) layer, and a softmax layer\" (App. A).\n\n- Mishra et al. (2017). \"Meta-Learning with Temporal Convolutions.\" (https://arxiv.org/abs/1707.03141)\n - Use \"14 layers of 4 residual blocks [each with] a series of [three] convolution layers followed by a residual connection and then a 2×2 max-pooling operation\" (App. C).\n\n- Sung et al. (2017). \"Learning to Compare: Relation Network for Few-Shot Learning.\" (https://arxiv.org/abs/1711.06025)\n - On top of the standard Vinyals et al. (2016) architecture, add a relation module that \"consists of two convolutional blocks and two fully-connected layers. Each of convolutional block [sic.] is a 3×3 convolution with 64 filters followed by batch normalisation, ReLU non-linearity and 2×2 maxpooling... The two fully-connected layers are 8 and 1 dimensional, respectively.\" (Section 3.4).", "Dear Authors,\n\nWe much appreciate the contributions made in this paper but would like to point out an issue with the writing / reporting of experiments. In particular, with respect to the reporting on mini-ImageNet results, we wish to draw your attention to the omission of TCML from the results table (https://arxiv.org/abs/1707.03141). To the best of our knowledge, TCML is in fact the current SOTA on this benchmark and seems important to be included to give the reader a complete picture.\n\nA footnote says “We omit TCML (Mishra et al., 2017) as their ResNet architecture has significantly more parameters than the other methods and is thus not comparable. Their 1-shot performance is 55.71 ± 0.99”. We do not believe this is a judicious exclusion. The ability of TCML to work well with a larger architecture is an indication of its ability to extract signal in a more difficult optimization setting, an important characteristic for meta-learning algorithms. Nothing is preventing other work (including yours) from using more expressive architectures. Or if there are underlying limitations that makes such methods limited to smaller models (such as computational complexity or overfitting), this is highly relevant to the study of meta-learning, rather than something to simply be omitted. \n\nNote we personally did test the larger models with not only TCML but also with MAML. Our finding was that MAML was not able to benefit from the larger models, and it ended up overfitting and in fact doing worse than MAML with smaller models.\n\nSincerely, \n\nMostafa, Nikhil, Peter, and Pieter (authors of TCML) \n" ]
[ 6, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJ_UL-k0b", "rJBEH-fVf", "SycLo2FxG", "iclr_2018_BJ_UL-k0b", "iclr_2018_BJ_UL-k0b", "SkauLRaXz", "SycLo2FxG", "Hkv54AYeM", "HJBzB65xf", "iclr_2018_BJ_UL-k0b", "rJOy7ZYxG", "iclr_2018_BJ_UL-k0b" ]
iclr_2018_B1Yy1BxCZ
Don't Decay the Learning Rate, Increase the Batch Size
It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate ϵ and scaling the batch size B∝ϵ. Finally, one can increase the momentum coefficient m and scale B∝1/(1−m), although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large-batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes.
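As a concrete illustration of the recipe described above, here is a minimal Python sketch that converts a conventional step-decay learning-rate schedule into the corresponding increasing-batch-size schedule. The initial learning rate, batch size, decay factor, and boundary epochs below are illustrative placeholders, not the paper's exact settings.

```python
# Minimal sketch, assuming a step schedule with three drops; the specific
# values (0.1, 128, factor 5, epochs 60/120/160) are illustrative placeholders
# rather than the paper's exact hyper-parameters.

def make_schedules(initial_lr=0.1, initial_batch=128, factor=5,
                   boundaries=(60, 120, 160), max_batch=None):
    """Return two functions mapping epoch -> (learning_rate, batch_size)."""

    def phase(epoch):
        # Number of schedule boundaries already passed at this epoch.
        return sum(epoch >= b for b in boundaries)

    def decay_lr(epoch):
        # Conventional schedule: fixed batch size, learning rate divided by
        # `factor` at each boundary.
        return initial_lr / factor ** phase(epoch), initial_batch

    def grow_batch(epoch):
        # Alternative schedule: fixed learning rate, batch size multiplied by
        # the same factor at each boundary (optionally capped, e.g. by memory).
        batch = initial_batch * factor ** phase(epoch)
        if max_batch is not None:
            batch = min(batch, max_batch)
        return initial_lr, batch

    return decay_lr, grow_batch


if __name__ == "__main__":
    decay_lr, grow_batch = make_schedules()
    for epoch in (0, 60, 120, 160):
        print(epoch, decay_lr(epoch), grow_batch(epoch))
```

Both schedules see the same number of training epochs; the second performs far fewer parameter updates because most epochs are run with a much larger batch.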
accepted-poster-papers
Pros: + Nice demonstration of the equivalence between scaling the learning rate and increasing the batch size in SGD optimization. Cons: - While reporting convergence as a function of number of parameter updates is consistent, the paper would be more compelling if wall-clock times were given in some cases, as that will help to illustrate the utility of the approach. - The paper would be stronger if additional experimental results, which the authors appear to have at hand (based on their comments in the discussion) were included as supplemental material. - The results are not all that surprising in light of other recent papers on the subject.
train
[ "SyoR0j7BG", "H17TutI4G", "r1SNNxFlf", "B1i1vxqgz", "SJJrhg5lf", "BkCOZEamG", "SJdPl-KMf", "HkYkRxtzz", "B1FJsgFfG", "BJtuH2Qlf", "rJIErsI1M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "We apologize for any confusion. We do not claim that most big data problems can be solved using single machine SGD; our experiments use distributed (but synchronous) SGD. We will edit the text to clarify that the incentive to use asynchronous training is reduced when the synchronous batch size can be scaled to a substantial fraction of the training set size (as observed in our paper). Although we expect this to be commonly the case, we acknowledge that it is not guaranteed to occur for all models/datasets.\n\nWe will add our additional results reporting wall-clock times on ResNet-50 (likely in an appendix). We will also update Section 5.2 to provide the median of 5 runs for each case shown, and add experiments at a wider range of learning rate values to the appendix.\n\nWe state that parameter updates provide a measure of the training speed if one assumes near-perfect parallelism, since this clarifies how readers should interpret our results depending on their own hardware/scaling efficiency. We note that a number of recent works have achieved extremely good scaling. Eg Goyal et al. [1] report large batch scaling efficiency of 90% while Akiba et al. [2] report scaling efficiency of 80%; both compared to a single node of 8 GPUs. On our own hardware we can increase the batch size from 256 to 16k with >95% scaling efficiency.\n\n[1] \"Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour\", Goyal et al., 2017, arXiv:1706.02677\n[2] \"Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes\", Akiba et al., 2017, arXiv:1711.04325", "I would be very cautious about claims that because we can succeed on ImageNet in an hour, the incentive to care about distributed/asynchronous SGD is \"much reduced\". The world of machine learning is so much bigger than deep convolutional networks for images, and the idea that we can solve all big data problems (or most) via single-machine SGD makes little sense to me.\n\nI would definitely encourage the authors to consider some models/datasets that are not related to images at all. This would be a much more sincere test about generalization, rather than just trying out MNIST.\n\nRE many vs few learning rate values: I am in agreement that main paper figures should focus on clarity. But I do think there needs to be some careful experiments, documented in a supplement, that are more exhaustive. I also especially suggest this with regard to my comment about reporting performance across many random initializations (with several different initialization strategies). From the current results, a careful reader might not know whether to call the conclusions \"reproducible\" or just \"lucky\".\n\nThanks for providing the wallclock speedup measurements. Of course wallclock time is dependent on hardware, but I do like to see comparisons of different methods on the same hardware for assessing practical utility. I would still suggest including these in the paper (with appropriate caveats and clarification of specific hardware used). I would not find \"number of parameter updates\" alone to be persuasive evidence. What if the cost of all those parameter updates is dwarfed by other factors? I would also drop the argument that \"assumes perfect parallelism\", because I've never met a real system that was close to near-perfect parallelism.\n", "The paper analyzes the effect of increasing the batch size in stochastic gradient descent as an alternative to reducing the learning rate, while keeping the number of training epochs constant. 
This has the advantage that the training process can be better parallelized, allowing for faster training if hundreds of GPUs are available for a short time. The theory part of the paper briefly reviews the relationship between learning rate, batch size, momentum coefficient, and the noise scale in stochastic gradient descent. In the experimental part, it is shown that the loss function and test accuracy depend only on the schedule of the decaying noise scale over training time, and are independent of whether this decaying noise schedule is achieved by a decaying learning rate or an increasing batch size. It is shown that simultaneously increasing the momentum parameter and the batch size also allows for fewer parameter updates, albeit at the price of some loss in performance.\n\nCOMMENTS:\n\nThe paper presents a simple observation that seems very relevant especially as computing resources are becoming increasingly available for rent on short time scales. The observation is explained well and substantiated by clear experimental evidence. The main issue I have is with the part about momentum. The paragraph below Eq. 7 provides a possible explanation for the performance drop when $m$ is increased. It is stated that at the beginning of the training, or after increasing the batch size, the magnitude of parameter updates is suppressed because $A$ has to accumulate gradient signals over a time scale $B/(N(1-m))$. The conclusion in the paper is that training at high momentum requires additional training epochs before $A$ reaches its equilibrium value. This effect is well known, but it can easily be remedied. For example, the update equations in Adam were specifically designed to correct for this effect. The mechanism is called \"bias-corrected moment estimate\" in the Adam paper, arXiv:1412.6980. The correction requires only two extra multiplications per model parameter and update step. Couldn't the same or a very similar trick be used to correctly rescale $A$ every time one increases the batch size? It would be great to see the equivalent of Figure 7 with correctly rescaled $A$.\n\nMinor issues:\n* The last paragraph of Section 5 refers to a figure 8, which appears to be missing.\n* In Eqs. 4 & 5, the momentum parameter $m$ is not yet defined (it will be defined in Eqs. 6 & 7 below).\n* It appears that a minus sign is missing in Eq. 7. The update steps describe gradient ascent.\n* Figure 3 suggests that most of the time between the first and second change of the noise scale (approx. epochs 60 to 120) is spent on overfitting. This suggests that the number of updates in this segment was chosen unnecessarily large to begin with. It is therefore not surprising that reducing the number of updates does not deteriorate the test set accuracy.\n* It would be interesting to see a version of figure 5 where the horizontal axis is the number of epochs. While reducing the number of updates allows for faster training if a large number of parallel hardware instances are available, the total cost of training is still governed by the number of training epochs.\n* It appears like the beginning of the second paragraph in Section 5.2 describes figure 1. Is this correct?", "The paper represents an empirical validation of the well-known idea (it was published several times before) \nto increase the batch size over time. 
Inspired by recent works on large-batch studies, the paper suggests to adapt the learning rate as a function of the batch size.\n\nI am interested in the following experiment to see how useful it is to increase the batch size compared to fixed batch size settings. \n\n1) The total budget / number of training samples is fixed. \n2) Batch size is scheduled to change between B_min and B_max\n3) Different settings of B_min and B_max>=B_min are considered, e.g., among [64, 128, 256, 512, ...] or [64, 256, 1024, ...] if it is too expensive.\n4) Drops of the learning rates are scheduled to happen at certain times represented in terms of the number of training samples passed so far (not parameter updates).\n5) Learning rates and their drops should be rescaled taking into account the schedule of the batch size and the rules to adapt learning rates in large-scale settings as by Goyal. ", "## Review Summary\n\nOverall, the paper's core claim, that increasing batch sizes at a linear\nrate during training is as effective as decaying learning rates, is\ninteresting but doesn't seem to be too surprising given other recent work in\nthis space. The most useful part of the paper is the empirical evidence to\nback up this claim, which I can't easily find in previous literature. I wish\nthe paper had explored a wider variety of dataset tasks and models to better\nshow how well this claim generalizes, better situated the practical benefits\nof the approach (how much wallclock time is actually saved? how well can it be\nintegrated into a distributed workflow?), and included some comparisons with\nother recent recommended ways to increase batch size over time.\n\n\n## Pros / Strengths\n\n+ effort to assess momentum / Adam / other modern methods\n\n+ effort to compare to previous experimental setups\n\n\n## Cons / Limitations\n\n- lack of wallclock measurements in experiments\n\n- only ~2 models / datasets examined, so difficult to assess generalization\n\n- lack of discussion about distributed/asynchronous SGD\n\n\n## Significance\n\nMany recent previous efforts have looked at the importance of batch sizes\nduring training, so topic is relevant to the community. Smith and Le (2017)\npresent a differential equation model for the scale of gradients in SGD,\nfinding a linear scaling rule proportional to eps N/B, where eps = learning\nrate, N = training set size, and B = batch size. Goyal et al (2017) show how\nto train deep models on ImageNet effectively with large (but fixed) batch\nsizes by using a linear scaling rule.\n\nA few recent works have directly tested increasing batch sizes during\ntraining. De et al (AISTATS 2017) have a method for gradually increasing batch\nsizes, as do Friedlander and Schmidt (2012). Thus, it is already reasonable to\npractitioners that the proposed linear scaling of batch sizes during training\nwould be effective.\n\nWhile increasing batch size at the proposed linear scale is simple and seems\nto be effective, a careful reader will be curious how much more could be\ngained from the backtracking line search method proposed in De et al.\n\n\n## Quality\n\nOverall, only single training runs from a random initialization are used. It\nwould be better to take the best of many runs or to somehow show error bars,\nto avoid the reader wondering whether gains are due to changes in algorithm or\nto poor exploration due to bad initialization. This happens a lot in Sec. 5.2.\n\nSome of the experimental settings seem a bit haphazard and not very systematic.\nIn Sec. 
5.2, only two learning rate scales are tested (0.1 and 0.5). Why not\nexamine a more thorough range of values?\n\nWhy not report actual wallclock times? Of course having reduced number of\nparameter updates is useful, but it's difficult to tell how big of a win this\ncould be.\n\nWhat about distributed SGD or asynchronous SGD (hogwild)? Small batch sizes\nsometimes make it easier for many machines to be working simultaneously. If we\nscale up to batch sizes of ~ N/10, we can only get 10x speedups in\nparallelization (in terms of number of parameter updates). I think there is\nsome subtle but important discussion needed on how this framework fits into\nmodern distributed systems for SGD.\n\n\n## Clarity\n\nOverall the paper reads reasonably well.\n\nOffering a related work \"feature matrix\" that helps readers keep track of how\nprevious efforts scale learning rates or minibatch sizes for specific\nexperiments could be valuable. Right now, lots of this information is just\nprovided in text, so it's not easy to make head-to-head comparisons.\n\nSeveral figure captions should be updated to clarify which model and dataset\nare studied. For example, when skimming Fig. 3's caption there is no such\ninformation.\n\n## Paper Summary\n\nThe paper examines the influence of batch size on the behavior of stochastic\ngradient descent to minimize cost functions. The central thesis is that\ninstead of the \"conventional wisdom\" to fix the batch size during training and\ndecay the learning rate, it is equally effective (in terms of training/test\nerror reached) to gradually increase batch size during training while fixing\nthe learning rate. These two strategies are thus \"equivalent\". Furthermore,\nusing larger batches means fewer parameter updates per epoch, so training is\npotentially much faster.\n\nSection 2 motivates the suggested linear scaling using previous SGD analysis\nfrom Smith and Le (2017). Section 3 makes connections to previous work on\nfinding optimal batch sizes to close the generalization gap. Section 4 extends\nanalysis to include SGD methods with momentum.\n\nIn Section 5.1, experiments training a 16-4 ResNet on CIFAR-10 compare three\npossible SGD schedules: * increasing batch size * decaying learning rate *\nhybrid (increasing batch size and decaying learning rate) Fig. 2, 3 and 4 show\nthat across a range of SGD variants (+/- momentum, etc) these three schedules\nhave similar error vs. epoch curves. This is the core claimed contribution:\nempirical evidence that these strategies are \"equivalent\".\n\nIn Section 5.3, experiments look at Inception-ResNet-V2 on ImageNet, showing\nthe proposed approach can reach comparable accuracies to previous work at even\nfewer parameter updates (2500 here, vs. ∼14000 for Goyal et al 2017)\n", "We have uploaded an updated manuscript, responding to the comments of the referees. We were delighted that all three reviewers recommended the paper be accepted. As well as fixing some minor typos, the main changes are:\n\n1) We have edited the final paragraph of section 4, to clarify that the performance losses with large batches/momentum coefficients are not resolved by using initialization bias correction suggested by reviewer 3. 
When the momentum coefficient is too large, it takes many epochs for the accumulation to forget old gradients, and this prevents SGD from responding to changes in the loss landscape.\n\n2) We clarify in section 5.3 that we chose not to include wall-clock times, since these are not comparable across different hardware/software frameworks. As we stated in response to reviewer 2, we have confirmed that increasing batch sizes can be used to reduce wall-clock time.\n\n3) We include a brief discussion of asynchronous SGD in the related work section.\n", "We thank the reviewer for their positive review, \n\nWe will edit our discussion of momentum in section 4 to explain the problem more clearly. We are currently running experiments to double check, but we do not believe that the “bias-corrected moment estimate” trick will remove the performance gap when training at very large momentum coefficient. This is for two reasons: \n\n1) When one uses momentum, one introduces a new timescale into the dynamics, the time required for the direction of the parameter updates to change/forget old gradients. When one trains with large batch sizes and large momentum coefficients, this timescale becomes several epochs long. This invalidates the scaling rules, which assume this timescale is negligible. This issue arises throughout training, not just at initialization/after changing the noise scale. \n\n2) The “bias-corrected moment estimate” ensures that the expected magnitude of the parameter update at the start of training is correct, but it does not ensure that the variance in this parameter update is correct. As a result, bias correction introduces a very large noise scale at the start of training, which decays as the bias correction term falls. The same issue will arise if we used bias correction to reset the accumulation during training at a noise scale step; in fact it would temporarily increase the noise scale every time we try to reduce it.\n\nResponding to the minor issues raised: \ni) Our apologies, this should be figure 7b, we will fix it. \nii) The momentum coefficient is defined in the first line of the paragraph following eqns 4/5. \niii) Yes, we will fix this. \niv) We will check our conclusions hold when we reduce the number of epochs here, however we keep to pre-existing schedules in the paper to emphasize that our techniques can be applied without hyper-parameter tuning. \nv) All curves in figure 5 saw the same number of training epochs. \nvi) The first two schedules described in this paragraph match figure 1, however the following two schedules are new.", "We thank the reviewer for their positive assessment of our work. \nTo respond to the comments raised: \n\nThe wall clock time is primarily determined by the hardware researchers have at their disposal; not the quality of the research/engineering they have done. In the paper we choose to focus on the number of parameter updates, because we believe this is the simplest and most meaningful scientific measure of the speed of training. Assuming one can achieve perfect parallelism, the number of parameter updates and the wall clock time are identical. However we can confirm here that, using the increasing batch size trick, we were able to train ResNet-50 to 76.1% validation accuracy on ImageNet in 29 minutes. With a constant batch size, we achieve comparable accuracy in 44 minutes (replicating the set-up of Goyal et al.). 
This significantly under-estimates the gains available, as we only increased the batch size to 16k in these experiments, not 64k as in the paper. \n\nOne of the goals of large batch training is to remove the need for asynchronous SGD, which tends to slightly reduce test set accuracies. Since we are now able to scale the batch size to several thousand training examples and train accurate ImageNet models in under an hour with synchronous SGD, the incentive to use asynchronous training is much reduced. Intuitively, asynchronous SGD behaves somewhat like an increased momentum coefficient, averaging the gradient over recent parameter values. \n\nWe chose to focus on clarity, rather than including many equivalent experiments under different architectures, however we have checked that our claims are also valid for a DNN on MNIST and ResNet-50 on ImageNet. This is also why we do not present an exhaustive range of learning rate scales in section 5.2; we wanted to keep the figures clean and easy to interpret. It's worth noting that our observations also match theoretical predictions. We will update the figure captions to clarify which model/dataset they refer to.", "We thank the reviewer for their positive review. \n\nWe'd like to emphasize that our paper verifies a stronger claim than previous works. While previous papers have proposed increasing the batch size over time instead of decaying the learning rate, our work demonstrates that we can directly convert decaying learning rate schedules into increasing batch size schedules and vice-versa; obtaining identical learning curves on both training and test sets for the same number of training epochs seen. To do so, we replace decaying the learning rate by a factor q by increasing the batch size by the same factor q. This strategy allows us to convert between small and large batch training schedules without hyper-parameter tuning, which enabled us to achieve efficient large batch training, with batches of 65,000 examples on ImageNet. \n\nWe may have misunderstood, but we believe that we provided the experiment suggested in the review in section 5.1 (figures 1,2 and 3). We consider three schedules, each of which decay the noise scale by a factor of 5 after ~60, ~120 and ~160 epochs. Each schedule sees the same number of training examples. The \"decaying learning rate schedule\" achieves this by using a constant batch size of 128 and decaying the learning rate by a factor of 5 at each step. The \"increasing batch schedule\" holds the learning rate fixed and increases the batch size by a factor of 5 at the same steps. Finally the \"hybrid\" schedule is a mix of the two strategies. All three curves achieve identical training curves in terms of number of examples seen (figure 2a), and achieve identical final test accuracy (figure 3a). In this sense, decaying the learning rate and increasing the batch size are identical; they require the same amount of computation to reach the same training/test accuracies. However if one increases the batch size one can benefit from greater parallelism to reduce wall clock time.", "Thank you for your interest in our work!\n\nWe would like to emphasize that the method of increasing the batch size during training instead of decaying the learning rate is not an alternative to large batch training, it is complementary. Large batch training is achieved by increasing the initial learning rate and linearly scaling the batch size, thus holding the SGD noise scale constant. 
Meanwhile we propose increasing the batch size during training at constant learning rate, in order to maintain the same noise scale progression obtained by a decaying learning rate. \n\nWe showed in figure 5 that we could simultaneously increase the initial learning rate and batch size by a factor of 5, and also replace a decaying learning rate with an increasing batch size schedule. This did not cause any reduction in test performance, achieving final test accuracy of 94.5%. In response to your question, we attempted to instead increase the initial learning rate and batch size by a factor of 25 with a constant batch size and decaying learning rate schedule. The test set accuracy drops to 93.2%. We will add these results to the appendix after the review process. \n\nBest wishes,", "The authors claimed that one can achieve equivalent test accuracies by increasing the batch size proportionally instead of decaying the learning rate. They also claimed that one benefit of increasing batch size is it has fewer parameter updates. However, to make the latter claim more convincing, it is strongly suggested adding a comparison with a fixed-size large batch method (say, much larger than the initial batch size of the \"Increasing batch size\" method) in the evaluation setting, since large batch method may have even fewer updates than the \"Increasing batch size\" method. If the large batch method cannot reach same test accuracies after the same number of training epochs despite fewer updates, then the claim that \"Increasing batch size\" method can achieve equivalent test accuracies with fewer updates than fixed batch size method can be solidly confirmed.\n\nI am quite interested in work on changing batch sizes and found one paper introducing ways to dynamically adapt batch size as learning proceeds, called \"On Batch Adaptive Training for Deep Learning: Lower Loss and Larger Step Size\" (also submitted to ICLR 2018). They've done similar work but in a self-adaptive way. Specifically, it proposed a method to dynamically select the batch size for each update so that it may achieve lower training loss after scanning the same amount of training data. However, its batch adaptive method requires more computation costs to decide a proper batch size. Check it out if you are interested.\n" ]
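The responses above (and the reviews' references to Smith & Le, 2017) centre on keeping the SGD noise scale matched between schedules. The following is a minimal sketch of that bookkeeping; the approximate formula and the dataset size and schedule values are illustrative assumptions, not a quote of the paper's equations.

```python
# Minimal sketch of the approximate SGD noise scale discussed above
# (the eps * N / B form attributed to Smith & Le, 2017, with an optional
# momentum correction); the numbers below are illustrative placeholders.

def noise_scale(learning_rate, batch_size, train_set_size, momentum=0.0):
    return learning_rate * train_set_size / (batch_size * (1.0 - momentum))


if __name__ == "__main__":
    N = 50_000  # e.g. a CIFAR-10 sized training set (illustrative)

    # Decaying the learning rate by 5x lowers the noise scale by 5x ...
    print(noise_scale(0.1, 128, N), noise_scale(0.02, 128, N))
    # ... and increasing the batch size by 5x at a fixed learning rate gives
    # the same reduction, which is the sense in which the two schedules match.
    print(noise_scale(0.1, 128, N), noise_scale(0.1, 640, N))

    # Scaling B in proportion to 1/(1 - m) keeps the noise scale fixed when
    # the momentum coefficient is increased.
    print(noise_scale(0.1, 128, N, momentum=0.9),
          noise_scale(0.1, 1280, N, momentum=0.99))
```

Comparing two schedules by their noise-scale progression per epoch is what makes a learning-rate drop and a batch-size increase interchangeable in this framework.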
[ -1, -1, 6, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "H17TutI4G", "HkYkRxtzz", "iclr_2018_B1Yy1BxCZ", "iclr_2018_B1Yy1BxCZ", "iclr_2018_B1Yy1BxCZ", "iclr_2018_B1Yy1BxCZ", "r1SNNxFlf", "SJJrhg5lf", "B1i1vxqgz", "rJIErsI1M", "iclr_2018_B1Yy1BxCZ" ]
iclr_2018_HyMTkQZAb
Kronecker-factored Curvature Approximations for Recurrent Neural Networks
Kronecker-factored Approximate Curvature (Martens & Grosse, 2015) (K-FAC) is a 2nd-order optimization method which has been shown to give state-of-the-art performance on large-scale neural network optimization tasks (Ba et al., 2017). It is based on an approximation to the Fisher information matrix (FIM) that makes assumptions about the particular structure of the network and the way it is parameterized. The original K-FAC method was applicable only to fully-connected networks, although it has been recently extended by Grosse & Martens (2016) to handle convolutional networks as well. In this work we extend the method to handle RNNs by introducing a novel approximation to the FIM for RNNs. This approximation works by modelling the covariance structure between the gradient contributions at different time-steps using a chain-structured linear Gaussian graphical model, summing the various cross-covariances, and computing the inverse in closed form. We demonstrate in experiments that our method significantly outperforms general purpose state-of-the-art optimizers like SGD with momentum and Adam on several challenging RNN training tasks.
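As background for the approximation described above, the block below sketches the original Kronecker-factored approximation of Martens & Grosse (2015) for a single fully-connected layer, which the present work extends to the recurrent case by additionally modelling how the per-time-step gradient contributions co-vary. The notation is illustrative rather than a quote of the paper.

```latex
% Background sketch (single fully-connected layer, following Martens & Grosse,
% 2015): a denotes the layer's input activations and g the back-propagated
% gradient with respect to its pre-activations, so the weight gradient is
% g a^T. The layer's Fisher block is approximated by a Kronecker product,
% whose inverse factorises into two much smaller matrix inverses.
\begin{equation}
  F_\ell
  \;=\; \mathbb{E}\!\left[ (a \otimes g)(a \otimes g)^{\top} \right]
  \;\approx\; \mathbb{E}\!\left[ a a^{\top} \right] \otimes \mathbb{E}\!\left[ g g^{\top} \right]
  \;=\; A_\ell \otimes G_\ell,
  \qquad
  F_\ell^{-1} \;\approx\; A_\ell^{-1} \otimes G_\ell^{-1}.
\end{equation}
```

The recurrent extension has to account for the fact that each weight matrix receives a gradient contribution at every time-step, which is where the chain-structured linear Gaussian model over those contributions comes in.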
accepted-poster-papers
This clearly written paper extends the Kronecker-factored approximate curvature optimizer to recurrent networks. Experiments on Penn Treebank language modeling and training of differentiable neural computers on a repeated copy task show that the proposed K-FAC optimizers are stronger than SGD, Adam, and Adam with layer normalization. The most negative reviewer objected to a lack of theoretical error bounds on the approximations made, but the authors successfully argue that obtaining such bounds would require making assumptions that are likely to be violated in practice, and that strong empirical performance on real tasks is sufficient justification for the approximations. Pros: + "Completes" K-FAC training by extending it to recurrent models. + Experiments show effects of different K-FAC approximations. Cons: - The algorithm is rather complex to implement.
train
[ "B1IfZi4mM", "rk1lre5xG", "HkH1mlrVz", "ryIhKEFxM", "Sk-wrg9gM", "Hy60-o47f", "SyubZTI7z", "Syd3NrumM", "BkyTlaL7G", "SyxdyRI7G", "S18lVjVQz", "rJ_n-jNQz", "S1RLEw4xf" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for your detailed comments. We will address each of your major points in the sections below, followed by your remaining questions/comments.\n\n\nEmpirical / theoretical analysis of approximation quality\n=========================\n\nA detailed discussion, empirical study, and analysis of the main approximation assumption (independence of the a’s and g’s) used in K-FAC and its derivatives (including this work) is contained in the original K-FAC paper. However, these are not approximations bounds in the sense you likely mean. \n\nDue to the mathematically intractable nature of neural networks it is almost certainly impossible to provide such theoretical bounds. Moreover, for each of these approximating assumptions, it is quite likely that there exists some artificially constructed model and dataset pair where they would be strongly violated. And they are almost surely violated for real models and datasets as well, just to a lesser degree.\n\nAn empirical study of each of these approximations would be interesting, but there are very many of them and so this would be a large undertaking. We felt that this was outside of the scope of a conference paper, especially given that our manuscript was already on the long side. Instead, we decided to evaluate the quality of our approximations in the only sense in which matters in practice: how well they translate into optimization performance on real tasks.\n\n\nRE prioritization of tractability over approximation quality\n========================\n\nApproximations are often made for the sake of tractability, with little justification beyond how well their associated algorithms perform in practice. For example, the diagonal approximations that drive most practical second-order optimization schemes (like Adam/RMSprop) are made purely for practical reasons, with very little theoretical or empirical justification. And as one can see from the figures in the original K-FAC paper (see Figure 2 of Martens & Grosse, 2015), the true curvature matrix is highly non-diagonal in neural networks, so the diagonal approximation is indeed quite severe/inaccurate. \n\nWhile the approximations proposed in our paper are greater in sheer number, their sum total is still far less severe than a diagonal approximation. (Diagonal approximations take the components of the gradient to be independent, thereby completely giving up on trying to model its inter-component statistical structure.) \n\nIn designing our approximations we looked for the mildest possible ones that preserved the tractability properties we need. Ultimately, tractability has to be the overriding consideration when designing algorithms that can be used in practice. In one of your points you question our use of a linear-Gaussian model to describe the dependencies between the w_t’s. However, the only obvious alternative to this, which would preserve tractability (in the context of the other approximations being made), is to neglect the dependencies between the w_t’s, thus treating them as statistically independent. Should we give up on modeling these dependencies simply because the only tractable model we are aware of has no theoretical guarantees in terms of its accuracy?\n\nPerhaps better tractable approximations exist and could be the subject of future research. Indeed, we can’t prove that they don’t exist, and would be excited to learn about them. However, we feel that the onus should not be on us to provide such a proof. 
Our contribution is a constructive existence proof of a non-obvious approximation to the Fisher of an RNN, which is a) much less severe than existing approaches (e.g. diagonal approximations), is b) validated on real data, and is c) useful in practice. It feels like this should be good enough for a conference paper.\n\nAnother very important point to keep in mind is that the approximations need not be particularly accurate for them to be useful in optimization. Consider the analogy to statistical data modeling. People frequently use primitive statistical models (e.g. linear-Gaussian) to describe data distributions that their models cannot possibly ever capture faithfully. Nonetheless, these models have some predictive power and utility insofar as there are some aspects of the true underlying data-generating processes that can be described (however approximately) by such a simple model. Our situation is analogous. Our approximations, while they are clearly imperfect, allow us to capture enough of the statistical structure of the gradients that the resulting approximate Fisher still has some utility. They could be wildly inaccurate in absolute terms and still be useful for our purposes.\n\n(continued in the next reply)", "\nSummary of the paper\n-------------------------------\n\nThe authors extend the K-FAC method to RNNs. Due to the nature of BPTT, the approximation that the activations 'a' are independent from the gradients 'Ds' doesn't hold anymore and thus other approximations have to be made. They present 3 ways of approximating F, and show optimization results on 3 datasets, outperforming ADAM in both number of updates and computation time.\n\nClarity, Significance and Correctness\n--------------------------------------------------\n\nClarity: Above average. The mathematical notation is overly verbose, which makes the paper harder to understand. Note that it is not only the authors' fault, since they followed the notation of the other K-FAC papers.\nFor instance, the notation goes from 'E[xy^T]' to 'cov(x, y)' to 'V_{x,y}'. I don't think introducing the 'cov' notation helps with the understanding of the paper (unless they explicitly wanted to stress that the covariance of the gradients of the outputs of the model is centered). Also the 'V' in equation (4) could be confused with the 'V' in the first equation. Moreover, for the gradients with respect to the activations, we go from 'dL/ds' to 'Ds' to 'g', and for the weights we go from 'dL/dW' to 'DW' to 'w'. Why not keep the 'Ds' and 'Dw' notation throughout the paper, and define Dx as vec(dL/dx)?\n\nSignificance: This paper aims at helping with the optimization of RNNs and is thus an important contribution for our community.\n\nCorrectness: The paper is technically correct.\n\nQuestions\n--------------\n\n1. In figure 1, how does it compare to Adam instead of SGD? I think it would be a fairer comparison since SGD is rarely used to train RNNs (as RMSprop and ADAM might help with the vanishing/exploding gradients problem). Also, does the SGD baseline have momentum (since your method does)?\n2. In all experiments, what do the validation / testing curves look like?\n3. How does it compare to different reparametrization techniques, such as Layer Normalization or Batch Normalization?\n\nPros\n------\n\n1. This paper completes the K-FAC family.\n2. It addresses the optimization of RNNs, which is an important research direction in our field.\n3. 
It shows different levels of approximations of the Fisher, with the corresponding performances.\n\nCons\n-------\n\n1. No validation / test curves for any experiments, which makes it hard to assess if one should use this method in practice or not.\n2. The notation is a bit verbose and can become confusing.\n3. Small experimental setting (only PTB and DNC).\n\nTypos\n--------\n\n1. Sec 1, par 5: \"a new family curvature\" -> \"a new family of curvature\"", "\nNotation: \n\nThanks for changing the notation, it is clearer in my opinion. \n\"Unfortunately, it is not feasible to define DZ = vec(dL/DZ)\": Oh right, sorry!\n\nQuestions:\n\n1. Thanks for the references, I definitely need to take a look. With the Adam curve, it is now clear that SGD is (or was) indeed the optimizer to use on this dataset.\n\n2. I totally agree with you and really like your distinction between \"optimization benchmarks\" and \"learning benchmarks\". However, I still think that adding the validation curves (as you did) is quite useful for people willing to use your method in a learning benchmark setup. Even if K-FAC might require a bit more regularization than the traditional SGD, it might still provide them some gains in training speed.\n\n3. It is also really nice to see that K-FAC works better than SGD with layer normalization. It is definitely a good argument in favor of K-FAC.\n\nGiven the elements added to the paper and the insightful answers to my questions, I will change my grade for the paper.", "This paper extends the Kronecker-factored Approximate Curvature (K-FAC) optimization method to the setting of recurrent neural networks. The K-FAC method is an approximate 2nd-order optimization method that builds a block diagonal approximation of the Fisher information matrix, where the block diagonal elements are Kronecker products of smaller matrices. \n\nIn order to approximate the Fisher information matrix for RNNs, the authors assume that the derivative of the loss function with respect to each weight matrix at each time step is independent of the length of the sequence, that these derivatives are temporally homogeneous, that the input and derivatives of the output are independent across every point in time, and that either the one-step cross-covariance of these derivatives is symmetric or that the training sequences are effectively infinite in length. Based on these assumptions, the authors show that the Fisher information can be reduced into a form in which the derivatives of the weight matrices can be approximated by a linear Gaussian graphical model and in which the approximate 2nd order method can be efficiently carried out. The authors compare their method to SGD on two language modeling tasks and against Adam for learning differentiable neural computers.\n\nThe paper is relatively clear, and the authors do a reasonable job of introducing related work of the original K-FAC algorithm as well as its extension to CNNs before systematically deriving their method for RNNs. The problem of extending the K-FAC algorithm is natural, and the steps taken in this paper seem natural yet also original and non-trivial. \n\nThe main issue that I have with this paper is the lack of theoretical justification or even intuition for the many approximations carried out in the course of approximating the Fisher information matrix. In many instances, it seemed like these approximations were made purely for convenience and tractability without much regard for (even approximate) correctness. 
The quality of this paper would be greatly strengthened if it had some bounds on approximation error or even empirical results testing the validity of the assumptions in the paper. Moreover, the experiments do not demonstrate levels of statistical significance in the results, so it is difficult to assert the practical significance of this work. \n\nSpecific comments and questions\nPage 2, \"r is is\". Typo.\nPage 2, \"DV\". I found the introduction of V without any explanation to be confusing.\nPage 2, \"P_{y|x}(\\theta)\". The relation between P_{y|x}(\\theta) and f(x,\\theta) is never explained.\nPage 3, \"common practice of computing the natural gradient as (F + \\lambda I) \\nabla h instead of F^{-1} \\nabla h\". I don't see how the former can serve as a replacement for the latter.\nPage 3, \"approximate g and a as statistically independent\". Even though K-FAC already exists, it would be good to explain why this assumption is reasonable, since similar assumptions are made for the work presented in this paper.\nPage 4, \"This new approximation, called \"KFC\", is derived by assuming....\". Same as previous comment. It would be good to briefly discuss why these assumptions are reasonable.\nPage 5, Independence of T and w_t's, temporal homogeneity of w_t's, and independence between a_t's and g_t's. I can see why these are convenient assumptions, but why are they reasonable? Moreover, why is it further natural to assume that A and G are temporally homogeneous as well?\nPage 7, \"But insofar as the w_t's ... encode the relevant information contained in these external variables, they should be approximately Markovian\". I am not sure what this means.\nPage 7, \"The linear-Gaussian assumption meanwhile is a more severe one to make, but it seems necessary for there to be any hope that the required expectations remain tractable\". I am not sure that this is a good enough justification for such an idea, unless there are compelling approximation error bounds. \nPage 8, Option 1. In what situations is it reasonable to assume that V_1 is symmetric? \nPages 8-9, Option 2. What is a good finite sample size in which the assumption that the training sequences are infinitely long is reasonable in practice? Can the error |\\kappa(x) - \\zeta_T(x)| be translated into a statement on the approximation error?\nPage 9, \"V_1 = V_{1,0} = ...\". Typos (that appear to have been caught by the authors already).\nPage 9, \"The 2nd-order statistics ... are accumulated through an exponential moving average during training\". How sensitive is the performance of this method to the decay rate of the exponential moving average? \nPage 10, \"The additional computations required to get the approximate Fisher inverse from these statistics ... are performed asynchronously on the CPU's\". I find it a bit unfair to compare SGD to K-FAC in terms of wall clock time without also using the extra CPU's for SGD as well (e.g. via Hogwild or synchronous parallel SGD).\nPage 10, \"The hyperparameters of our approach...\". What is the sensitivity of the experimental results to these hyperparameters? Moreover, how sensitive are the results to initialization?\nPage 10, \"we found that each parameter update of our method required about 80% more wall-clock time than an SGD update\". How much of this is attributed to the fact that the statistics are computed asynchronously?\nPages 10-12, Experiments. There are no error bars in any of the plots, so it is impossible to ascertain the statistical significance of any of these results. 
\nPage 11: Figure 2. Where is the Adam batchsize 50 line in the left plot? Why did the Adam batchsize 200 line disappear halfway through the right plot?\n \n\n\n ", "In this paper, the authors present a second-order method that is specifically designed for RNNs. The paper overall is well-written and I enjoyed reading the paper. \n\nThe main idea of the paper is to extend the existing Kronecker-factored algorithms to RNNs. In order to obtain a tractable formulation, the authors impose certain assumptions and provide detailed derivations. Even though the gain in the convergence speed is not very impressive and the algorithm is quite complicated and possibly not very accessible to deep learning practitioners, I still believe this is a novel and valuable contribution and will be of interest to the community. \n\nI only have some minor corrections:\n\n1) Sec 2.1: typo \"is is\"\n2) Sec 2.2: typo \"esstentiallybe\"\n3) Sec 2.2: (F+lambda I) --> should be inverse\n4) The authors should include a proper conclusion", "Answers to specific questions not addressed above:\n=======================\n\nPage 2, “DV”: V is a free variable used to define the D[…] notation. We have added a clarification of this in the text.\n\nPage 2, “p_{y|x}(\\theta)”: p is defined in relation to f and L via the equation -log p(y|x,\\theta) = -log r(y|f(x,\\theta)) = L(y, f(x,\\theta)) near the start of Section 2.1. We will get rid of the “p” notation to simplify things.\n\nPage 3, \"common practice…”: Sorry, this was a typo. It should have read \"(F + \\lambda I)^{-1} \\nabla h instead of F^{-1} \\nabla h”\n \nPage 3, “approximate g and a”: This is the central approximation of the K-FAC approach and is discussed in the original paper (see Section 3.1 of Martens & Grosse [2015]). It is shown that the approximation is equivalent to neglecting the higher-order cumulants of the a’s and g’s, or equivalently assuming that they are Gaussian distributed. We will add a sentence or two pointing this out. \n\nAgain though, this justification is primarily a statistical way to interpret an approximation that is made for the sake of algorithmic tractability. It seems likely that it could be violated to an arbitrarily large degree by specially constructed examples, which is a possibility that Martens & Grosse acknowledge in the original K-FAC paper. Despite this, it works well enough in practice to be a good alternative to diagonal approximations.\n\nPage 7, “But insofar as the w_t’s…”: This is saying that a process with hidden state will behave in an approximately Markovian way if the observed state contains most of the information of the hidden state. (If it contains *all* of the information of the hidden state then it is exactly Markovian.)\n\nPage 8, “Option 2…”: There is no single sequence length which will make this approximation accurate in practice. It will strongly depend on how close the temporal autocorrelation is to 1.\n\nThe expression measures error in the eigenvalues. This can be translated back to a bound on the error in F (induced by this particular approximation) by just pre and post-multiplying by U and U^\\top respectively. 
But this doesn’t actually do anything and just results in a more cluttered expression that is harder to interpret.\n\nThe purpose of this analysis was merely to establish the nature of the relationship between T, the temporal autocorrelations, and the approximation error (due to this particular part of the approximation), up to a proportionality constant.\n\nPage 9, “The 2nd-order statistics…”: We used the same setting of 0.95 for the decay constant in all of our experiments. This was the same value used in the previous papers on K-FAC. \n\nPage 10, “The additional computations…”: For large RNNs like the ones we trained in our experiments, computing gradients on the CPU tends to be about 3-4 times slower than on the GPU. Thus we suspect that using the extra CPU resources for gradient computations would have provided only marginal improvement to SGD, especially when one accounts for the fact that SGD benefits considerably less from using larger minibatches than K-FAC does. \n\nAlso, the reason we performed the inverse computations on the CPUs for K-FAC is that the GPU implementations of the required matrix-decomposition operations (inverse, SVD, eigen-decomposition, etc.) are surprisingly slow compared to the CPU. This may be due to the more serial nature of such computations. We weren’t trying to give K-FAC an unfair advantage.\n\nPage 10, “The hyperparameters…”: Good settings of the hyperparameters are crucial for good performance, for both our method and the baselines we compared to. The results are not particularly sensitive to the exact values, since performance varies as a relatively smooth continuous function of the hyperparameter settings (as it does with any reasonable method).\n\nLikewise the networks must be initialized carefully for *any* of the optimization methods to do well. (Each method used the same initialization in our experiments.) \n\nPages 10-12, “Experiments.” Error bars are almost never included for optimization benchmarks of standard supervised neural network training. This is because the training curves tend to be very predictable and well behaved across multiple seeds. This is different from the situation in deep reinforcement learning for example, where the random and sparse nature of exploration introduces a lot of variability.\n\nPage 11, “Figure 2”: Sorry about this. Adam with batch size 50 is about 4 times slower than with batch size 200 in terms of per-update progress, which is why the Adam batch size 50 line does not appear in the left plot. The black line gets to 1.4 bits-per-character after 22,000 updates. The experiments for batch size 200 terminated early because we used the same total number of updates for each configuration. We will update the figures in the next revision with an extended run.\n", "The key distinction to understand here is the difference between \"optimization benchmarks\" and \"learning benchmarks\".\n\nAn optimization benchmark is concerned with the rate of empirical loss minimization, i.e. optimization performance. For an optimization benchmark to be valid it must use the same objective function for each optimizer. It is, by definition, not concerned with performance on any other objective than the one which is being optimized. (In some sense it cannot be without becoming an incoherent concept.)\n\nLearning benchmarks, meanwhile, test how well one can train a model that generalizes to the test set. Performance is measured using the test loss. 
In such benchmarks, the optimizer, the regularization, and even details of the model itself, can all be varied.\n\nTo properly assess the usefulness of K-FAC within machine learning we would have to run a comprehensive learning benchmark where, given a fixed dataset, the regularization (and possibly the model too) could be tuned for each optimizer. Due to the known interaction between optimization methods and regularization, the need to do this optimizer-specific tuning seems unavoidable for the test to be fair. Moreover, the standard models and regularization configs that we use in our experiments were already tuned (by their original authors) to give good generalization performance with SGD.\n\nSimply looking at the test curves after running an optimization benchmark is a very poor substitute for a proper learning benchmark. Indeed, it seems impossible to design a single experiment that can simultaneously function as both an optimization and learning benchmark, since the former requires the use of a fixed objective function, while the latter needs the objective to be varied.\n\nBecause this paper is about optimization we stuck to optimization benchmarks. While a comprehensive learning benchmark would certainly be valuable (not just to assess the usefulness of K-FAC, but other 2nd-order methods as well), we believe it is out of scope for this work.\n\n\nDuvenaud, David, Dougal Maclaurin, and Ryan Adams. \"Early stopping as nonparametric variational inference.\" Artificial Intelligence and Statistics. 2016.\n\nHardt, Moritz, Benjamin Recht, and Yoram Singer. \"Train faster, generalize better: Stability of stochastic gradient descent.\" arXiv preprint arXiv:1509.01240 (2015).\n\nKeskar, Nitish Shirish, and Richard Socher. \"Improving Generalization Performance by Switching from Adam to SGD.\" arXiv preprint arXiv:1712.07628 (2017).\n\nWilson, Ashia C., et al. \"The Marginal Value of Adaptive Gradient Methods in Machine Learning.\" arXiv preprint arXiv:1705.08292 (2017).\n\n\n3.\n\nIn our latest revision we have included additional benchmark experiments suggested by the reviewer with the Adam optimizer and layer-normalization. While Adam outperforms SGD in the first few epochs, SGD obtains a lower loss at the end of training. We found that layer-normalization helps speed up Adam considerably, but it hurts the SGD performance. Such an observation is consistent with previous findings. In comparison, our proposed method significantly outperforms both the Adam and the SGD baselines even with the help of layer-normalization. \n
Because of the way that gradient quantities appear in complex expressions in our paper (often multiple times in the same expression), this shortened notation seems necessary to avoid producing very long and ugly expressions that are hard to parse. Unfortunately, it is not feasible to define DZ = vec(dL/DZ), since we need to use the non-vectorized version at different points. \n\n\nQuestions\n------------\n\n1. \n\nSGD has become the goto optimizer for these PTB tasks due to its superior generalization properties (Merity et al, 2017; Wilson et al. 2017), which is why we used it in our experiments. But since our paper is not concerned with generalization (see our answer to your second question below) there is a good argument for using Adam as a second baseline, so we will include this in an upcoming revision of the manuscript.\n\nAlso, we would observe that diagonal methods like RMSProp / Adam likely won’t do anything to address the vanishing or exploding gradients problem in RNNs (as suggested in your comment). This is because the parameters are shared across time, and contributions from all the time-steps are added together before any preconditioning is applied. \n\nThe same argument also applies to a non-diagonal method like K-FAC. However, if the gradient contributions from different time-steps happen to land inside of distinct subspaces of the parameter space, then a non-diagonal method like K-FAC may still help with vanishing/exploding gradients by individually rescaling each of these contributions. (See Section 3.3.1 of J Martens’ thesis http://www.cs.toronto.edu/~jmartens/docs/thesis_phd_martens.pdf).\n\nMerity, Stephen, Nitish Shirish Keskar, and Richard Socher. \"Regularizing and optimizing LSTM language models.\" arXiv preprint arXiv:1708.02182 (2017).\n\nWilson, Ashia C., et al. \"The Marginal Value of Adaptive Gradient Methods in Machine Learning.\" arXiv preprint arXiv:1705.08292 (2017).\n\n\n2. \n\nIn our experiments K-FAC did overfit more than SGD. The final perplexity values on the PTB tasks were about 5-8 points higher. Please see Appendix D in the latest revision for the generalization performance.\n\nThe reasons why we didn’t present test performance in the existing version of the paper, and why we stand by this decision, are discussed below. \n\nThe tendency for SGD w/ early-stopping to self-regularize is well-documented, and there are many compelling theories about why this happens (e.g. Duvenaud et al., 2016 ; Hardt et al, 2015). It is also well-known that 2nd-order methods, including K-FAC and diagonal methods like Adam/RMSprop, don’t self-regularize nearly as much (e.g. Wilson et al, 2017; Keskar et al, 2017). \n\nBut just because a method like K-FAC doesn’t self-regularize as much as SGD, this doesn’t mean that it isn’t of practical utility. (Otherwise diagonal 2nd-order methods like Adam and RMSprop wouldn’t be as widely used as they are.) Implicit self-regularization of the form that SGD has can always be replaced by *explicit* regularization (i.e. modification of the loss) and/or model modifications. Moreover, in the online or large-data setting, where each example is processed only once, there is no question of generalization gap because the population loss is directly (stochastically) optimized. 
This online setting is encountered frequently in language modeling tasks, and our K-FAC method is particularly relevant for such tasks.\n\n(continued in next reply)", "We have updated the paper based on the suggestions of the reviewers.\n\nThis revision doesn't contain any of the planned updates/changes to the experimental results. These will come in the next revision. ", "Thanks for your comments. We have corrected the errors you pointed out and added back in the conclusion, which was originally cut for space considerations. These will appear in our next revision (to be posted soon).\n\nWith regards to convergence speed, we feel that the gains over SGD/Adam are significant. While wall-clock time wasn’t improved substantially in the DNC experiment (Figure 3), it was on the first two experiments on Penn-TreeBank (Figures 1 and 2). From those latter two figures one can clearly see that SGD/Adam slow down at a significantly higher loss than our method (almost to the point of plateauing).\n\nWhile we agree that the method is challenging to implement, we have a TensorFlow implementation ready for public release. We will make this available as soon as we can while respecting the anonymity of the reviewing process.", "RE Intuitive justifications for approximations\n======================\n\nSeveral of the key approximations we used were given intuitive justifications. For example, we justified the use of a chain-structured model for the w_t’s by pointing out that they are produced by a process (forward evaluation followed by back-prop) that has a similar sequential chain structure. We also provided intuitive justification and some preliminary analysis for Option 2.\n\nHowever, several of the approximations were not given intuitive justifications, as you point out, and so we will add the following snippets of text to the respective sections. \n\n- Independence of T and the w_t's is a reasonable approximation assumption to make because 1) for many datasets T is constant (which formally implies independence), and 2) even when T varies substantially, shorter sequences will typically have similar statistical properties to longer ones (e.g. short paragraphs of text versus longer paragraphs).\n\n- Temporal homogeneity is a pretty mild approximation, and is analogous to the frequently used “steady-state assumption” from dynamical systems. Essentially, it is the assumption that the Markov chain defined by the system ``mixes\" and reaches its equilibrium distribution. If the system has any randomness, and its inputs reach steady-state, the steady-state assumption is quite accurate for states sufficiently far from the beginning of the sequence (which will be most of them).\n\n- V1 is symmetric iff \\hat{\\Psi} is symmetric. And as shown in the proof of Proposition 1 (see Appendix A.1) \\hat{\\Psi} has the interpretation of being the transition matrix of an LGGM which describes the evolution of “whitened” versions of the w_t’s (given by \\hat{w_t} = V_0^{-1/2} w_t). Linear dynamical systems with symmetric transition matrices arise frequently in machine learning and related areas (Huang et al., 2016; Hazan et al., 2017), particularly because of the algorithmic techniques they enable. Intuitively, a symmetric transition matrix allows one to model exponential decay of different basis components of the signal over time, but not rotations between these components (which are required to model sinusoidal/oscillating signals). \n\nHuang, Wenbing, et al. 
\"Sparse coding and dictionary learning with linear dynamical systems.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.\n\nHazan, Elad, Karan Singh, and Cyril Zhang. \"Learning linear dynamical systems via spectral filtering.\" Advances in Neural Information Processing Systems. 2017.\n\n(continued in next reply)", "We found a typo in Section 3.5.4 which may confuse the reviewers.\n\nNear the top of that section the equation should be:\n\nV_1 = V_{1, 0} = cov ( w_1, w_0) = cov ( \\Psi w_0 + \\epsilon_1, w_0) = \\Psi cov ( w_0, w_0) + cov ( \\epsilon_1, w_0) = \\Psi V_0 + 0 = \\Psi V_0. \n\nSorry for any confusion this may have caused." ]
[ -1, 7, -1, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ryIhKEFxM", "iclr_2018_HyMTkQZAb", "SyubZTI7z", "iclr_2018_HyMTkQZAb", "iclr_2018_HyMTkQZAb", "rJ_n-jNQz", "BkyTlaL7G", "iclr_2018_HyMTkQZAb", "rk1lre5xG", "iclr_2018_HyMTkQZAb", "Sk-wrg9gM", "B1IfZi4mM", "iclr_2018_HyMTkQZAb" ]
iclr_2018_ByeqORgAW
Proximal Backpropagation
We propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit instead of explicit gradient steps to update the network parameters during neural network training. Our algorithm is motivated by the step size limitation of explicit gradient descent, which poses an impediment for optimization. ProxProp is developed from a general point of view on the backpropagation algorithm, currently the most common technique to train neural networks via stochastic gradient descent and variants thereof. Specifically, we show that backpropagation of a prediction error is equivalent to sequential gradient descent steps on a quadratic penalty energy, which comprises the network activations as variables of the optimization. We further analyze theoretical properties of ProxProp and in particular prove that the algorithm yields a descent direction in parameter space and can therefore be combined with a wide variety of convergent algorithms. Finally, we devise an efficient numerical implementation that integrates well with popular deep learning frameworks. We conclude by demonstrating promising numerical results and show that ProxProp can be effectively combined with common first order optimizers such as Adam.
accepted-poster-papers
Pros: + Clear, well-written paper that tackles an interesting problem. + Interesting potential connections to other approaches in the literature such as Carreira-Perpiñán and Wang, 2014 and Taylor et al., 2016. + Paper shows good understanding of the literature, has serious experiments, and does not overstate the results. Cons: - Theory only addresses gradient descent, not stochastic gradient descent. - Because the optimization process is similar to BFGS, it would make sense to have an empirical comparison against some second-order method, even though the proposed algorithm is more like standard backpropagation. This paper is a nice first step in an interesting direction, and belongs in ICLR if there is sufficient space.
train
[ "BkX4GPilz", "HJGe_n84f", "rJWf9IOgM", "rJX7hXKgG", "rk6DsGhWM", "rJ42Kf2-z", "By3PKf2-f", "B1vCOGhWG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary:\n\nUsing a penalty formulation of backpropagation introduced in a paper of Carreira-Perpinan and Wang (2014), the current submission proposes to minimize this formulation using explicit step for the update of the variables corresponding to the backward pass, but implicit steps for the update of the parameters of the network. The implicit steps have the advantage that the choice of step-size is replaced by a choice of a proximity coefficient, which the advantage that while too large step-size can increase the objective, any value of the proximity coefficient yields a proximal mapping guaranteed to decrease the objective.\nThe implicit are potentially one order of magnitude more costly than an explicit step since they require\nto solve a linear system, but can be solved (exactly or partially) using conjugate gradient steps. The experiments demonstrate that the proposed algorithm are competitive with standard backpropagation and potentially faster if code is optimized further. The experiments show also that in on of the considered case the generalization accuracy is better for the proposed method.\n\nSummary of the review: \n\nThe paper is well written, clear, tackles an interesting problem. \nBut, given that the method is solving a formulation that leverages second order information, it would seem reasonable to compare with existing techniques that leverage second order information to learn neural networks, namely BFGS, which has been studied for deep learning (see the references to Li and Fukushima (2001) and Ngiam et al (2011) below).\n\nReview:\n\nUsing an implicit step leads to a descent step in a direction which is different than the gradient step.\nBased on the experiment, the step in the implicit direction seems to decrease faster the objective, but the paper does not make an attempt to explain why. The authors must nonetheless have some intuition about this. Is it because the method can be understood as some form of block-coordinate Newton with momentum? It would be nice to have an even informal explanation.\n\nSince a sequence of similar linear systems have to be solved could a preconditioner be gradually be solved and updated from previous iterations, using for example a BFGS approximation of the Hessian or other similar technique. This could be a way to decrease the number of CG iterations that must done at each step. Or can this replaced by a single BFGS style step?\n\nThe proposed scheme is applicable to the batch setting when most deep network are learned using stochastic gradient type methods. What is the relevance/applicability of the method given this context?\n \nIn fact given that the proposed scheme applies in the batch case, it seems that other contenders that are very natural are applicable, including BFGS variants for the non-convex case (\n\nsee e.g. Li, D. H., & Fukushima, M. (2001). On the global convergence of the BFGS method for nonconvex unconstrained optimization problems. SIAM Journal on Optimization, 11(4), 1054-1064.\n\nand\n\nJ. Ngiam, A. Coates, A. Lahiri, B. Prochnow, Q. V. Le, and A. Y. Ng,\n“On optimization methods for deep learning,” in Proceedings of the 28th\nInternational Conference on Machine Learning, 2011, pp. 265–272.\n\n) \n\nor even a variant of BFGS which makes a block-diagonal approximation to the Hessian with one block per layer. 
To apply BFGS, one might have to replace the ReLU function by a smooth counterpart.\n \nHow should one choose tau_theta?\n\nIn the experiments the authors compare with classical backpropagation, but they do not compare with \nthe explicit step of Carreira-Perpinan and Wang. This might be a relevant comparison to add to establish more clearly that it is the implicit step that yields the improvement.\n\n\n\n\n\nTypos or questions related to notation, details etc.:\n\nIn the description of algorithm 2: the pseudo-code does not specify that the implicit step is done with regularization coefficient tau_theta\n\nIn equation (10) is z_l=z_l^k or z_l^(k+1/2) (I assume the former).\n\n6th line of 5.1 theta_l is initialised uniformly in an interval -> could you explain why and/or provide a reference motivating this?\n\n8th line of 5.1 you mention Nesterov momentum method -> a precise reference and precise equation to lift ambiguities might be helpful.\n\nIn section 5.2 the reference to Table 5.2 should be Table 1.\n", "I am convinced by the arguments put forward to explain why there is no comparison with the method of Carreira-Perpinan and Wang reported in the paper. It might be relevant to comment on this in the paper however.\n\nI understand that the formulation could be quite different from using second order information, and my point was not to say that they are similar in terms of functions. \nWhat I mean is that the cost of computing a descent step of the algorithm is comparable with the cost of some quasi-Newton methods (that would e.g. use a block diagonal approximation of the Hessian) and on that ground the comparison would seem relevant. However, one could argue that the most important baseline is considered in the paper. \n\nIn conclusion the paper would be more compelling with more comparisons and I agree with Reviewer 3 that the significance is therefore unclear, but the approach proposed remains sound and interesting.", "This work proposes to replace the gradient step for updating the network parameters with a proximal step (implicit gradient) so that a large stepsize can be taken. Then to make it fast, the implicit step is approximated using the conjugate gradient method because the step is solving a quadratic problem. \n\nThe theoretical result for ProxProp considers the full batch, and it cannot be easily extended to the stochastic variant (mini-batch). The reason is that the gradient in the proximal step is evaluated at the future point, and different functions will have different future points. The explicit gradient, in contrast, is evaluated at the current point and is unbiased. \n\nIn the numerical experiment, the final solution is sensitive to the parameter \\tau_\\theta. Therefore, choosing this parameter well is essential. Given a new dataset, how should it be determined to obtain good performance? \n\nIn Fig. 3, the full batch loss of Adam+ProxProp is higher than that of Adam+BackProp with respect to time, which is different from Fig. 2. Also, the figure shows that the performance of Adam+BackProp is worse than that of Adam+ProxProp even though the training loss of Adam+BackProp is smaller than that of Adam+ProxProp. Does it happen on this dataset only or is it the case for many datasets? 
The proximal step requires the solution of a positive-definite linear system, so it is approximated using a few iterations of CG. The paper provides theory to show that their proximal variant (even with the CG approximations) can lead to convergent algorithms (and since practical algorithms are not necessarily globally convergent, most of the theory shows that the proximal variant has similar guarantees to a standard gradient step).\n\nOn reading the abstract and knowing quite a bit about proximal methods, I was initially skeptical, but I think the authors have done a good job of making their case. It is a well-written, very clear paper, and it has a good understanding of the literature, and does not overstate the results. The experiments are serious, and done using standard state-of-the-art tools and architectures. Overall, it is an interesting idea, and due to the current focus on neural nets, it is of interest even though it is not yet providing substantial improvements.\n\nThe main drawback of this paper is that there is no theory to suggest the ProxProp algorithm has better worst-case convergence guarantees, and that the experiments do not show a consistent benefit (in terms of time) of the method. On the one hand, I somewhat agree with the authors that \"while the running time is higher... we expect that it can be improved through further engineering efforts\", but on the other hand, the idea of nested algorithms (\"matrix-free\" or \"truncated Newton\") always has this issue. A very similar type of idea comes up in constrained or proximal quasi-Newton methods, and I have seen many papers (or paper submissions) on this style of method (e.g., see the 2017 SIAM Review paper on FWI by Metivier et al. at https://doi.org/10.1137/16M1093239). In every case, the answer seems to be that it can work on *some problems* and for a few well chosen parameters, so I don't yet buy that ProxProp is going to make huge savings on a wide range of problems.\n\nIn brief: quality is high, clarity is high, originality is high, and significance is medium.\nPros: interesting idea, relevant theory provided, high-quality experiments\nCons: no evidence that this is a \"break-through\" idea\n\nMinor comments:\n\n- Theorems seemed reasonable and I have no reason to doubt their accuracy\n\n- No typos at all, which I find very unusual. Nice job!\n\n- In Algo 1, it would help to be more explicit about the updates (a), (b), (c), e.g., for (a), give a reference to eq (8), and for (b), reference equations (9,10). It's nice to have it very clear, since \"gradient step\" doesn't make it clear what the stepsize is, and if this is done in a \"Jacobi-like\" or \"Gauss-Seidel-like\" fashion. (c) has no reference equation, does it?\n\n- Similarly, for Algo 2, add references. In particular, tie in the stepsizes tau and tau_theta here.\n\n- Motivation in section 4.1 was a bit iffy. A larger stepsize is not always better, and smaller is not worse. Minimizing a quadratic f(x) = .5||x||^2 will converge in one step with a step-size of 1 because this is well-conditioned; on the flip side, slow convergence comes from lack of strong convexity, or with strong convexity, ill-conditioning of the Hessian (like a stiff ODE).\n\n- The form of equation (6) was very nice, and you could also point out the connection with backward Euler for finite-difference methods. 
This was the initial setting of analysis for most of original results that rely on the proximal operator (e.g., Lions and Mercier 1970s).\n\n- Eq (9), this is done component-wise, i.e., Hadamard product, right?\n\n- About eq (12), even if softmax cross-entropy doesn't have a closed-form prox (and check the tables of Combettes and Pesquet), because it is separable (if I understand correctly) then it ought to be amenable to solving with a handful of Newton iterations which would be quite cheap.\n\nProx tables (see also the new edition of Bauschke and Combettes' book): P. L. Combettes and J.-C. Pesquet, \"Proximal splitting methods in signal processing,\" in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (2011) http://www4.ncsu.edu/~pcombet/prox.pdf\n\n- Below prop 4, discussing why not to make step (b) proximal, this was a bit vague to me. It would be nice to expand this.\n\n- Page 6 near the top, to apply the operator, in the fully-connected case, this is just a matrix multiply, right? and in a conv net, just a convolution? It would help the reader to be more explicit here.\n\n- Section 5.1, 2nd paragraph, did you swap tau_theta and tau, or am I just confused? The wording here was confusing.\n\n- Fig 2 was not that convincing since the figure with time showed that either usual BackProp or the exact ProxProp were best, so why care about the approximate ProxProp with a few CG iterations? The argument of better generalization is based on very limited experiments and without any explanation, so I find that a weak argument (and it just seems weird that inexact CG gives better generalization). The right figure would be nice to see with time on the x-axis as well.\n\n- Section 5.2, this was nice and contributed to my favorable opinion about the work. However, any kind of standard convergence theory for usual SGD requires the stepsize to change per iteration and decrease toward zero. I've heard of heuristics saying that a fixed stepsize is best and then you just make sure to stop the algorithm a bit early before it diverges or behaves wildly -- is that true here?\n\n- Final section of 5.3, about the validation accuracy, and the accuracy on the test set after 50 epochs. I am confused why these are different numbers. Is it just because 50 epochs wasn't enough to reach convergence, while 300 seconds was? And why limit to 50 epochs then? Basically, what's the difference between the bottom two plots in Fig 3 (other than scaling the x-axis by time/epoch), and why does ProxProp achieve better accuracy only in the right figure?", "We thank the reviewers for their constructive feedback. We have posted individual replies below the reviewers' comments and have uploaded a revised version of the PDF. The changes are marked in blue color.", "We agree that the theoretical results in our paper do not address the stochastic setting directly. Our results do, however, show that the proposed method yields the descent in a variable (uniformly bounded) metric, which allows to prove convergence even in a stochastic setting, if typical assumptions about the vanilla stochastic gradient are made. We refer for instance to the Assumptions 4.3 in https://arxiv.org/pdf/1606.04838.pdf as well as to the subsequent convergence analysis therein (Theorem 4.9). The practical setup in which most deep learning algorithms are used is, however, quite different from the setting used for convergence analysis. For example, rectified linear units or max pooling functions are not differentiable. 
We therefore focused our attention on the numerical evaluation and were able to demonstrate convergence behavior comparable to the state-of-the-art in a stochastic setting for common neural network architectures. Despite the bias, our descent direction leads to faster optimization progress w.r.t. epochs than the classical gradient and is competitive w.r.t. runtime. \nWe have included a remark about the convergence theory in a stochastic setting based on the above reference in the revised version of our paper. \n\nRegarding the numerical experiments, we found that the convolutional neural network is not very sensitive to the choice of tau_theta; the fully-connected network is more sensitive. The parameter can be chosen with the same hyperparameter methods one might use to find a suitable learning rate, e.g. a grid search. Intuitively, the parameter interpolates between a gradient step (small tau_theta) and an exact layer-wise minimization step (large tau_theta).\nFor a fair comparison to BackProp (i.e. same number of free hyperparameters), we set either tau or tau_theta fixed to 1, and only tuned one of them. We expect that by tuning both parameters, the performance of the proposed method can be further improved.\n\nFig. 2 and Fig. 3 show experiments for different architectures on the same dataset (CIFAR-10) and the results cannot directly be compared. Furthermore, it is not unusual that a model with higher training error has a better validation accuracy, i.e. generalizes better. In general, we do not expect a definitive relation between training and validation curves for different datasets/architectures.\n\nWe hope that we could clarify the bias aspect of our algorithm in the stochastic setting and kindly ask you to consider our additional comments in your rating. Please let us know if you have any further questions.", "We agree that the magnitude of the step size on its own does not determine the convergence speed. While we stated the largest eigenvalue of the Hessian in the CIFAR-10 data as an exemplary restriction of the explicit step size (e.g. in a one layer network), it is also true that the Hessian is very ill-conditioned: In fact, the smallest eigenvalue differs from the largest one by about 7 orders of magnitude! Similar to some stiff ODEs, we believe that implicit steps behave favorably in such a case. In particular, implicit steps never become unstable (on the respective (sub-)problem). \n\nIn section 5.1 we indeed picked \tau=1 and tau_\theta = 0.05 as for fully-connected networks this worked better than fixing tau_\theta = 1 and tuning \tau. We have fixed one of the parameters to have the same number of hyperparameters as for BackProp and expect a tuning of both parameters to further lead to an improved performance. \n\nThere are several motivations to consider an inexact solve with CG beyond the cost of the linear solve. For an exact solve one has to explicitly construct the problem matrix. This is readily available for fully-connected nets, but needs to be computed for convolutional layers, which might be costly in memory and compute time. Additionally, from a practical point of view one would like to leverage existing implementations without additional coding overhead. Both aspects can be exploited by providing the forward/backward operation of your favorite autodiff library as an abstract linear operator to the CG solver.\n\nFor this paper, we concentrated on working out the effect of explicit vs. 
implicit gradient steps, and did intentionally not mix dynamic step size effects with these observations.\n\nThe validation accuracies are computed on the validation set, i.e., a set that is not considered for training, but used for tuning the hyperparameters. This is distinct from a held back test set on which we just computed the final accuracy. The bottom two plots in Fig. 4 indeed only differ by the scaling of the x-axis. However, since BackProp is faster than our ProxProp variants per iteration, the plot against time contains data for 300s training time for every method. Consequently, the lower right plot shows more than 50 epochs of the Adam + BackProp algorithm. \n\nPlease let us know if you have any further questions.", "While you are right that Eq. (13) suggests that the algorithm leverages second-order information for the layer-wise(!) quadratic subproblem, ProxProp is much closer to classical backpropagation than to a (quasi-)Newton method of the whole network (the limit-case \\tau_\\theta -> 0, \\tau -> \\infty in the Equation in Prop. 2 recovers BackProp). \nThe Hessian matrix consists of second-order derivatives, while our metric is purely formed of the forward pass activations. Consider for example a layer at depth L (somewhere in the middle of the network). Then the Hessian (and any decent approximation) would depend on the network components at depth larger than L (in particular the final loss). Consequently, this Hessian of the overall energy changes when components at depth larger than L are modified. However, our metric at layer L is not affected by this modification. Hence the approach is quite distinct from quasi-Newton methods such as BFGS. \nWe have therefore focused on comparing our algorithm with first-order methods that are conceptually closer and are at the same time considered current state-of-the-art.\n\nOur intuition is that ProxProp decreases the energy faster due to the advantage of implicit minimization steps over explicit steps as discussed in section 4.1. If you consider eq. (13), then in this metric the layer-wise quadratic subproblem is better conditioned. Intuitively, in this metric it becomes easier to make the quadratic fit for the last term in eq. (3). Other lines of work are partially motivated by the same issue, e.g. the commonly used BatchNormalization.\n\nWe did not further go into preconditioners for the linear system as very few CG iterations already suffice for a good solution. Note that we warmstart the CG solver and expect the solution to be ‘close’ because of the proximal term. We have focused on the conjugate gradient approximation (also because of its abstract implementation advantages), but considering other approximations to the quadratic subproblem could be an interesting direction.\n\nIt is correct that our theorems address the full batch setting. Since our method, however, still yields a descent in a different (variable) metric, the techniques discussed in https://arxiv.org/pdf/1606.04838.pdf section 4.1 are applicable to extend the analysis even to a stochastic setting, if one may assume a sufficiently friendly energy. Since in practice common neural nets have non-smooth energies, we have focused our paper on numerically demonstrating results comparable to the state-of-the-art by evaluating the algorithm in a stochastic mini-batch setting. We have added a remark and the above reference about extending the convergence analysis to the revised version of our paper. 
\n\nIn our experiments, tau_theta is an additional hyperparameter that can be chosen just as the learning rate, e.g. via a hyperparameter grid search. However, we noticed that performance of our convolutional network architecture is not very sensitive to tau_theta.\n\nAs stated after eq. (3), the method by Carreira-Perpinan and Wang is very different from our approach and also from explicit steps on the penalty functional. They perform block-coordinate minimization steps and also do not perform any forward passes. Nevertheless, we experimented with the MATLAB implementation kindly provided by the authors, but found that the numerical performance is far from efficient implementations of current state-of-the-art optimizers. We therefore didn’t see additional value - conceptually and numerically - of including this comparison in our paper. \n\nWe chose the standard weight initialization of PyTorch and intentionally did not further tune this hyperparameter to avoid confounding effects.\n\nWe used the Nesterov momentum method as implemented in PyTorch (http://pytorch.org/docs/master/optim.html#torch.optim.SGD) and will add a remark. \n\nIn conclusion, we hope that we could clarify why we did not compare with BFGS style methods. Given your otherwise very positive review, we would appreciate if you reconsidered your rating. Please let us know if you have any further questions.\n" ]
[ 6, -1, 5, 7, -1, -1, -1, -1 ]
[ 4, -1, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_ByeqORgAW", "B1vCOGhWG", "iclr_2018_ByeqORgAW", "iclr_2018_ByeqORgAW", "iclr_2018_ByeqORgAW", "rJWf9IOgM", "rJX7hXKgG", "BkX4GPilz" ]
iclr_2018_rkLyJl-0-
Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks
Progress in deep learning is slowed by the days or weeks it takes to train large models. The natural solution of using more hardware is limited by diminishing returns, and leads to inefficient use of additional resources. In this paper, we present a large batch, stochastic optimization algorithm that is both faster than widely used algorithms for fixed amounts of computation, and also scales up substantially better as more computational resources become available. Our algorithm implicitly computes the inverse Hessian of each mini-batch to produce descent directions; we do so without either an explicit approximation to the Hessian or Hessian-vector products. We demonstrate the effectiveness of our algorithm by successfully training large ImageNet models (InceptionV3, ResnetV1-50, ResnetV1-101 and InceptionResnetV2) with mini-batch sizes of up to 32000 with no loss in validation error relative to current baselines, and no increase in the total number of steps. At smaller mini-batch sizes, our optimizer improves the validation error in these models by 0.8-0.9\%. Alternatively, we can trade off this accuracy to reduce the number of training steps needed by roughly 10-30\%. Our work is practical and easily usable by others -- only one hyperparameter (learning rate) needs tuning, and furthermore, the algorithm is as computationally cheap as the commonly used Adam optimizer.
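The central idea in this abstract, inverting the mini-batch Hessian implicitly, can be illustrated with the Neumann series: if the spectral radius of (I - eta*H) is below one, then (eta*H)^{-1} = sum_k (I - eta*H)^k, so a Richardson-style recursion approximates H^{-1}g using only Hessian-vector products (the paper goes further and avoids even those). The numpy sketch below only illustrates that series under assumed toy values for H and g; it is not the paper's Algorithm 2.

```python
import numpy as np

# Toy positive-definite "mini-batch Hessian" H and gradient g (illustrative assumptions).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T / 5 + 0.5 * np.eye(5)          # symmetric positive definite
g = rng.standard_normal(5)

# Choose eta so that the spectral radius of (I - eta*H) is < 1, as the
# Neumann series (eta*H)^{-1} = sum_k (I - eta*H)^k requires.
eta = 1.0 / np.linalg.norm(H, 2)

# Richardson / truncated-Neumann recursion: d_{k+1} = (I - eta*H) d_k + g.
# eta * d_k converges to H^{-1} g while using only products of H with a vector.
d = np.zeros_like(g)
for _ in range(200):
    d = d - eta * (H @ d) + g

print(np.allclose(eta * d, np.linalg.solve(H, g), atol=1e-4))  # expected: True
```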
accepted-poster-papers
Pros:\n+ Clearly written paper.\n+ Easily implemented algorithm that appears to have excellent scaling properties and can even improve on validation error in some cases.\n+ Thorough evaluation against the state of the art.\nCons:\n- No theoretical guarantees for the algorithm.\nThis paper belongs in ICLR if there is enough space.
train
[ "ByPYAMtgG", "Hy5t5WIxG", "BkEmOSvef", "SyVWZD6QM", "ByNx4IpXf", "Hkg9QITXG", "BJXgmLamM", "SkFuMIpXM", "ByNdWUp7M", "HJLT-LpXG", "rJc1-IpmG", "rkXFpDlzf", "HJ1hYPI0-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "The paper proposes a new algorithm, where they claim to use Hessian implicitly and are using a motivation from power-series. In general, I like the paper.\n\nTo me, Algorithm 1 looks like some kind of proximal-point type algorithm. Algorithm 2 is more heuristic approach, with a couple of parameters to tune it. Given the fact that there is convergence analysis or similar theoretical results, I would expect to have much more numerical experiments. E.g. there is no results of Algorithm 1. I know it serves as a motivation, but it would be nice to see how it works.\n\nOtherwise, the paper is clearly written.\nThe topic is important, but I am a bit afraid of significance. One thing what I do not understand is, that why they did not compare with Adam? (they mention Adam algorithm soo many times, that it should be compared to).\n\nI am also not sure, how sensitive the results are for different datasets? Algorithm 2 really needs so many parameters (not just learning rate). How \\alpha, \\beta, \\gamma, \\mu, \\eta, K influence the speed? how sensitive is the algorithm for different choices of those parameters?\n\n\n ", " \nThis paper presents a new 2nd-order algorithm that implicitly uses curvature information, and it shows the intuition behind the approximation schemes in the algorithms and also validates the heuristics in various experiments. The method involves using Neumann Series and Richardson iteration to avoid Hessian-vector product in second order method for NN. In the actual performance, the paper presents both practical efficiency and better generalization error in different deep neural networks for image classification tasks, and the authors also show differences according to different settings, e.g., Batch Size, Regularization. The numerical examples are relatively clear and easy to figure out details.\n\n1. While the paper presents the algorithm as an optimization algorithm, although it gets better learning performance, it would be interesting to see how well it is as an optimizer. For example, one simple experiment would be showing how it works for convex problems, e.g., logistic regression. Realistic DNN systems are very complex, and evaluating the method in a simple setting would help a lot in determining what if anything is novel about the method.\n\n2. Also, for deep learning problems, it would be more convincing to see how different initialization can affect the performances. \n\n3. Although the authors present their algorithm as a second order method at beginning, the final algorithm is kind of like a complex momentum SGD with limited memory. Rather than simply throwing out a new method with a new name, it would be helpful to understand what the steps of this method are implicitly doing. Please explain more about this.\n\n4. It said that the algorithm is hyperparameter free except for learning rate. However, it is hard to see why there is no need to tune other hyperparameters, e.g., Cubic Regularizer, Repulsive Regularizer. The effect/sensitivity of hyperparameters for second order methods are quite different than hyperparameters for first order methods, and it is of interest to know how hyperparameters for implicit second order methods perform.\n\n5. For Section 4.2, the well know benefit by using large batch size to train models is that it could reduce training time and epochs. However, from Table 3, there is no such phenomenon. 
Please explain.\n\n", "Summary: \nThe paper proposes the Neumann optimizer, which makes some adjustments to the idealized Neumann algorithm to improve performance and stability in training. The paper also demonstrates the effectiveness of the algorithm by training ImageNet models (Inception-V3, Resnet-50, Resnet-101, and Inception-Resnet-V2). \n \nComments:\nI really appreciate the author(s) for providing experiments using real models on the ImageNet dataset. The algorithm seems easy to use in practice. \n\nI do not have many comments for this paper since it focuses only on the practical side, without rigorous theoretical guarantees. \n\nYou mention in the paper that the algorithm uses the same amount of computation and memory as the Adam optimizer; could you please explain why you only compare the Neumann optimizer with the baseline RMSProp but not with Adam? As we know, Adam is currently a very well-known algorithm for training DNNs. Do you think it would be interesting to compare the efficiency of the Neumann optimizer with Adam? I understand that you are trying to improve the existing results with your optimizer, but this paper also introduces a new algorithm. \n\nThe question is, with the given architectures and dataset, which algorithm should people use: the Neumann optimizer or Adam? Why should people use the Neumann optimizer rather than Adam, which is already very well known? If the Neumann optimizer can surpass Adam on ImageNet, I think your algorithm will be widely used after being published. \n \nMinor comments:\nPage 3, in eq. (3): missing “-“ sign\nPage 3, in eq. (6): missing “transpose” on \\nabla \\hat{f}\nPage 4, first equation: O(|| \\eta*mu_t ||^2)\nPage 5, in eq. (9): m_{k-1}\n", "Thank you for adding more experiments. \n\nIn my opinion, it is hard to judge your paper since you do not provide any rigorous theoretical guarantees. But it seems that your algorithm is promising. Therefore, I increased the rating score to give it a chance to be published. I hope practitioners will try it and see if there are any drawbacks. \n", "We thank all the reviewers for their feedback. Before we address individual comments, we would like to mention some key themes in this paper that seem to have been lost, mainly due to our presentation in the experiments section. \n\n(1) Training deep nets fast (in wall time/parameter updates) without affecting validation performance is important. Previous attempts to scale up, using large batch sizes and parallelization, hit limits which we avoid. For example, using 500 workers computing gradients in parallel, we can train Resnet-V1-50 to 76.5% accuracy in a little less than 2 hours. In contrast, in Goyal et al. “Accurate, large minibatch SGD: Training Imagenet in 1 hour.”, the maximum batch size was 8000 (equivalent to 250 workers), and in You et al. “Scaling SGD batch size to 32k for imagenet training”, there is a substantial 0.4-0.7% degradation in final model performance.\n\n(2) Our method actually achieves better validation performance (~1% better) compared to the published best performance on image models in multiple architectures.\n", "We have made the following changes in the new version of the paper that we uploaded. \n\nWe have added some new experiments: \n\n(1) Comparison to Adam in Figure 1,\n(2) Multiple Initializations in Appendix D, and \n(3) A Stochastic Convex Problem in Appendix B, along with small edits suggested by reviewers.\n", "Thanks for your interest and detailed feedback, Boris. 
We’ve incorporated most of your feedback, and hope to answer some of your questions below:\n\n1. We’ve added the small calculation for this in Section 3.1.\t\n\n2. A couple of things are going on here:\ni) We allow the mini-batch to vary in algorithm 2. This is a pretty significant change (we like to think of solving a stochastic bootstrap style subproblem instead of deterministic ones).\nii) We change the notation to offset the w_t (so that w_t in Algorithm 2 actually correspond to w_t + \\mu m_t in Algorithm 1). This is a pure notational change, and has no effect on the iteration -- we could also have done the same thing for Algorithm 1 (i.e., we could have unrolled m_t as the sum of gradients).\n\n3. It seems somewhat insensitive to period of the resets, but the resets are necessary, especially at the start.\n\n4. The coefficients we have for m_{t-1} and d_t aren’t a convex combination, and additionally, we subtract an extra \\eta d_t from the update in Line 11 (this subtraction is correct, and somewhat surprising...you accidentally identified it as a typo below). It’s somewhat difficult to reinterpret as momentum.\n\n5. We have not tried on large models without batch normalization. Since most convolutional architectures include batch norm, we had not thought to have experiments along this axis.\n6. By default, all the models we used include weight decay -- so Figure 5 (the ablation experiments) should give you an idea of what happens if you use weight decay and not cubic + repulsive.\n\nTypos:\nWe have all except (5) and (6) -- thank you! (5) and (6) are actually correct -- it definitely looks a little strange, but what we’ve done is to keep track of offset variables w_t + \\mu m_t.\n", "Thank you AnonReviewer3 for your thoughts and comments: we address your comments below and hope to clear up one misconception (caused by poor labelling of Table 3):\n\n1. We have added an experiment in Appendix B to show the results on a synthetic logistic regression problem. We compared the Neumann optimizer with SGD, Adam and a Newton algorithm for varying batch sizes. Our method outperforms SGD and Adam consistently, and while Newton’s method descends to a better loss, it comes at a steep per-step cost. We believe there are other large batch methods like Nesterov and SVRG that might get to lower losses than our method. However, none of these algorithms perform well on training a deep neural net. \n\n2. We've included an Appendix D with a new experiment illustrating that different initializations and trajectories of optimization all give the same quality model output (for the Inception V3 model).\n\n3. We're not quite sure what the reviewer is looking for here: it seems that Section 2 gives a derivation of the method: the method is implicitly inverting the Hessian (which is convexified after regularization) of a mini-batch. Our algorithm crucially differs from standard momentum in that gradient evaluation occurs at a different point from the current iterate (in Algorithm 1), and we are not applying an exponential decay (a standard momentum update would blow up if you did this).\n\n4. We agree that it is of interest to further study the sensitivity to hyperparameters. 
The results that we have hold not only for ImageNet, but also for CIFAR-10 and CIFAR-100 with no change in hyperparameters, so we think that the results are likely to carry over to most modern CNN architectures on image datasets -- the hyperparameter choice will likely work out of the box (much like the beta_1, beta_2 and epsilon parameters in Adam). We agree that there appear to be quite a few hyperparameters, but \\alpha and \\beta are regularization coefficients, so they have to be roughly scaled to the loss; \\gamma is a moving average coefficient and never needs to be changed; \\mu is dependent only on time, not the model; finally, training is quite insensitive to K (as mentioned in Section 3.2). Thus, the only hyperparameter that needs to be specified is the learning rate \\eta, and that does determine the speed of optimization.\n\n5. The epochs listed in Table 3 are total epochs (i.e., the total sum of all samples seen by all workers), so using twice as many workers is in fact twice as fast (we've updated the table to clarify this). We're a little concerned that we were not clear on the significance of the experimental results: our algorithm scales up to a batch size of 32000 (beating the state of the art for large-batch training), and we obtain linear speedups across this regime, i.e., we can run 500 workers in 1/10th the time that it takes the usual 50-worker baseline. We think of this as the major contribution of our work.\n", "Thank you AnonReviewer2 for your comments. Here are our responses:\n\nWe have added a number of new experiments, including (1) Solving a stochastic convex optimization problem (where the Neumann optimizer is far better than SGD or Adam), (2) Comparisons with Adam on Inception-V3 (see below), and (3) Multiple runs of the Neumann algorithm on Inception-V3 showing that the previous experiments are reproducible.\n\nTo the comment about running Algorithm 1: we’ve run it on stochastic convex problems before, where it performs much better than either SGD or Adam. On deep neural nets, our earlier experience with similar \"two-loop\" algorithms (i.e., freeze the mini-batch, and perform substantial inner-loop computation) led us to the conclusion that Algorithm 1 would most likely not perform very well at training deep neural nets. The main difficulty is that the inner-loop iterations \"overfit\" to the mini-batch. As you mentioned, this is meant to be purely motivational for Algorithm 2. \n\nAdam achieves similar (or worse) results to the RMSprop baselines (Figure 1): in comparison to our Neumann optimizer, the training is slower, the output model is of lower quality, and the optimizer scales poorly. When training with Adam, we observed instability with the default parameters (especially epsilon). We changed it to 0.01 and 1.0 and have two runs which show dramatically different results. Our initial reason for not including comparisons to Adam was that we wanted to use standard models and training parameters (i.e., the Inception and Resnet papers use RMSprop).\n\nWe think that the significance of our paper lies in the strong experimental results:\n1. Significantly improved accuracy in output models (using a small number of workers) over published baselines -- i.e., just switching over to our optimizer will increase accuracy by 0.8-0.9%.\n2. 
Excellent scaling behaviour (even using a very large number of workers).\nFor example, our experimental results for (2) are strictly stronger than those in the literature for large batch training.\n\nThe results that we have hold for ImageNet, but also for CIFAR-10 and CIFAR-100 with no change in hyperparameters, so we think that the results are likely to carry over to most modern CNN architectures on image datasets -- the hyperparameter choice will likely work out of the box (much like the beta_1, beta_2 and epsilon parameters in Adam). We agree that there appears to be quite a few hyperparameters, but \\alpha and \\beta are regularization coefficients, so they have to be roughly scaled to the loss; \\gamma is a moving average coefficient and never needs to be changed; \\mu is dependent only on time, not the model; finally, training is quite insensitive to K (as mentioned in Section 3.2). Thus, the only hyperparameter that needs to be specified is the learning rate \\eta, and that does determine the speed of optimization.\n", "Thank you AnonReviewer1 for your feedback and comments.\n\nWe ran a new set of experiments comparing Adam, RMSprop and Neumann (Figure 1). Adam achieves similar (or worse) results to the RMSprop baselines: in comparison to our Neumann optimizer, the training is slower, the output model is lower quality, and the optimizer scales poorly. When training with Adam, we observed instability with default parameters (especially, epsilon). We changed it to 0.01 and 1.0 and have two runs which show dramatically different results. Our initial reason for not including comparisons to Adam was that we wanted to use standard models and training parameters (i.e., the Inception and Resnet papers use RMSprop).\n\nWe hope that practitioners will consider Neumann over Adam for the following reasons:\n- Significantly higher quality output models when training using few GPUs.\n- Ability to scale up to vastly more GPUs/TPUs, and overall decreased training time.\n\nWe’ve incorporated your minor comments -- thanks again!\n", "On the Resnet-V1, each P100 had 32 examples, so 32000 corresponds to 1000 GPUs. This is updated in Table 3 now.\n\nTo your question about small batches -- we ran the algorithm in asynchronous mode ( mini-batches of 32, with 50 workers doing separate mini-batches); the final output was quite a bit worse in terms of test accuracy (76.8% instead of 79.2%). It’s not clear whether it’s the Neumann algorithm with batch size 32 or the async that causes the degradation though. So at the least algorithm doesn’t blow up with small batches, but we haven’t explored this setting enough to say anything conclusive.\n", "How many P100 did you need to use in order to fit so large batches? \nIs the algorithm unstable in low batch size regimes?", "Thanks very much for this outstanding paper! It is very interesting, both from the theoretical and practical points of view. \nWe have several questions:\n1.\tPage 5: The transition from Eq. 7 to eq. 9 intuitively makes sense, but we would appreciate more rigorous derivation (maybe as Appendix B?)\n2.\tPage 6: can you please clarify the transition from Alg. 1 to Alg. 2. ? \n* Alg. 1 line 5 uses fixed w_t and fixed mini-batch. \n* Alg 2, line 9-11 uses different w_t and different batches (similar to regular SGD). \nWhat are the assumptions / requirements which make it possible to use different w_t?\n3.\tPage 6, Alg. 2: is periodic reset of m_t necessary? 
\nWhat will be the impact on performance if we don’t reset m_t and avoid K at all?\n4.\tPage 6, Alg 2: Is Alg. 2 w/o regularization equivalent to regular SGD with adaptive momentum?\n5.\tPage 8, section 4.2: Did you try to use Neumann optimizer for training networks w/o batch norm (e.g. Alexnet or Googlenet) with large batch?\n6.\tPage 8, section 4.3 “Regularization”: Did you try to use L2-regularizer (weight decay) instead of “cubic + repulsive term”? \n\nTypos:\n1.\tPage 3 , eq. 3: ‘-“ sign is missing\n2.\tPage 3 , after eq. 6: variable z is not defined (is z:= (w-w_t)/ \\nu ? )\n3.\tPage 4 , first equation (line 4): should be O(|\\nu m_t|^ 2 ?\n4.\tPage 5, eq.9: last term should be m_{k-1}\n5.\tPage 6, Alg. 2, line 11: should be w_t=w_{t-1} + m_t ?\n6.\tPage 6, Alg. 2, line 13: should be return w_T?\n7.\tPage 7: can you set a right reference to data augmentation strategy? \n\nThanks again for an excellent paper!\n" ]
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-", "HJLT-LpXG", "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-", "HJ1hYPI0-", "Hy5t5WIxG", "ByPYAMtgG", "BkEmOSvef", "rkXFpDlzf", "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-" ]
iclr_2018_rJ33wwxRb
SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data
Neural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we study the problem of learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky ReLU activations, we provide both optimization and generalization guarantees for over-parameterized networks. Specifically, we prove convergence rates of SGD to a global minimum and provide generalization guarantees for this global minimum that are independent of the network size. Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers.
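To make the setting in this abstract concrete for the discussion below, here is a small numpy sketch of the analyzed model: a hidden layer of 2k Leaky-ReLU units, a frozen second layer of the form (1, ..., 1, -1, ..., -1), and SGD on the hinge loss that updates only the hidden weights. The data generator, dimensions and step size are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n, alpha, lr = 10, 50, 500, 0.1, 0.1   # illustrative sizes and Leaky-ReLU slope

# Linearly separable data: labels come from a ground-truth linear separator w_star.
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)

W = 0.1 * rng.standard_normal((2 * k, d))        # trained hidden-layer weights
v = np.concatenate([np.ones(k), -np.ones(k)])    # frozen output layer (1,...,1,-1,...,-1)

def leaky(z):
    return np.where(z > 0, z, alpha * z)

def net(x, W):
    return v @ leaky(W @ x)

for epoch in range(20):
    for i in rng.permutation(n):
        x, label = X[i], y[i]
        if label * net(x, W) < 1:                     # hinge loss is active: non-zero update
            slope = np.where(W @ x > 0, 1.0, alpha)   # Leaky-ReLU (sub)gradient per unit
            W += lr * label * (v * slope)[:, None] * x[None, :]

train_acc = np.mean(np.sign([net(x, W) for x in X]) == y)
print(train_acc)  # typically reaches 1.0 on the separable training set
```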
accepted-poster-papers
This is a high quality paper, clearly written, highly original, and clearly significant. The paper gives a complete analysis of SGD in a two layer network where the second layer does not undergo training and the data are linearly separable. Experimental results confirm the theoretical suggestion that the second layer can be trained provided the weights don't change sign and remain bounded. The authors address the major concerns of the reviewers (namely, whether these results are indicative given the assumptions). This line of work seems very promising.
train
[ "HJ9LXfvlz", "rJV8Y8ulf", "BJBRHQkWf", "Bkn9F8dfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions. It focuses on a setting with three crucial simplifications:\n- data is linearly separable\n- model is 1-hidden layer feed forward network with homogenous activations\n- **only input-hidden layer weights** are trained, while the hidden-output layer's weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1))\nWhile the last assumption does not limit the expressiveness of the model in any way, as homogenous activations have the property of f(ax)=af(x) (for positive a) and so for any unconstrained model in the second layer, we can \"propagate\" its weights back into first layer and obtain functionally equivalent network. However, learning dynamics of a model of form \n z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d\nand \"standard\" neural model\n z(x) = Vg(Wx+b)+c\ncan be completely different.\nConsequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched. In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper. The results themselves are interesting, but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model.\n\nI am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice. In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter, but make proofs too technical - at least experimental section should show this. If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in.\n\nPros:\n- tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence\n- very nice result of \"phase change\" dependend on the size of hidden layer in section 7\n\nCons:\n- simplification with non-trainable second layer is currently not well studied in the paper; and while not affecting expressive power - it is something that can change learning dynamics completely\n\n# After the update\n\nAuthors addressed my concerns by:\n- making simplification assumption clearer in the text\n- adding empirical evaluation without the assumption\n- weakening the assumptions\n\nI find these modifications satisfactory and rating has been updated accordingly. \n", "This paper shows that on linearly separable data, SGD on a overparametrized network (one hidden layer, with leaky ReLU activations) can still lean a classifier that provably generalizes. The assumption on data and structure of network is a bit strong, but this is the first result that achieves a number of desirable properties\n``1. Works for overparametrized network\n2. Finds global optimal solution for a non-convex network.\n3. Has generalization guarantees (and generalization is related to the SGD algorithm).\n4. Number of samples need not depend on the number of neurons. \n\nThere have been several papers achieving 1 and 2 (with much weaker assumptions), but they do not have 3 and 4. 
The proof of the optimization part is very similar to the proof of perceptron algorithm, and really relies on linear separability. The proof of generalization is based on a compression argument, where if an algorithm does not take many nonzero steps, then it must have good generalization. Ideally, one would also want to see a result where overparametrization actually helps (in the main result the whole data can be learned by a linear classifier). This is somewhat achieved when the activation is replaced with standard ReLU, where the paper showed with a small number of hidden units the algorithm is likely to get stuck at a local minima, but with enough hidden units the algorithm is likely to converge (but even in this case, the data is still linearly separable and can be learned just by a perceptron). \n\nThe main concern about the paper is the possibility of generalizing the result. The algorithm part seems to heavily rely on the linear separable assumption. The generalization part relies on not making many non-zero updates, which is not really true in realistic settings (where the data is accessed in multiple passes) [After author response: Yes in the linearly separable case with hinge loss it is quite possible that the number of updates is sublinear. However what I meant here is that with more complicated data and different loss functions it is hard to believe that this can still hold.]. The related work section is also a bit unfair to some of the other generalization results (e.g. Bartlett et al. Neyshabur et al.): those results work on more general network settings, and it's not completely clear that they cannot be related to the algorithm because they rely on certain solution specific quantities (such as spectral/Frobenius norms of the weight matrices) and it could be possible that SGD tends to find a solution with small norm (which can be proved in linear setting and might also be provable for the setting of this paper) [This is addressed in the author response].\n\nOverall, even though the assumptions might be a bit strong, I think this is an interesting result working towards a good direction and should be accepted.", "Summary:\nThis paper considers the problem of classifying linearly separable data with a two layer \\alpha- Leaky ReLU network, in the over-parametrized setting with 2k hidden units. The algorithm used for training is SGD which minimizes the hinge loss error over the training data. The parameters in the top layer are fixed in advance and only the parameters in the hidden layer are updated using SGD. First result shows that the loss function does not have any sub-optimal local minima. Later, for the above method, the paper gives a bound proportional to ||w*||^2/\\alpha^2, on the number of non-zero updates made by the algorithm (similar to perceptron analysis), before converging to a global minima - w*. Using this a generalization error bound independent of number of hidden units is presented. Later the paper studies ReLU networks and shows that loss in this case can have sub-optimal local minima. \n\nComments:\n\nThis paper considers a simpler setting to study why SGD is successful in recovering solutions that generalize well even though the neural networks used are typically over-parametrized. While the paper considers a simpler setting of classifying linearly separable data and training only the hidden layer, it nevertheless provides a useful insight on the role of SGD in recovering solutions that generalize well (independent of number of hidden units 'k'). 
\n\nOne confusing aspect in the paper is the optimization and generalization results hold for any global minima w* of the L_s(w). There is a step missing of taking the minimum over all such w*, which will give the tightest bounds for SGD, and it will be useful to clear this up in the paper. \n\nMore importantly I am curious how close the updates are when, 1)SGD is updating only the hidden units and 2) SGD is updating both the layers. Simple intuition suggests SGD might update the top layer \"more\" that the hidden layer as the gradients tend to decay down the layers. It is useful to discuss this in the paper and may be have some experiments on linearly separable data but with updates in both layers.", "We thank the reviewers for their helpful feedback. The main concern that was raised by the reviewers is whether these results generalize to a realistic neural network training process.\nSpecifically, in the submission we have analyzed a variant of SGD which updates only the first layer of the network, while keeping the weights of the second layer fixed. AnonReviewer2 correctly notes that in practice, the second layer is also updated, and asks to what degree our results hold in this case. To address this concern, we revise the text as follows:\n1. We clearly state our assumptions in both the abstract and the paper itself (see Section 5.3). \n2. We conduct the same experiments as in the paper, but with both layers trained. We empirically show that training both layers has similar training and generalization performance as training the first layer (Figure 2).\n3. We show that the main theoretical result still holds even when the second layer weights are updated, as long as they do not change signs during the training process, and their absolute values are bounded from below and from above. \n4. We conduct experiments similar to the setting in (2) above, but now we choose a constant step size such that the condition in (3) above holds. Namely, we ensure that the weights of the last layer do not change their sign, and are correctly bounded. The performance of SGD in this case is similar to previous experiments and is in line with our theoretical findings.\n\nThe above show that although the dynamics of the problem indeed change when updating the second layer, our results and conclusions still hold. A complete theoretical analysis of the two layer case is left for future work.\n\nRegarding the linear separability assumption, this is a realistic setting and this assumption allows us to show for the first time a complete analysis of optimization and generalization for over-parameterized neural networks. We are not aware of any other result of this kind under different realistic assumptions.\nAs for the proposition that SGD tends to find solutions with small norm in our problem, we are not aware of any existing results that imply that this is indeed the case, though this may be an interesting problem to study in the future. We have rephrased our notes on other generalization results in the related work section, addressing AnonReviewer1’s remark.\nAnonReviewer1 mentioned that in practice there should be many non-zero updates since the data is accessed multiple times. However, we note that we considered the hinge loss, which vanishes for points that are classified with a margin. Therefore, it is possible that with multiple passes over the data there are only a few non-zero updates.\nFinally, AnonReviewer3 notes that we can optimize our bound with respect to w^*. 
This is true: as in the vanilla Perceptron, the best w* is the one with the largest margin. \n" ]
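For reference, the classical statement that the reviews and response above allude to is Novikoff's Perceptron mistake bound; according to the reviews, the paper's analysis plays the analogous role with the Leaky-ReLU slope entering the bound. The display below is the textbook Perceptron bound, not the paper's theorem:

```latex
% Novikoff's Perceptron bound: if \|x_i\| \le R and some w^* satisfies
% y_i \langle w^*, x_i \rangle \ge \gamma for all i, then the number of
% non-zero (mistake-driven) updates of the Perceptron satisfies
\#\{\text{non-zero updates}\} \;\le\; \frac{R^2\,\|w^*\|^2}{\gamma^2}.
% The reviews describe the paper's analogue as being proportional to
% \|w^*\|^2/\alpha^2 for the over-parameterized Leaky-ReLU network, which then
% feeds a compression-style generalization argument.
```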
[ 7, 7, 8, -1 ]
[ 3, 3, 4, -1 ]
[ "iclr_2018_rJ33wwxRb", "iclr_2018_rJ33wwxRb", "iclr_2018_rJ33wwxRb", "iclr_2018_rJ33wwxRb" ]
iclr_2018_Skz_WfbCZ
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks
We present a generalization bound for feedforward neural networks in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The generalization bound is derived using a PAC-Bayes analysis.
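The norm quantities in this abstract are easy to probe numerically for a trained network. The sketch below computes, as an assumed reading of how such bounds typically combine the two norms, the product of squared spectral norms together with the sum of squared Frobenius-to-spectral ratios; the theorem's constants, margin dependence and logarithmic factors are deliberately omitted, and the weights here are random placeholders.

```python
import numpy as np

def spectral_capacity(weights):
    """Leading norm-dependent factor of a spectrally-normalized margin bound:
    (product of squared spectral norms) * (sum of squared Frobenius/spectral ratios).
    Constants, margin and log factors of the actual theorem are omitted."""
    spec = [np.linalg.norm(W, 2) for W in weights]   # largest singular value per layer
    frob = [np.linalg.norm(W) for W in weights]      # Frobenius norm per layer
    prod_spec_sq = float(np.prod([s ** 2 for s in spec]))
    ratio_sum = float(sum((f / s) ** 2 for f, s in zip(frob, spec)))
    return prod_spec_sq * ratio_sum

# Placeholder 3-layer network with random weights (shapes are arbitrary).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)),
           rng.standard_normal((32, 64)),
           rng.standard_normal((10, 32))]
print(spectral_capacity(weights))
```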
accepted-poster-papers
This is a strong paper presenting a very clean proof of a result that is similar, though now incomparable, to one due to Bartlett et al. These bounds (and Bartlett's) are among the most promising norm-based bounds for NNs. I would simply add that the citation of Dziugaite and Roy (2017) could be improved. Their work also connects sharpness (or flatness) with generalization via the PAC-Bayes framework, and moreover, their bounds are nonvacuous. Are the bounds in this paper nonvacuous, say, on MNIST with 60,000 training examples, for the network learned by SGD? If not, how close do they get to 1.0?
train
[ "SJehrIf1f", "rkn5xHFlf", "Hyi_2MTxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors prove a generalization guarantee for deep\nneural networks with ReLU activations, in terms of margins of the\nclassifications and norms of the weight matrices. They compare this\nbound with a similar recent bound proved by Bartlett, et al. While,\nstrictly speaking, the bounds are incomparable in strength, the\nauthors of the submission make a convincing case that their new bound\nmakes stronger guarantees under some interesting conditions.\n\nThe analysis is elegant. It uses some existing tools, but brings them\nto bear in an important new context, with substantive new ideas needed.\nThe mathematical writing is excellent.\n\nVery nice paper.\n\nI guess that networks including convolutional layers are covered by\ntheir analysis. It feels to me that these tend to be sparse, but that\ntheir analysis still my provides some additional leverage for such\nlayers. Some explicit discussion of convolutional layers may be\nhelpful. ", "This paper provides a new generalization bound for feed forward networks based on a PAC-Bayesian analysis. The generalization bound depends on the spectral norm of the layers and the Frobenius norm of the weights. The resulting generalization bound is similar (though not comparable) to a recent result of Bartlett et al (2017), however the technique is different since this submission uses PAC-Bayesian analysis. The resulting proof is more simple and streamlined compared to that of Bartlett et al (2017).\n\nThe paper is well presented, the result is explained and compared to other results, and the proofs seem correct. The result is not particularly different from previous ones, but the different proof technique might be a good enough reason to accept this paper. \n\n\n\n\nTypos: Several citations are unparenthesized when they should be. Also, after equation (6) there is a reference command that is not compiled properly.\n\n", "This paper combines a simple PAC-Bayes argument with a simple perturbation analysis (Lemma 2) to get a margin based generalization error bound for ReLU neural networks (Theorem 1) which depends on the product of the spectral norms of the layer parameters as well as their Frobenius norm. The main contribution of the paper is the simple proof technique to derive Theorem 1, much simpler than the one use in the very interesting work [Bartlett et al. 2017] (appearing at NIPS 2017) which got an analogous bound but with a dependence on the l1-norm of the layers instead of the Frobenius norm. The authors make a useful comparison between these bounds in Section 3 showing that none is dominating the others, but still analyzing their properties in terms of structural properties of the weight matrices.\n\nI enjoyed reading this paper. One could think that it makes a somewhat incremental contribution with respect to the more complete work (both theory and practice) from [Bartlett et al. 2017]. Nevertheless, the simplicity and elegance of the proof as well as the result might be useful for the community to get progress on the theoretical analysis of NNs.\n\nThe paper is well written, though I make some suggestions for the camera ready version below to improve clarity.\n\nI verified most of the math.\n\n== Detailed suggestions ==\n\n1) The authors should specify in the abstract and in the introduction that they are analyzing feedforward neural networks *with ReLU activation functions* so that the current context of the result is more transparent. 
It is quite unclear how one could generalize the Theorem 1 to arbitrary activation functions phi given the crucial use of the homogeneity of the ReLU at the beginning of p.4. Though the proof of Lemma 2 only appears to be using the 1-Lipschitzness property of phi as well as phi(0) =0. (Unless they can generalize further; I also suggest that they explicitly state in the (interesting) Lemma 2 that it is for the ReLU activations (like they did in Theorem 1)).\n\n2) A footnote (or citation) could be useful to give a hint on how the inequality 1/e beta^(d-1) <= tilde{beta}^(d-1) <= e beta^(d-1) is proven from the property |beta-tilde{beta}|<= 1/d beta (middle of p.4).\n\n3) Equation (3) -- put the missing 2 subscript for the l2 norm of |f_(w+u)(x) - f_w(x)|_2 on the LHS (for clarity).\n\n4) One extra line of derivation would be helpful for the reader to rederive the bound|w|^2/2sigma^2 <= O(...) just above equation (4). I.e. first doing the expansion keeping the beta terms and Frobenius norm sum, and then going directly to the current O(...) term.\n\n5) bottom of p.4: use hat{L}_gamma = 1 instead of L_gamma =1 for more clarity.\n\n6) Top of p.5: the sentence \"Since we need tilde{beta} to satisfy (...)\" is currently awkwardly stated. I suggest instead to say that \"|tilde{beta}- beta| <= 1/d (gamma/2B)^(1/d) is a sufficient condition to have the needed condition |tilde{beta}-beta| <= 1/d beta over this range, thus we can use a cover of size dm^(1/2d).\"\n\n7) Typo below (6): citetbarlett2017...\n\n8) Last paragraph p.5: \"Recalling that W_i is *at most* a hxh matrix\" (as your result do not require constant size layers and covers the rectangular case). \n" ]
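Detailed suggestion 1) above turns on positive homogeneity, phi(cz) = c*phi(z) for c > 0, which is what allows the proof to rebalance norms across layers without changing the network function. A quick numerical illustration of that invariance for a two-layer ReLU network (the sizes and the scaling constant are arbitrary choices for the demonstration):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((20, 10))
W2 = rng.standard_normal((5, 20))
x = rng.standard_normal(10)

c = 3.0  # any positive constant
f_original = W2 @ relu(W1 @ x)
f_rescaled = (W2 / c) @ relu((c * W1) @ x)   # uses relu(c*z) = c*relu(z) for c > 0

print(np.allclose(f_original, f_rescaled))   # True: the network function is unchanged
```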
[ 9, 6, 7 ]
[ 4, 3, 4 ]
[ "iclr_2018_Skz_WfbCZ", "iclr_2018_Skz_WfbCZ", "iclr_2018_Skz_WfbCZ" ]
iclr_2018_r1iuQjxCZ
On the importance of single directions for generalization
Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.
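The ablation analyses referred to throughout the reviews below reduce to a simple procedure: clamp individual units (or feature maps) to zero, one at a time in a random order, and record how accuracy degrades; a network that relies heavily on single directions degrades quickly. A minimal sketch of that procedure for a generic one-hidden-layer ReLU classifier (the network, data and single random ordering are placeholder assumptions, not the paper's models):

```python
import numpy as np

def accuracy(W1, W2, mask, X, y):
    """Accuracy with a binary mask applied to the hidden activations."""
    hidden = np.maximum(X @ W1, 0.0) * mask        # ablated units are clamped to 0
    return np.mean(np.argmax(hidden @ W2, axis=1) == y)

def cumulative_ablation_curve(W1, W2, X, y, rng):
    n_hidden = W1.shape[1]
    order = rng.permutation(n_hidden)              # one random ablation ordering
    mask = np.ones(n_hidden)
    curve = [accuracy(W1, W2, mask, X, y)]
    for unit in order:
        mask[unit] = 0.0                           # cumulatively ablate this unit
        curve.append(accuracy(W1, W2, mask, X, y))
    return np.array(curve)                         # accuracy vs. number of units ablated

# Placeholder network and data, purely to exercise the procedure.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))
y = rng.integers(0, 5, size=256)
W1 = rng.standard_normal((20, 64))
W2 = rng.standard_normal((64, 5))
print(cumulative_ablation_curve(W1, W2, X, y, rng)[:5])
```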
accepted-poster-papers
The paper contributes to a body of empirical work towards understanding generalization in deep learning. They do this through a battery of experiments studying "single directions" or selectivity of small groups of neurons. The reviewers that have actively participated agree that the revision is of high quality, impact, originality, and significance. The issue of a lack of prescriptiveness was raised by one reviewer. I agree with the majority that this is not necessary, but nevertheless, the revision makes some suggestions. I urge the authors to express the appropriate amount of uncertainty regarding any prescriptions that have not been as thoroughly vetted!
train
[ "H1gh0U_lG", "SyGCUouxf", "r1On1W5xf", "H1qU4p5GM", "HyoMFT9fz", "Hk6Twpcfz", "S1TldacMM", "HyKcH6czz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "\nSummary:\n- nets that rely on single directions are probably overfitting\n- batch norm helps not having large single directions\n- high class selectivity of single units is a bad measure to find \"important\" neurons that help a NN generalize.\n\nThe experiments that this paper does are quite interesting, somewhat confirming intuitions that the community had, and bringing new insights into generalization. The presentation is good overall, but many minor improvements could help with readability.\n\n\nRemarks:\n- The first thing you should say in this paper is what you mean by \"single direction\", at least an intuition, to be refined later. The second sentence of section 2 could easily be plugged in your abstract.\n- You should already mention in section 2.1 that you are using ReLUs, otherwise clamping to 0 might take a different sense.\n- considering the lack of page limit at ICLR, making *all* your figures bigger would be beneficial to readability.\n- Figure 2's y values drop rapidly as a function of x, maybe make x have a log scale or something that zooms in near 0 would help readability.\n- Figure 3b's discrete regimes is very weird, did you actually look at how much these clusters converged to the same solution in parameter space?\n- Figure 4a is nice, but an additional figure zooming in on the first 2 epochs would be really great, because that AUC curve goes up really fast in the beginning.\n- Arpit et al. find that there is more cross-class information being shared for true labels than random labels. Considering you find that low class selectivity is an indicator of good generalization, would it make sense to look at \"cross-class selectivity\"? If a neuron learns a feature shared by 2 or more classes, then it has this interesting property of offering a discrimination potential for multiple classes at the same time, rather than just 1, making it more \"useful\" potentially, maybe less adversary prone?\n- You say in the figure captions that you use random orderings of the features to perform ablation, but nowhere in the main text (which would be nice).\n\n", "This is an \"analyze why\" style of paper: the authors attempt to explain the relationship between some network property (in this case, \"reliance on single directions\"), and a desired performance metric (in this case, generalization ability). The authors quantify a variety of related ways to measure \"reliance on single directions\" and show that the more reliant on a single directions a given network is, the less well it generalizes. \n\nClarity: The paper is fairly clearly written. Sometimes key details are in the footnotes (e.g. see footnote 3) -- not sure why -- but on the whole, I think the followed the paper reasonably well. \n\nQuality: The work makes a good-faith attempt to be fairly systematic -- e.g evaluating several different types of network structures, with reasonable numbers of random initializations, and also illustrates the main point in several different comparatively independent-seeming ways. I feel fairly confident that the results are basically right within the somewhat limited domain that the authors explore. \n\nOriginality: This work is one in a series of papers about the topic of trying to understand what leads to good generalization in deep neural networks. I don't know that the concept of \"reliance on a single direction\" seems especially novel to me, but on the other hand, I can't think of another paper that precisely investigates this notion the way it is done here. 
\n\nSignificance: The work touches on some important issues. I think the demonstration that the existence of strongly class-selective neurons is not a good correlate for generalization is interesting. This point illustrates something that has made me a bit uncomfortable with the trend toward \"interpretable machine learning\" that has been arising recently: in many of those results, it is shown that some fraction of the units at various levels of a trained deepnet have optimal driving stimuli that seem somewhat interpretable, with the implication that the existence of such units is an important correlate of network performance. There has even been some claims that better-performing networks have more \"single-direction\" interpretable units [1]. The fact that the current results seem directly in contradiction to that line of work is interesting, and the connections to batch normalization and dropout are for the same reason interesting. However, I wish the authors had grappled more directly with the apparent contradiction with (e.g.) [1]. There is probably a kind of tradeoff here. The closer the training dataset is to what is being tested for \"generalization\", the more likely that having single-direction units is useful; and vice-versa. I guess the big question is: what types of generalization are actually demanded / desired in real deployed machine learning systems (or in the brain)? How does those cases compare with the toy examples analyzed here? The paper doesn't go far enough in really addressing these questions, but it is sort of beginning to make an effort. \n\nHowever, for me the main failing of the paper is that it's fairly descriptive without being that prescriptive. Does using their metric of reliance on a single direction, as a regularizer in and of itself, add anything above any beyond existing regularizers (e.g. batch normalization or dropout)? It doesn't seem like they tried. This seems to me the key question to understanding the significance of their results. Is \"reliance on single direction\" actually a good regularizer as such, especially for \"real\" problems like (e.g.) training a deep Convnet on (e.g.) ImageNet or some other challenging dataset? Would penalizing for this quantity improve the generalization of a network trained on ImageNet to other visual datasets (e.g. MS-COCO)? If so, this would be a very significant result and would make me really care about their idea of \"reliance on a singe direction\". If such results do not hold, it seems to me like one more theoretical possibility that would bite the dust when tested at scale. \n\n[1] http://netdissect.csail.mit.edu/final-network-dissection.pdf", "article summary: \nThe authors use ablation analyses to evaluate the reliance on single coordinate-aligned directions in activation space (i.e. the activation of single units or feature maps) as a function of memorization. They find that the performance of networks that memorize more are also more affected by ablations. This result holds even for identical networks trained on identical data. The dynamics of this reliance on single directions suggest that it could be used as a criterion for early stopping. The authors discuss this observation in relation to dropout and batch normalization. Although dropout is an effective regularizer to prevent memorization of random labels, it does not prevent over-reliance on single directions. 
Batch normalization does appear to reduce the reliance on single directions, providing an alternative explanation for the effectiveness of batch normalization. Networks trained without batch normalization also demonstrated a significantly higher amount of class selectivity in individual units compared to networks trained with batch normalization. Highly selective units were found to be no more important than units that were not selective to a particular class. These results suggest that highly selective units may actually be harmful to network performance. \n\n* Quality: The paper presents thorough and careful empirical analyses to support its claims.\n* Clarity: The paper is very clear and well-organized. Sufficient detail is provided to reproduce the results.\n* Originality: This work is one of many recent papers trying to understand generalization in deep networks. Their description of the activation space of networks that generalize compared to those that memorize is novel. The authors thoroughly relate their findings to related work on generalization, regularization, and pruning. However, the authors may wish to relate their findings to recent reports in neuroscience observing similar phenomena (see below).\n* Significance: The paper provides valuable insight that helps to relate existing theories about generalization in deep networks. The insights of this paper will have a large impact on regularization, early stopping, generalization, and methods used to explain neural networks. \n\nPros:\n* Observations are replicated for several network architectures and datasets. \n* Observations are very clearly contextualized with respect to several active areas of deep learning research.\nCons:\n* The class selectivity measure does not capture all class-related information that a unit may pass on. \n\nComments:\n* Regarding the class selectivity of single units, there is a growing body of literature in neurophysiology and neuroimaging describing similar observations where the interpretation has been that a primary role of any neural pathway is to “denoise” or cancel out the “distractor” rather than just amplifying the “signal” of interest. \n * Untuned But Not Irrelevant: The Role of Untuned Neurons In Sensory Information Coding, https://www.biorxiv.org/content/early/2017/09/21/134379\n * Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles https://www.ncbi.nlm.nih.gov/pubmed/28275096\n * On the interpretation of weight vectors of linear models in multivariate neuroimaging http://www.sciencedirect.com/science/article/pii/S1053811913010914\n * see also LEARNING HOW TO EXPLAIN NEURAL NETWORKS https://openreview.net/forum?id=Hkn7CBaTW\n* Regarding the intuition in section 3.1, \"The minimal description length of the model should be larger for the memorizing network than for the structure-finding network. As a result, the memorizing network should use more of its capacity than the structure-finding network, and by extension, more single directions”. Does reliance on single directions not also imply a local encoding scheme? We know that for a fixed number of units, a distributed representation will be able to encode a larger number of unique items than a local one. Therefore if this behaviour was the result of needing to use up more of the capacity of the network, wouldn’t you expect to observe more distributed representations? \n\nMinor issues:\n* In the first sentence of section 2.3, you say you analyzed three models and then you only list two. 
It seems you forgot to include ResNet trained on ImageNet.", "We wish to thank the reviewers for their thoughtful and thorough reviews. In particular, we are glad that the reviewers found our paper to be \"an important piece of the generalization puzzle,\" that \"the work touches on some important issues,\" and that the experiments are \"quite interesting,\" \"bringing new insights into generalization.\" We are also glad that the reviewers found that the paper contains \"thorough and careful empirical analyses,\" \"is very clear and well-organized,\" and that the \"presentation is good overall.\"\n\nTo address the reviewers' comments we have performed several additional experiments, including additional figures expanding on the prescriptive implications of our work and detailing the relationship between mutual information, batch normalization, and unit importance. We have also made a number of changes to the text which we feel have significantly improved its clarity. For detailed descriptions of the changes we have made, please see our responses to individual reviewers below. As a result of these changes, our paper is now a little more than nine pages long. Due in large part to the reviewers’ constructive feedback, we believe that our paper has been substantially strengthened. ", "We thank the reviewer for their kind comments and helpful feedback. We have incorporated the reviewer’s suggestions into the manuscript, and feel that they have substantially improved the clarity of the work. We have also performed additional experiments to address the importance of units with information about multiple classes, as the reviewer suggests. Details of these changes are below: \n\n\"The first thing you should say in this paper is what you mean by 'single direction.'\"\n\nWe have now defined ‘single directions’ in the abstract as the reviewer suggests, as well as adding an additional definition in the Introduction. We agree that this improves the clarity of the paper substantially. \n\n\"You should already mention in section 2.1 that you are using ReLUs.\"\n\nWe have moved section 2.3 to section 2.1, thereby highlighting that we are using ReLU’s from the start, as the reviewer suggests.\n\n\"considering the lack of page limit at ICLR, making *all* your figures bigger would be beneficial to readability.\"\n\nWe were perhaps overly concerned about the ICLR 8-page soft limit in the first draft. We have increased the size of all the figures, as the reviewer suggests, and indeed, this improves the presentation of the paper.\n\n\"Figure 2's y values drop rapidly as a function of x, maybe make x have a log scale or something that zooms in near 0 would help readability.\"\n\nWe have now re-plotted Figure 2 using a log scale for the x-axis. We feel it has substantially improved the figure. We thank the reviewer for the great suggestion!\n\n\"Figure 3b's discrete regimes is very weird, did you actually look at how much these clusters converged to the same solution in parameter space?\"\n\nWe absolutely agree that these discrete regimes are very weird, and fully intend to chase down the cause, and more generally, evaluate empirical convergence properties of multiple networks with the same topology but different random seeds in future work. However, an initial investigation into the causes of these regimes suggests that the answer is not obvious, and we believe that this question is beyond the scope of the present work.\n\n\"Arpit et al. 
find that there is more cross-class information being shared for true labels than random labels. Considering you find that low class selectivity is an indicator of good generalization, would it make sense to look at \"cross-class selectivity\"? If a neuron learns a feature shared by 2 or more classes, then it has this interesting property of offering a discrimination potential for multiple classes at the same time, rather than just 1, making it more \"useful\" potentially, maybe less adversary prone?\"\n\nWe agree with the reviewer, and indeed, we had included a discussion of the downsides of class selectivity in section titled ‘Quantifying class selectivity’. While class selectivity absolutely ignores units with information about multiple classes, it has been used extensively in neuroscience to find neurons with strong tuning properties (e.g., the cat neurons prominently featured in previous deep learning analyses). In contrast, a metric such as mutual information should highlight units that are informative about multiple classes (with ‘cross-class selectivity’), but not necessarily units that are obviously interpretable.\n\nHowever, we agree that it would be worthwhile to assess the relationship between cross-class selectivity (as measured by mutual information) and importance. To this end, we have performed a series of additional experiments using mutual information (Fig. 6b; A4; Section A.5). We found that while mutual information was slightly more predictive of unit importance than class selectivity it is still not a good predictor of unit importance (Fig. A4, p.15). Interestingly, while we had previously shown that batch normalization decreases class selectivity, we found that batch normalization actually increases mutual information (Fig. 6b, p.7). This result suggests that batch normalization encourages representations that are distributed across units as opposed to representations in which information about single classes is concentrated in single units. We have added text discussing these results in sections 2.3 (p.3) and 3.4 (p.7).\n\n\"You say in the figure captions that you use random orderings of the features to perform ablation, but nowhere in the main text (which would be nice).\"\n\nWe have now included a statement in the main text saying that each ablation curve contains multiple random orderings (p.4, first incomplete paragraph).", "We thank the reviewer for their constructive feedback and their thorough reading of our paper. We have performed additional experiments (to show that the insights of this work can be used prescriptively) and provided additional discussion to work towards addressing the concerns the reviewer has raised. We have provided detailed responses to these comments as well as pointers to changes in the paper below:\n\n\"Sometimes key details are in the footnotes...\"\n\nWe initially put these details in footnotes to stay below the soft page limit. We have now moved all footnotes containing key details into the main text as the reviewer has requested.\n\n\"Originality: This work is one in a series of papers about the topic of trying to understand what leads to good generalization in deep neural networks. 
I don't know that the concept of \"reliance on a single direction\" seems especially novel to me, but on the other hand, I can't think of another paper that precisely investigates this notion the way it is done here.\"\n\nAs we discuss in both the introduction and related work sections of our paper, the concept of single direction reliance is related to previous theoretical work such as flat minima. However, to our knowledge, single direction reliance has never been empirically tested explicitly. Nonetheless, if the reviewer would be willing to point us in the direction of any related papers that we may have omitted from our manuscript, we would greatly appreciate it as we want to ensure that our discussion of prior work is as complete as possible.\n\n\"There has even been some claims that better-performing networks have more \"single-direction\" interpretable units [1]. The fact that the current results seem directly in contradiction to that line of work is interesting, and the connections to batch normalization and dropout are for the same reason interesting. However, I wish the authors had grappled more directly with the apparent contradiction with (e.g.) [1].\"\n\nWe have included an additional paragraph in the related work section (Section 4, p.9, third complete paragraph) comparing our work more extensively to the work of Bau et al. [1]. We believe that Bau et al. is extremely interesting work, and we note that, in many cases, our results are largely consistent with what Bau et al. observed; for example, we both found a relationship between selectivity and depth. However, we do acknowledge that they observed a correlation between network performance and the number of concept-selective units (Fig. 12 in Bau et al.). We believe that there are three potential explanations for this discrepancy:\n\n (1) As we note at the end of Section 2.3, class selectivity and feature selectivity (akin to the concept selectivity used in Bau et al.) may exhibit different properties. \n\n (2) Bau et al. compare networks with different numbers of filters (e.g., AlexNet, GoogleNet, VGG, and ResNet-152s), but measure the absolute number of unique detectors. It is possible that the number of unique detectors in better performing networks, such as ResNets, is simply a function of these networks having more filters. \n\n (3) Finally, both Bau et al. and our work observed a relationship between selectivity and depth (see Fig. 5 in Bau et al., and Fig. A2 in our manuscript). As Bau et al. compared the number of unique detectors across networks with substantially different depths, the increase in the number of unique detectors may have been due to the different depths of these networks. In line with this observation (as well as point 2 above), we note that in Fig. 12 in Bau et al., which plots the number of unique detectors as a function of accuracy on the action40 dataset, there appears to be little relationship when comparing only across points from the same model architecture. \n\n\"‘The closer the training dataset is to what is being tested for \"generalization\", the more likely that having single-direction units is useful; and vice-versa. 
I guess the big question is: what types of generalization are actually demanded / desired in real deployed machine learning systems (or in the brain)?\"\n\nWe have now included an additional paragraph in the Discussion section (p.9 last incomplete paragraph) addressing the distinction between different types of generalization based on the overlap between the train and test distributions. We believe that understanding how single direction reliance varies based on this overlap is an extremely interesting question although we feel it is beyond the scope of the present work. \n\n[1] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network Dissection: Quantifying Interpretability of Deep Visual Representations. 2017. doi: 10.1109/CVPR.2017. 354. URL http://arxiv.org/abs/1704.05796. \n\n\n\n\n", "\"However, for me the main failing of the paper is that it's fairly descriptive without being that prescriptive. Does using their metric of reliance on a single direction, as a regularizer in and of itself, add anything above and beyond existing regularizers (e.g. batch normalization or dropout)?\"\n\nThough we would like to note that the primary goal of this work is to understand what factors lead to good generalization performance rather than to engineer a new model, we agree with the reviewer that a demonstration that the insights from our work can be used to directly improve model performance would be extremely valuable. However, all of the most obvious methods to regularize single direction reliance seem to reduce to dropout or one of its close variants. This is not to say that we believe there is no such regularizer -- it is merely to say that it is not obviously apparent. We have added a sentence in the Discussion to this effect (p.9, last complete paragraph). \n\nNonetheless, we do note that the insights from our work can be used prescriptively to indirectly improve models, as they provide a way to assess generalization performance without the need for a held-out validation set. In the original draft, we explored this in Fig. 4a-b as a means for early stopping. To expand on the potential for the method in this direction, we have added an additional experiment, in which we show that single direction reliance can be used as an effective method for hyperparameter selection as well (Fig. 4c, p.5 last complete paragraph). We believe that this approach may prove extremely useful, especially in situations in which labeled data is rare. ", "First off, we would like to thank the reviewer for the kind review and the helpful feedback, especially with respect to class selectivity and the relationship to neuroscience. We have provided detailed responses to these comments as well as pointers to changes in the paper below:\n\n\"The class selectivity measure does not capture all class-related information that a unit may pass on.\"\n\nWe agree with the reviewer, and indeed, we had included a discussion of the downsides of class selectivity in section titled ‘Quantifying class selectivity.’ While class selectivity absolutely ignores units with information about multiple classes, it has been used extensively in neuroscience to find neurons with strong tuning properties (e.g., the cat neurons prominently featured in previous deep learning analyses). 
In contrast, a metric such as mutual information should highlight units that are informative about multiple classes, but not necessarily units that are obviously interpretable.\n\nHowever, we agree that it would be worthwhile to assess the relationship between multi-class selectivity (as measured by mutual information) and importance. To this end, we have performed a series of additional experiments using mutual information (Fig. 6b; A4; Section A.5). We found that while mutual information was slightly more predictive of unit importance than class selectivity, it is still not a good predictor of unit importance (Fig. A4, p.15). Interestingly, while we had previously shown that batch normalization decreases class selectivity, we found that batch normalization actually increases mutual information (Fig. 6b, p.7). This result suggests that batch normalization encourages representations that are distributed across units as opposed to representations in which information about single classes is concentrated in single units. We have added text discussing these results in sections 2.3 (p.3) and 3.4 (p.7).\n\n\"... the authors may wish to relate their findings to recent reports in neuroscience ...\"\n\nWe are strong advocates of the idea that methods and ideas from neuroscience are useful for understanding machine learning models, and so, we have also included an additional paragraph in our ‘related work’ section (p.8, first complete paragraph) contextualizing our work in recent neuroscience developments regarding robustness to noise, distributed representations, and correlated variability, including references that the reviewer has provided and several other neuroscience papers that influenced our work. \n\n\"In the first sentence of section 2.3, you say you analyzed three models and then you only list two. It seems you forgot to include ResNet trained on ImageNet.\"\n\nGreat catch! We have resolved this now.\n" ]
[ 7, 5, 9, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1iuQjxCZ", "iclr_2018_r1iuQjxCZ", "iclr_2018_r1iuQjxCZ", "iclr_2018_r1iuQjxCZ", "H1gh0U_lG", "SyGCUouxf", "Hk6Twpcfz", "r1On1W5xf" ]
iclr_2018_r1q7n9gAb
The Implicit Bias of Gradient Descent on Separable Data
We show that gradient descent on an unregularized logistic regression problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalization to the cross-entropy loss. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.
accepted-poster-papers
The paper is tackling an important open problem. AnonReviewer3 identified some technical issues that led them to rate the manuscript 5 (i.e., just below the acceptance threshold). Many of these issues are resolved by the reviewer in their review, and the author response makes it clear that these fixes are indeed correct. However, other issues that the reviewer raises are not provided with solutions. The authors address these points, but in one case at least (regarding w_infinity), I find the new text somewhat hand-wavy. Regardless, I'm inclined to accept the paper because the issues seem to be straightforward. Ultimately, the authors are responsible for the correctness of the results.
train
[ "S1jezarxG", "HyBrwGweG", "HkS9oWtef", "SymFQAPfz", "ByMhM0wGz", "HJxIf0wGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper offers a formal proof that gradient descent on the logistic\nloss converges very slowly to the hard SVM solution in the case where\nthe data are linearly separable. This result should be viewed in the\ncontext of recent attempts at trying to understand the generalization\nability of neural networks, which have turned to trying to understand\nthe implicit regularization bias that comes from the choice of\noptimizer. Since we do not even understand the regularization bias of\noptimizers for the simpler case of linear models, I consider the paper's\ntopic very interesting and timely.\n\nThe overall discussion of the paper is well written, but on a more\ndetailed level the paper gives an unpolished impression, and has many\ntechnical issues. Although I suspect that most (or even all) of these\nissues can be resolved, they interfere with checking the correctness of\nthe results. Unfortunately, in its current state I therefore do not\nconsider the paper ready for publication.\n\n\nTechnical Issues:\n\nThe statement of Lemma 5 has a trivial part and for the other part the\nproof is incorrect: Let x_u = ||nabla L(w(u))||^2.\n - Then the statement sum_{u=0}^t x_u < infinity is trivial, because\n it follows directly from ||nabla L(w(u))||^2 < infinity for all u. I\n would expect the intended statement to be sum_{u=0}^infinity x_u <\n infinity, which actually follows from the proof of the lemma.\n - The proof of the claim that t*x_t -> 0 is incorrect: sum_{u=0}^t x_u\n < infinity does not in itself imply that t*x_t -> 0, as claimed. For\n instance, we might have x_t = 1/i^2 when t=2^i for i = 1,2,... and\n x_t = 0 for all other t.\n\nDefinition of tilde{w} in Theorem 4:\n - Why would tilde{w} be unique? In particular, if the support vectors\n do not span the space, because all data lie in the same\n lower-dimensional hyperplane, then this is not the case.\n - The KKT conditions do not rule out the case that \\hat{w}^top x_n =\n 1, but alpha_n = 0 (i.e. a support vector that touches the margin,\n but does not exert force against it). Such n are then included in\n cal{S}, but lead to problems in (2.7), because they would require\n tilde{w}^top x_n = infinity, which is not possible.\n\nIn the proof of Lemma 6, case 2. at the bottom of p.14:\n - After the first inequality, C_0^2 t^{-1.5 epsilon_+} should be \n C_0^2 t^{-epsilon_+}\n - After the second inequality the part between brackets is missing an\n additional term C_0^2 t^{-\\epsilon_+}.\n - In addition, the label (1) should be on the previous inequality and\n it should be mentioned that e^{-x} <= 1-x+x^2 is applied for x >= 0\n (otherwise it might be false).\nIn the proof of Lemma 6, case 2 in the middle of p.15:\n - In the line of inequality (1) there is a t^{-epsilon_-} missing. 
In\n the next line there is a factor t^{-epsilon_-} too much.\n - In addition, the inequality e^x >= 1 + x holds for all x, so no need\n to mention that x > 0.\n\nIn Lemma 1:\n - claim (3) should be lim_{t \\to \\infty} w(t)^\\top x_n = infinity\n - In the proof: w(t)^top x_n > 0 only holds for large enough t.\n\nRemarks:\n\np.4 The claim that \"we can expect the population (or test)\nmisclassification error of w(t) to improve\" because \"the margin of w(t)\nkeeps improving\" is worded a little too strongly, because it presumes\nthat the maximum margin solution will always have the best\ngeneralization error.\n\nIn the proof sketch (p.3):\n - Why does the fact that the limit is dominated by gradients that are\n a linear combination of support vectors imply that w_infinity will\n also be a non-negative linear combination of support vectors?\n - \"converges to some limit\". Mention that you call this limit\n w_infinity\n\n\nMinor Issues:\n\nIn (2.4): add \"for all n\".\n\np.10, footnote: Shouldn't \"P_1 = X_s X_s^+\" be something like \"P_1 =\n(X_s^top X_s)^+\"?\n\nA.9: ell should be ell'\n\nThe paper needs a round of copy editing. For instance:\n - top of p.4: \"where tilde{w} A is the unique\"\n - p.10: \"the solution tilde{w} to TO eq. A.2\"\n - p.10: \"might BOT be unique\"\n - p.10: \"penrose-moorse pseudo inverse\" -> \"Moore-Penrose\n pseudoinverse\"\n \nIn the bibliography, Kingma and Ba is cited twice, with different years.\n", "Paper focuses on characterising behaviour of the log loss minimisation on the linearly separable data. As we know, optimisation like this does not converge in a strict mathematical sense, as the norm of the model will grow to infinity. However, one can still hope for a convergence of normalised solution (or equivalently - convergence in term of separator angle, rather than parametrisation). This paper shows that indeed, log-loss (and some other similar losses), minimised with gradient descent, leads to convergence (in the above sense) to the max-margin solution. On one hand it is an interesting property of model we train in practice, and on the other - provides nice link between two separate learning theories.\n\nPros:\n- easy to follow line of argument\n- very interesting result of mapping \"solution\" of unregularised logistic regression (under gradient descent optimisation) onto hard max margin one\n\nCons:\n- it is not clear in the abstract, and beginning of the paper what \"convergence\" means, as in the strict sense logistic regression optimisation never converges on separable data. It would be beneficial for the clarity if authors define what they mean by convergence (normalised weight vector, angle, whichever path seems most natural) as early in the paper as possible.", "(a) Significance\nThe main contribution of this paper is to characterize the implicit bias introduced by gradient descent on separable data. The authors show the exact form of this bias (L_2 maximum margin separator), which is independent of the initialization and step size. The corresponding slow convergence rate explains the phenomenon that the predictor can continue to improve even when the training loss is already small. The result of this paper can inspire the study of the implicit bias introduced by gradient descent variants or other optimization methods, such as coordinate descent. 
In addition, the proposed analytic framework seems promising since it may be extended to analyze other models, like neural networks.\n\n(b) Originality\nThis is the first work to give the detailed characterizations of the implicit bias of gradient descent on separable data. The proposed assumptions are reasonable, but it seems to limit to the loss function with exponential tail. I’m curious whether the result in this paper can be applied to other loss functions, such as hinge loss.\n\n(c) Clarity & Quality \nThe presentation of this paper is OK. However, there are some places can be improved in this paper. For example, in Lemma 1, results (3) and (4) can be combined together. It is better for the authors to use another section to illustrate experimental settings instead of writing them in the caption of Figure 3.1. \n\nMinor comments: \n1. In Lemma 1 (4), w^T(t)->w(t)^T\n2. In the proof of Lemma 1, it’s better to use vector 0 for the gradient L(w)\n3. In Theorem 4, the authors should specify eta\n4. In appendix A, page 11, beta is double used\n5. In appendix D, equation (D.5) has an extra period\n", "We thank the reviewer for acknowledging the significance of our results, and for investing significant efforts in improving the quality of this manuscript. We uploaded a revised version in which all the reviewer comments were addressed, and the appendix was further polished. Notably,\n\n[Lemma 5 in appdendix]\n\n- Indeed, the upper limit of the sum over x_u should be 'infinity' instead of 't'.\n\n- It should be 'x_t -> 0', not 't*x_t -> 0'.\n\n[Definition of tilde{w} Theorem 4]\n\n- tilde{w} is indeed unique, given the initial conditions. We clarified this in Theorem 4 and its proof.\n\n- alpha_n=0 for the support vectors is only true for a measure zero of all datasets (we added a proof of this in appendix F). Thus, we clarified in the revision that our results hold for almost every dataset (and so, they are true with probability 1 for any data drawn from a continuous-valued distribution).\n\n[Why does the fact that the limit is dominated by gradients that are a linear combination of support vectors imply that w_infinity will also be a non-negative linear combination of support vectors?]\n\nWe clarified in the revision: “...The negative gradient would then asymptotically become a non-negative linear combination of support vectors. The limit w_{\\infinity} will then be dominated by these gradients, since any initial conditions become negligible as ||w(t)||->infinity (from Lemma 1)”.", "We thank the reviewer for the positive review and for the helpful comment. We uploaded a revised version in which clarified in the abstract that the weights converge “in direction” to the L2 max margin solution.", "We thank the reviewer for the positive review and for the helpful comments. We uploaded a revised version in which all the reviewer comments were addressed.\n\n[“I’m curious whether the result in this paper can be applied to other loss functions, such as hinge loss.”]\n\nWe believe our results could be extended to many other types of loss functions (in fact, we are currently working on such extensions). However, for the hinge loss (without regularization), gradient descent on separable data can converge to a finite solution which is not to the max margin vector. For example, if there is a single data point x=(1,0), and we start with a weight vector w=(2,2), the hinge loss and its gradient are both equal to zero. 
Therefore, no weight updates are performed, and we do not converge to the direction of the L2 max margin classifier: w=(1,0).\n\n[“It is better for the authors to use another section to illustrate experimental settings instead of writing them in the caption of Figure 3.1. “]\n\nWe felt it is easier to read if all details are summarized in the figure, and wanted to save space to fit the main paper into 8 pages. However, we can change this if required." ]
[ 5, 7, 8, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_r1q7n9gAb", "iclr_2018_r1q7n9gAb", "iclr_2018_r1q7n9gAb", "S1jezarxG", "HyBrwGweG", "HkS9oWtef" ]
iclr_2018_ByQpn1ZA-
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion. Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost. GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players’ parameters. One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium. Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence. We show that this view is overly restrictive. During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful. We provide empirical counterexamples to the view of GAN training as divergence minimization. Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail. We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful. This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.
accepted-poster-papers
AnonReviewer 2 and AnonReviewer 3 rated the paper highly, with AR3 even upgrading their score. AnonReviewer1 was less generous: " Overall, it is a good empirical study, raising a healthy set of questions. In this regard, the paper is worth accepting. However, I am still uncomfortable with the lack of answers and given that the revision does not include the additional discussion and experiments promised in the rebuttal, I will stay with my evaluation." The authors have promised to produce the discussion and new experiments. Given the nature of both (1: the discussion is already outlined in the response and 2: the experiments are straightforward to run), I'm inclined to accept the paper because it represents a solid body of empirical work.
train
[ "r12l4YDBz", "H1iaR_vSf", "SkqpgbLgM", "B1RLg-DNz", "BJAGbdrVf", "Sks895Fxz", "SkiI_Bixz", "HyqX2aMGG", "H1oisTzMf", "HkfYcazGz", "HksiG0DA-", "BySwLOUkM", "BkUrF7rJG", "ryDZ5ySJz", "HytAS7W1z", "H1N7QxeJM" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public", "author", "author", "public" ]
[ "Bugs were fixed yesterday and a flood of old emails were sent. But rest assured, I've been reading your comments actively. Thanks for your message.", "Yesterday, some of the authors got an e-mail asking us to respond ASAP to a comment from the AC. When we visited the webpage, we could not see a comment, even when logged in. We had earlier posted a reply to a similar comment \n\nWe contacted the ICLR program chairs yesterday and were told not to worry about it, that the AC had seen our reply in the meantime.\n\nToday we got another message saying to reply to the AC asap, quoting a comment that we're not able to see.\n\nI contacted the ICLR program chairs a second time and they say that they can't see the messages from the AC to me in the openreview system and that this might be a phishing attempt.\n\nAs a further comment, the messages from the AC did not reach all authors. Specifically, they did not reach the first author, who uploaded the submission, but instead went to a collaborator in a different country and time zone.\n\nIf the AC really has been trying to get in touch with us, I want to make it clear that we were trying to respond as quickly as possible, but we've been hampered by openreview bugs and miscommunications with the program chairs.\n\nSince the e-mail we received today actually fully quotes the comment that we're not able to see, we actually are able to respond. The comment asks for us to provide more experiments and discussions promised in the rebuttal. We actually did reply to a similar comment earlier explaining the situation, but apparently the AC can't see our reply, presumably due to an openreview bug. Our reply is: the 1st author will upload the latest revision today. It will include the requested discussions, but one of the requested experiments is still running. The experiments are computationally expensive and can't be accelerated without reducing their accuracy. The experiment will definitely be ready for the final copy deadline.", "The submission describes an empirical study regarding the training performance\nof GANs; more specifically, it aims to present empirical evidence that the\ntheory of divergence minimization is more a tool to understand the outcome of\ntraining (i.e. Nash equillibrium) than a necessary condition to be enforce\nduring training itself.\n\nThe work focuses on studying \"non-saturating\" GANs, using the modified generator\nobjective function proposed by Goodfellow et al. in their seminal GAN paper, and\naims to show increased capabilities of this variant, compared to the \"standard\"\nminimax formulation. Since most theory around divergence minimization is based\non the unmodified loss function for generator G, the experiments carried out in\nthe submission might yield somewhat surprising results compared the theory.\n\nIf I may summarize the key takeaways from Sections 5.4 and 6, they are:\n- GAN training remains difficult and good results are not guaranteed (2nd bullet\n point);\n- Gradient penalties work in all settings, but why is not completely clear;\n- NS-GANs + GPs seems to be best sample-generating combination, and faster than\n WGAN-GP.\n- Some of the used metrics can detect mode collapse.\n\nThe submission's (counter-)claims are served by example (cf. 
Figure 2, or Figure\n3 description, last sentence), and mostly relate to statements made in the WGAN\npaper (Arjovsky et al., 2017).\n\nAs a purely empirical study, it poses more new and open questions on GAN\noptimization than it is able to answer; providing theoretical answers is\ndeferred to future studies. This is not necessarily a bad thing, since the\nextensive experiments (both \"toy\" and \"real\") are well-designed, convincing and\ncomprehensible. Novel combinations of GAN formulations (non-saturating with\ngradient penalties) are evaluated to disentangle the effects of formulation\nchanges.\n\nOverall, this work is providing useful experimental insights, clearly motivating\nfurther study.\n", "Thanks again for your review! Before the rebuttal process concludes, do you have any outstanding questions regarding our revision? We ensure this includes specific practical suggestions for GAN-training to guide the community. In regards to your point about theoretical results, we hope our paper serves to encourage future theoretical research compatible with our observed empirical results. We believe this paper tests a prevailing theoretical understanding of GAN training as directly as possible and that these observations may help validate or invalidate later theoretical models.", "All the figures for synthetic experiments are now updated to use the Frechet distance between Gaussians, instead of l2 distance. Thank you for your suggestion!", "Quality: The authors study non-saturating GANs and the effect of two penalized gradient approaches. The authors consider a number of thought experiments to demonstrate their observations and validate these on real data experiments. \n\nClarity: The paper is well-written and clear. The authors could be more concise when reporting results. I would suggest keeping the main results in the main body and move extended results to an appendix.\n\nOriginality: The authors demonstrate experimentally that there is a benefit of using non-saturating GANs. More specifically, the provide empirical evidence that they can fit problems where Jensen-Shannon divergence fails. They also show experimentally that penalized gradients stabilize the learning process.\n\nSignificance: The problems the authors consider is worth exploring further. The authors describe their finding in the appropriate level of details and demonstrate their findings experimentally. However, publishing this work is in my opinion premature for the following reasons:\n\n- The authors do not provide further evidence of why non-saturating GANs perform better or under which mathematical conditions (non-saturating) GANs will be able to handle cases where distribution manifolds do not overlap;\n- The authors show empirically the positive effect of penalized gradients, but do not provide an explanation grounded in theory;\n- The authors do not provide practical recommendations how to set-up GANs and not that these findings did not lead to a bullet-proof recipe to train them.\n\n", "This paper answers recent critiques about ``standard GAN'' that were recently formulated to motivate variants based on other losses, in particular using ideas from optimal transport. It makes main points\n1) ``standard GAN'' is an ill-defined term that may refer to two different learning criteria, with different properties\n2) though the non-saturating variant (see Eq. 
3) of ``standard GAN'' may converge towards a minimum of the Jensen-Shannon divergence, it does not mean that the minimization process follows gradients of the Jensen-Shannon divergence (and conversely, following gradient paths of the Jensen-Shannon divergence may not converge towards a minimum, but this was rather the point of the previous critiques about ``standard GAN''). \n3) the penalization strategies introduced for ``non-standard GAN'' with specific motivations, may also apply successfully to the ``standard GAN'', improving robustness, thereby helping to set hyperparameters.\nNote that item 2) is relevant in many other setups in the deep learning framework and is often overlooked.\n\nOverall, I believe that the paper provides enough material to substantiate these claims, even if the message could be better delivered. In particular, the writing is sometimes ambiguous (e.g. in Section 2.3, the reader who did not follow the recent developments on the subject on arXiv will have difficulties to rebuild the cross-references between authors, acronyms and formulae). The answers to the critiques referenced in the \n paper are convincing, though I must admit that I don't know how crucial it is to answer these critics, since it is difficult to assess wether they reached or will reach a large audience.\n\nDetails:\n- p. 4 please do not qualify KL as a distance metric \n- Section 4.3: \"Every GAN variant was trained for 200000 iterations, and 5 discriminator updates were done for each generator update\" is ambiguous: what is exactly meant by \"iteration\" (and sometimes step elsewhere)? \n- Section 4.3: the performance measure is not relevant regarding distributions. The l2 distance is somewhat OK for means, but it makes little sense for covariance matrices. ", "Thanks for the detailed and thorough review! \n\nWe have now updated the paper with a practical considerations section as well as updated the conclusion to reflect some of your take aways, such as:\n\n- GAN training remains difficult and good results are not guaranteed;\n- Gradient penalties work in all settings, but why is not completely clear;\n- NS-GANs + GPs seems to be best sample-generating combination, and faster than WGAN-GP.\n- Some of the used metrics can detect mode collapse. \n\n", "Thank you for your review. We hope to have addressed most of your concerns below:\n\n* We don't believe that the paper is premature for the following reasons:\n - Gradient penalties are helpful to stabilize GAN training, regardless of the cost function. This is also supported by \n another paper (https://arxiv.org/pdf/1705.07215v4.pdf). \n - Gradient penalties are a cost effective way to improve the performance of a GAN. Compared to Wasserstein \n GAN, in which one needs to do 5 discriminator updates per generator update, DRAGAN-NS and GAN-GP still do 1 \n discriminator update per generator update.\n - Using multiple metrics can provide a better overview of how an algorithm is performing, as opposed to just using \n inception score.\n We will include an additional section in the paper that includes this discussion.\n\n* Our empirical approach to the paper is not a disregard for the importance of theory, but rather a push for an encompassing theory which is inline with the experimental results in our paper. We prove empirically that the exported regularization techniques work outside their proposed scopes, thus showing that a different theoretical justification is needed. 
In addition, we show that a theoretical view of GAN training as divergence minimization is incompatible with empirical results. Specifically, the NS-GAN through GAN training can converge on data distributions that gradient updates on the underlying equilibrium divergence would not. We wish to encourage the research community to continue to explore theories compatible with these observations.\n\n* Please also see the takeaways of AnonReviewer3: “As a purely empirical study, it poses more new and open questions on GAN optimization than it is able to answer; providing theoretical answers is deferred to future studies. This is not necessarily a bad thing, since the extensive experiments (both \"toy\" and \"real\") are well-designed, convincing and comprehensible.\"\n\n* To make the paper easier to read, we will move more results to the appendix.\n\n* Regarding the theory of gradient penalties, this is something we do not have a handle on currently. We show here that gradient penalties work better independently of the theoretical justification they were introduced with. Perhaps a future avenue of work would be to see if these gradient penalties are related to work which tries to analyze and stabilize GANs by looking at the properties of the Jacobian of the vector field associated with the game (see https://arxiv.org/pdf/1705.10461.pdf, https://arxiv.org/abs/1706.04156)\n\n* To clarify when NS-GAN will not work, we will perform experiments which change the number of updates in the discriminator, and see how that affects performance of model. We note however that for the toy data experiments (Section 4) we performed 5 discriminator updates per generator update.\n", "Thank you for the review, your comments made the paper more accessible and improves our experiment evaluations on toy data.\n\n* We will replace the l2 distance between the covariance matrices with the Frechet Distance between two Gaussians as used in Heusel et al. (2017) and update our figures accordingly. \n* We will clarify the statement regarding the KL, together with the difference between step and iteration.\n* We will update section 2.3 to ensure that it is more accessible to a wider audience.\n", " In the paper of \"TOWARDS PRINCIPLED METHODS FOR TRAINING GENERATIVE ADVERSARIAL NETWORKS\" by Arjovsky, they show two results:\n 1. Lemma 1 shows that if the dimension of Z is less than the dimension of X, then g(Z) will be a set of measure 0 in X. This implies that it is almost impossible to generate samples that are similar to true data samples. \n 2. Theorem 2.6 shows that with the non-saturating loss function for the generator, the gradient is of generator has infinite expectation and variance. It implies that using non-saturating loss function is not stable. \n\n In the paper, the authors show that the non-saturating GAN can learn a high dimensional distribution even though the noise is 1-D. This finding seems to be not aligned with the arguments in [Arjovsky 2017]. I would appreciate if the authors could give more intuitive ideas to explain the relation between the experiment results and the theoretical arguments in Arjovsky 2017. Thanks!", "Thanks for the clarification. \n\nAnd yes, but just to again reiterate, we are not suggesting that you apply the DRAGAN gradient penalty inside the 1D uniform [-1, 1] region. Enforcing the gradient norm to be 1 inside here would fail as you described. 
You should have no issue fitting this training distribution if you only apply the DRAGAN gradient penalty on the *boundaries* of the data distribution, i.e. -1 + delta^i and +1 + delta^j.", " Dear authors, \n I was using the code shared on the github link in the DRAGAN paper (https://arxiv.org/pdf/1705.07215v1.pdf). Sorry, there was a typo in my last message. Both the discriminator and the generator have 2 layers. The conventional GAN and WGAN works even without regularization. \n\n I strongly agree with the authors that it is very interesting to investigate something that empirically works but is not fully known in theory. I think that is why deep learning is so attractive to so many people including myself. On the other hand, I also think we should investigate something that makes sense. Decades ago, we already established the \"universal approximation\" theorem for neural networks. We know that it can fundamentally fits any continuous function on a compact set. If we know it cannot fit certain functions, say a purely linear network, we would not even start training it to fit high-dimensional complicated functions. \n\nI think my argument is based on this philosophy. We know that the DRAGAN regularization is actually wrong for the original GAN in some cases, because gradient norm should be 0 at the optimal point (for example the uniform [-1,1] example). It may have good results for some applications in training images, however, for some applications (say, autonomous driving) we cannot take any risk in applying an algorithm that does not work in come corner cases. In this case, I would devote more time to investigate other methods that are fundamentally correct, for example DRAGAN regularization on WGAN. \n\n", "Thanks for the comment, Leon.\n\nAs a quick validation of your architectures and your training code, did you first confirm that you were able to fit the [-1,1] uniform distribution with standard GAN or with improved WGAN? Also, when you say that your discriminator has one layer, I’m assuming you mean it has one hidden layer and is capable of producing non-linear decision boundaries? An affine discriminator would be insufficient.\n\nTo your point about the theoretical correctness of the penalty, the v1 DRAGAN paper (https://arxiv.org/pdf/1705.07215v1.pdf) “How to Train Your DRAGAN” first introduces this regularization penalty onto the *original* GAN discriminator objective (defined as the minimax GAN variant in our paper) as seen in Algorithm 1. However, this paper actually had an error that their noisy data was not even centered on the original data manifold! Despite this bug, DRAGAN still succeeded in producing better samples. \n\nThe regularization is not necessarily 'fundamentally wrong'. Instead, it is very counterintuitive that it works, given our current level of theoretical understanding. That means that the empirical results showing that it works are more interesting. Empirical results are mostly useful to science when they are surprising. If we experimented with a method that theory predicts should work and it worked, we would not have learned anything. Our results are surprising because we experimented with a method that the theory does not predict should work and yet it worked. This suggests that the theory is at best incomplete and needs to be revised.\n\nFor your particular experimental issue, the regularization should be applied in a region *around* the real-data manifold, not over the entire real-data region. 
If data manifold is 1D, you should not be applying the DRAGAN penalty throughout the entire region of [-1,1], only at the boundaries. In higher dimensions, these perturbations will almost always be off-manifold. ", "Thanks for you comments, Xu. In response,\n\n1. Lemma 1 does indeed show that g(z) will be a set of measure 0 in X for dim(Z) < dim(X), however, this does not necessarily imply that it’s impossible to generate samples matching the data manifold. The authors are simply stating that it is plausible that the manifold that the data lies on and the manifold of points produced by the generator are disjoint in X. This would imply a perfect discriminator may exist between the manifolds. Further, if one tried to bring these manifolds together by minimizing a JS-divergence, the gradients would be meaningless. This motivated the authors' later development of a softer distance measure and the Wasserstein GAN. \n2. Theorem 2.6 assumes that the noise of D and the gradient of D are decorrelated, which may be too strong of an assumption. The authors acknowledge this and then show empirical gradient norms while training DCGAN, which grow with training iterations. However, in practice, one typically does not train the Discriminator for so many iterations and thus one may avoid the extreme variance cautioned with this theorem.\n", "Dear authors, \n\nWe did a simple exercise is to generate a [-1,1] uniform distribution from a Gaussian distribution using GAN with DRAGAN regularization. However, it does not work. What we observed is that D(x) converges to a function with a hump and therefore all the generated samples are concentrated on a small region, instead of uniform distribution. We adopt a sample code from github. The generator has 2 layers and the discriminator has 1 layer. The lambda is 10.\n\nThe reason is that the regularization term pushes the function to have some slope at the data support, which results in the hump shape. Therefore, the generated samples are mostly concentrated in the region with large D(x).\n\nWe see that some regularizations make sense mathematically:\n- The gradient norm penalty makes sense for WGAN, because the authors in the paper show that \"The optimal critic has unit gradient norm almost everywhere under Pr and Pg\". \n- The application of DRAGAN regularization to WGAN also makes sense because it shows in the paper that there is minor difference between the unit norm argument and the actual application of WGAN-GP, therefore it only applies the gradient penalty in the neighborhood of the data samples. The same holds for the paper of \"On the regularization of Wasserstein GANs\". \n- The regularization term in the paper of \"Stabilizing Training of Generative Adversarial Networks through Regularization\" makes sense because by Taylor expansion, the noise perturbation at the input approximately adds a regularization term at the objective function. And the training with noise is justified theoretically in the paper of \"Towards principled methods for training generative adversarial networks\". \n\nHowever, the application of DRAGAN regularization to the original GAN needs justification. For the original GAN, the optimal D(x) is 1/2 on the data support and hence its gradient is zero. The DRAGAN regularization, however, pushes the gradient norm to 1, which makes the training converge to a wrong value. If we know the regularization is fundamentally and mathematically wrong, why do we investigate its performance?\n\n" ]
[ -1, -1, 8, -1, -1, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1iaR_vSf", "iclr_2018_ByQpn1ZA-", "iclr_2018_ByQpn1ZA-", "H1oisTzMf", "SkiI_Bixz", "iclr_2018_ByQpn1ZA-", "iclr_2018_ByQpn1ZA-", "SkqpgbLgM", "Sks895Fxz", "SkiI_Bixz", "iclr_2018_ByQpn1ZA-", "BkUrF7rJG", "ryDZ5ySJz", "H1N7QxeJM", "HksiG0DA-", "iclr_2018_ByQpn1ZA-" ]
iclr_2018_S1uxsye0Z
Adaptive Dropout with Rademacher Complexity Regularization
We propose a novel framework to adaptively adjust the dropout rates for the deep neural network based on a Rademacher complexity bound. The state-of-the-art deep learning algorithms impose dropout strategy to prevent feature co-adaptation. However, choosing the dropout rates remains an art of heuristics or relies on empirical grid-search over some hyperparameter space. In this work, we show the network Rademacher complexity is bounded by a function related to the dropout rate vectors and the weight coefficient matrices. Subsequently, we impose this bound as a regularizer and provide a theoretical justified way to trade-off between model complexity and representation power. Therefore, the dropout rates and the empirical loss are unified into the same objective function, which is then optimized using the block coordinate descent algorithm. We discover that the adaptively adjusted dropout rates converge to some interesting distributions that reveal meaningful patterns.Experiments on the task of image and document classification also show our method achieves better performance compared to the state-of the-art dropout algorithms.
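This abstract builds on the empirical Rademacher complexity, which one of the reviews below asks to have defined explicitly. The standard definition (not specific to this paper) for a function class F evaluated on a sample S = {x_1, ..., x_n} is:

```latex
\hat{\mathfrak{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\Big[\sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)\Big],
\qquad \sigma_1,\dots,\sigma_n \ \text{i.i.d. uniform on } \{-1,+1\}.
```

Uniform generalization bounds typically read "expected loss <= empirical loss + 2 * R_hat_S(F) + O(sqrt(log(1/delta)/n))", which is why a trainable upper bound on R_hat_S(F) can double as a regularizer, as the abstract proposes.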
accepted-poster-papers
The reviewers agreed that the work addresses an important problem. There was disagreement as to the correctness of the arguments in the paper: one of these reviewers was eventually convinced. The other pointed out another two issues in their final post, but it seems that 1. the first is easily adopted and does not affect the correctness of the experiments and 2. the second was fixed in the second revision. Ideally these would be rechecked by the third reviewer, but ultimately the correctness of the work is the authors' responsibility. Some related work (by McAllister) was pointed out late in the process. I encourage the authors to take this related work seriously in any revisions. It deserves more than two sentences.
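The related work referred to here is a PAC-Bayesian analysis of dropout; for context, a generic form of the PAC-Bayes bound such analyses build on is sketched below (the exact logarithmic term and constants differ across versions of the theorem):

```latex
% With probability at least 1-\delta over an i.i.d. sample of size n, simultaneously
% for all "posteriors" Q over hypotheses (e.g., the dropout distribution over
% sub-networks), where P is a prior fixed before seeing the data:
\mathbb{E}_{h \sim Q}[L(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{L}(h)]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}
```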
val
[ "Bk15lpF1G", "HyUUnfKHz", "By4ny9FHG", "r1cQ9zYSf", "rkZCWLOHf", "HJkIwe_Sz", "BkYWorDrf", "rJlVd7PrM", "rJ3xKc8BM", "r1wH68UHz", "Syx39Bsgf", "BksvXe8rG", "BJefd2BBf", "HJHht9rHM", "rJYg3KLEf", "HkOUXRFlz", "S1CZ6BnQM", "ry1sRp-ff", "SJkzxRWzz", "S18-Tp-zz" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper studies the adjustment of dropout rates which is a useful tool to prevent the overfitting of deep neural networks. The authors derive a generalization error bound in terms of dropout rates. Based on this, the authors propose a regularization framework to adaptively select dropout rates. Experimental results are also given to verify the theory.\n\nMajor comments:\n(1) The Empirical Rademacher complexity is not defined. For completeness, it would be better to define it at least in the appendix.\n(2) I can not follow the inequality (5). Especially, according to the main text, f^L is a vector-valued function . Therefore, it is not clear to me the meaning of \\sum\\sigma_if^L(x_i,w) in (5).\n(3) I can also not see clearly the third equality in (9). Note that f^l is a vector-valued function. It is not clear to me how it is related to a summation over j there.\n(4) There is a linear dependency on the number of classes in Theorem 3.1. Is it possible to further improve this dependency?\n\nMinor comments:\n(1) Section 4: 1e-3,1e-4,1e-5 is not consistent with 1e^{-3}, 1e^{-4},1e^{-5}\n(2) Abstract: there should be a space before \"Experiments\".\n(3) It would be better to give more details (e.g., page, section) in citing a book in the proof of Theorem 3.1\n\nSummary:\nThe mathematical analysis in the present version is not rigorous. The authors should improve the mathematical analysis.\n\n----------------------------\nAfter Rebuttal:\nThank you for revising the paper. I think there are still some possible problems. \nLet us consider eq (12) in the appendix on the contraction property of Rademacher complexity (RC).\n(1) Since you consider a variant of RC with absolute value inside the supermum, to my best knowledge, the contraction property (12) should involve an additional factor of 2, see, e.g., Theorem 12 of \"Rademacher and Gaussian Complexities: Risk Bounds and Structural Results\" by Bartlett and Mendelson. Since you need to apply this contraction property L times, there should be a factor of 2^L in the error bound. This make the bound not appealing for neural networks with a moderate L.\n(2) Second, the function g involves an expectation w.r.t. r before the activation function. I am not sure whether this existence of expectation w.r.t. r would make the contraction property applicable in this case.", "We added David McAllister’s paper in our latest revision. \n\n1. The PAC-Bayes theorem holds for all “priors”. The Rademacher bound holds for all \"hypothesis\" in the class. We do not assume there is any probability measure over the hypothesis class. But we agree adding priors often gives us convenience in the proof.\n\n2. Thanks for the great suggestion. Extension of David’s bound to different dropout rates is definitely worth trying.\n\n############################################################################\n\nThe issue pointed out by reviewer 3 is valid. Though we fixed the issue, it leads to a slight change of the bound when q > 1. As a result, some of the experiments with the setting q > 1 need to be modified and rerun. \n\nAt this point we are considering withdraw and resubmit. \n\nThanks very much for your time and effort. \n\n############################################################################\nWe updated the experiments and now they are consistent with the updated theorem. Thanks for your patience and understanding. \n", "Thanks for the revision addressing my concerns on the deduction. 
Also, feel regretful that the change of arguments leads to a different bound.", "You are right. After the update, the upper bound blows a little bit from \\|\\theta\\|_q to \\|\\theta\\|_1^{1/q}. The claim of the theorem now only holds when q = 1. When q>1, we need to adjust the bound. We updated our draft to make sure the theory part is sound.\n\nSince we also have some experiments on q = 2 and q=\\infty, this suggests part of our experiments needs to be modified and rerun. At this point we are considering withdraw and resubmit later.", "Thank you for considering the comments in the revision. Here are some further comments:\n\nin eq (12), the last two equations are the same and one can be removed.\nin eq (14), it seems that \\|\\theta\\|_q should be \\|\\theta\\|_1^{1/q}? Please check the deductions:\nE\\|r\\|_q = E [ [\\sum_i|r_i|^q]^{1/q} ] = E [ [\\sum_i r_i ]^{1/q} ] \\leq [\\sum_i E r_i]^{1/q} = \\|\\theta\\|_1^{1/q}", "Thanks very much for pointing out the issues. We are sorry in the last rebuttal we missed your point. Both issues you suggested are valid.\n\nFor your first concern:\nYes there is an absolute value operator inside, so this requires a factor of 2. Suppose we have L layers, there should be a factor of 2^L as you correctly pointed out. \nFortunately, in our empirical evaluation L is fixed, so 2^L is a constant. In this way it won’t affect the experimental observation.\n\nFor your second concern:\nYou are right, the original procedure has issues due to the existence of E inside \\sup. We fixed the issue in our latest revision. The final claim in the theorem stays the same (except there is an extra 2^L term as you suggested earlier). Please check our updated proof.\n\nWe sincerely thank reviewer 3 for the great contribution and efforts. \n", "Re: 1. The PAC-Bayes theorems hold for all \"priors\". Each \"prior\" gives you a different bound. So it is not right to say that David \"assumes\" a prior. There is no assumption. In the application of PAC-Bayes bounds, priors are often chosen for convenience to yield tractable KL divergence terms. This is the case in David's bounds.\n\nRe: 2. I suspect the extension of David's bound to different dropout rates is straightforward.", "Thanks for the response. In eq (12), the left-hand side is E \\sup |E \\phi(f)|, while the right-hand side is E \\sup |f|.\nHere E denotes the expectation. Firstly, there is an absolute value operator here which requires a factor of 2 in the application of contraction property. Secondly, an expectation is inside the absolute value operator, so, as far as i can see, the standard contraction property can not be applied in this way . ", "(it looks like my previous reply did not appear, so trying again)\nThanks for correcting my mistakes on both the L1 norm and the data dependence. It seems plausible in that adding Rad is justifiable. I think 6.3 is important in the chain of reasoning. Right now, it is suggestive but does not sufficiently justify the approach. It would be valuable to verify that Rad give a reasonable dependence both in terms of n and d as a regularizer. Secondly, it still suggests that bounds of Rad seems to be common regularizers, rather than that Rad itself is good.\n\nHowever, given that it is intuitive to optimize a generalization bound, and my previous misunderstanding, I changed my rating to weak accept.", "Thanks so much for pointing out this fantastic work. Much appreciated. We apologize that we were not aware of the work by David McAllister before. 
We believe the work is definitely related to ours. We will add a reference in our next revision.\n\nThe work by David provides a bound on the expected loss from a nice but different point of view. Some differences from our work:\n1. The PAC-Bayesian bound assumes a distribution over the hypothesis class. The Rademacher bound we proved does not make that assumption. \n2. The bound David proved assumes one universal dropout rate so \\alpha in their paper is a scalar. To tune the retain rate for each individual neuron, the retain probabilities we used in our bound are vectors. That is, we assume different neurons may have different dropout rates.\n", "==Main comments\n\nThe authors connect dropout parameters to a bound of the Rademacher complexity (Rad) of the network. While it is great to see deep learning techniques inspired by learning theory, I think the paper makes too many leaps and the Rad story is ultimately unconvincing. Perhaps it is better to start with the resulting regularizer, and the interesting direct optimization of dropout parameters. In its current form, the following leaps problematic and were not addressed in the paper:\n\n1) Why is is adding Rad as a regularizer reasonable? Rad is usually hard to compute, and most useful for bounding the generalization error. It would be interesting if it also turns out to be a good regularizer, but the authors do not say why nor cite anything. Like the VC dimension, Rad itself depends on the model class, and cannot be directly optimized. Even if you can somehow optimize over the model class, these quantities give very loose bounds, and do not equal to generalization error. For example, I feel even just adding the actual generalization error bound is more natural. Would it make sense to just add Rad to the objective in this way for a linear model?\n\n2) Why is it reasonable to go from a regularizer based on RC to a loose bound of Rad? The actual resulting regularizer turns out to be a weight penalty but this seems to be a rather loose bound that might not have too much to do with Rad anymore. There should be some analysis on how loose this bound is, and if this looseness matter at all. \n\nThe empirical results themselves seem reasonable, but the results are not actually better than simpler methods in the corresponding tasks, the interpretation is less confident. Afterall, it seems that the proposed method had several parameters that were turned, where the analogous parameters are not present in the competing methods. And the per unit dropout rates are themselves additional parameters, but are they actually good use of parameters?\n\n==Minor comments\n\nThe optimization is perhaps also not quite right, since this requires taking the gradient of the dropout parameter in the original objective. While the authors point out that one can use the mean, but that is more problematic for the gradient than for normal forward predictions. The gradient used for regular learning is not based on the mean prediction, but rather the samples.\n\ntiny columns surrounding figures are ugly and hard to read\n", "Are the authors aware of the work by David McAllister using PAC-Bayes bounds to analyze dropout? Last I saw, it was not mentioned in the paper. IT seems like important related work. Could the authors, very quickly (!), comment as to the relationship and explain what, if any, changes they would make to address this gap in related work?", "Thanks very much for the comments. 
We would like to clarify some misunderstandings:\n\nQ: for L1, Rad is the infinity norm, which is not the one you wanted.\nA: See section 6.3: for L1, the Rad bound contains the 1-norm (B_1). The L_infty norm is on the samples, just like the max norm shown in our bound. Note that here B_1, rather than L_infty, is the bound on the model parameters.\n\nQ: Rad only depends on the hypothesis class and not on how much data you have and properties of the data\nA: This is wrong. Not only does the Rademacher complexity depend on the hypothesis class, but it also depends on the sample distribution, because it takes the expectation of the empirical Rademacher complexity over all samples of size n.\nOn the other hand, note that what we proved is an upper bound on the EMPIRICAL Rademacher complexity rather than the Rademacher complexity itself. That is why it has the dependency on the sample size. \nBy measure concentration, the EMPIRICAL Rademacher complexity is used to bound the Rademacher complexity.\n\nQ: the hypothesis is now that bounds on Rad make good regularizers, instead of Rademacher itself making a good regularizer.\nA: As stated in section 6.3, the regularizers used in ridge regression as well as Lasso can be interpreted as terms related to the upper bound of the empirical Rademacher complexity. Rademacher itself may make a good regularizer, but it is simply too complex to optimize and evaluate. That’s why we used the upper bound instead. \nSimilar ways of approximation have also been used in the numerical optimization community, where if the objective is too hard to optimize, one may choose to optimize its convex envelope instead. \n", "Not motivating why adding Rademacher itself is reasonable is, in my opinion, the biggest weakness in the paper. \nWhile section 6.3 is a good step (though I think it belongs in the main paper given the story), it still seems inadequate. For one, the hypothesis is now that bounds on Rad make good regularizers, instead of Rademacher itself making a good regularizer. Secondly, for L1, Rad is the infinity norm, which is not the one you wanted.\nLastly, Rad only depends on the hypothesis class and not on how much data you have and properties of the data, which are clearly important for picking regularizers in practice (or their strength through cross validation), which suggests it might not be justifiable.\n\nI still think the paper is borderline and a weak reject unless this issue can be addressed convincingly.", "Thanks very much for your careful examination. We do appreciate it.\n(1) If you look at the final Rademacher complexity bound we are proving, it has no absolute value inside the supremum. The contraction lemma is applied to the Rademacher complexity without the absolute value. That is why equation (7) comes after the contraction. We understand this is confusing. We will make it clear in the next version.\n(2) As you mentioned, if we take the expectation with respect to r, then f^L is not a function of r any more. Actually, in our definition, the final prediction function f^L is a deterministic function (since we take the expectation w.r.t. r).", "An important contribution. The paper is well written. Some questions that need to be better answered are listed here.\n1. The theorem is difficult to decipher. Some remarks need to be included explaining the terms on the right and what they mean with respect to learnability or complexity. \n2. How does the regularization term in eq (2) relate to the existing (currently used) norm based regularizers in deep network learning? 
It may be straightforward, but some small simulations/plots explaining this are important. \n3. Apart from the accuracy results, the change in computational time for working with eq (2), rather than using existing state-of-the-art deep network optimization, needs to be reported. How does this change vary with respect to dataset and network size (beyond the description of scaled regularization in section 4)?\n4. Confidence intervals need to be computed for the retain-rates (reported as a function of epoch). This is critical both to evaluate the stability of regularizers as well as whether the bound from the theorem is strong. \n5. Did the evaluations show some patterns on the retain rates across different layers? It seems from Figures 3 and 4 that retain rates in lower layers are closer to 1 and they decrease to 0.5 as depth increases. Is this a general pattern? \n6. It has been long known that dropout relates to non-negative weighted averaging of partially learned neural networks and that a dropout rate of 0.5 provides the best dynamics. The evaluations say that clearly 0.5 for all units/layers is not correct. What does this mean in terms of network architecture? Is it that some layers are easy to average (nothing is learned there, so dropped networks have small variance), while some other layers are sensitive? \n7. What are some simple guidelines for choosing the values of p and q? Again it appears p=q=2 is the best, but confidence intervals are needed here to say anything substantial. ", "Update list:\n\n1. added a subsection 6.5 to the appendix to empirically demonstrate the relations between the stochastic objective and the deterministic approximation. (minor comments from Reviewer #2)\n2. added a subsection 6.6 to the appendix to empirically show the stability of the dropout rate convergence. (suggestion 4 from Reviewer #1)\n3. added one paragraph describing the terms used in our bound following Theorem 3.1, as suggested by Reviewer #1 (Q1). \n4. added two cases when our bounds are tight. (the last paragraph in section 3.1) This responds to the second concern raised by Reviewer #2.\n5. added the definition of the empirical Rademacher complexity to section 3.1 as suggested by Reviewer #3.\n6. added a paragraph about the different notations used for vectors and scalars in subsection 6.1. (second paragraph of the proof) This responds to questions 2 and 3 raised by Reviewer #3.\n7. fixed all the typos pointed out by Reviewer #3. (comments 1 and 2 from Reviewer #3)\n8. added references to pages and chapters suggested by Reviewer #3.\n9. fixed the tiny column issues mentioned by Reviewer #2 \n10. for the first concern raised by Reviewer #2, we suggest reading section 6.3 in our appendix.\n\n", "Thanks very much for your encouraging comments and helpful suggestions. \n\n1. The upper bound suggests that layers affect the complexity in a multiplicative way. An extreme case, as we described in the last paragraph of section 3.2, is that if the dropout retain rates for one layer are all zeros, then the empirical Rademacher complexity for the whole network is zero, since the network is making random guesses for predictions. In this case the bound is tight. We will put more descriptions about the terms in our bound.\n\n2. This is an interesting suggestion. Norm-based regularizers currently used are imposed on the weights of each layer without considering the retain rates, and the regularization is done on each layer independently. We suggest organizing them in a systematic way. \n\n3. 
In terms of running time, the proposed framework takes one additional backpropagation compared to its standard deep network counterpart. In practice, we find the running time per epoch after introducing the regularizer is approximately 1.6 to 1.9 times that of the current standard deep network. \n\n4: Thanks for the great suggestions. There are some potential issues with drawing the confidence intervals for the retain rate of a particular neuron. For example, permuting the neurons does not change the network structure but it may lead to some identifiability issues. Instead to demo the stability of the algorithm we may add a plot showing the histograms of the theta with different initializations. \n\n5. This is an excellent question! In fact, we are conducting additional evaluations to verify this pattern. We had some preliminary empirical observations that, as the layer goes higher, fewer neurons get high retain rates . This is somewhat consistent with the fact that people tend to set the number of neurons smaller for higher layers. We still need more experiments to tell if this is a general pattern.\n\n6. This is another great question. It is also related to an on-going follow-up work we are currently investigating as stated in the conclusion and future work section of our paper. If we use the setting of p=\\infty and q=1, the L1 norm regularizer may produce sparse retain rates. Subsequently, we could prune the corresponding neuron. Therefore we could use the algorithm as a way to determine the number of neurons used in hidden layers, i.e., we can use the regularizer to tune the network architecture. Similarly, if we use p=1 and q=\\infty, then we can expect sparse coefficients on W due to the property of the L1 norm, in this way the regularizer can also be used to prune the internal neural connections. \n\n7. Currently we do not have any theory for choosing p and q. As we stated above, one way is to choose p and q based on the sparsity desire. If we would like to impose sparsity on the number of neurons to fire,we may set q=1 to promote sparse retain rates. On the other hand, if we would like to impose sparsity on the number of internal connections, i.e., have a sparse coefficient matrix W, we may set p=1 instead.", "Thanks very much for your review and comments. \n\nAbout your major comments \n\n(1):\nThanks for your suggestion, we will include the definition of empirical Rademacher complexity in our revision.\n\n(2) and (3):\nAs we stated in the first paragraph of our proof 6.1, we treat the functions fed into the neurons of the l-th layer as one class of functions. Therefore, f^L(x;W) is a vector as you correctly pointed out, but f^L(x;w) is a scalar. So each dimension of f^L(x;W) is viewed as one instance coming from the same function class f^L(x;w). Similar ways of proof have been adopted in Wan et al. (2013). We are sorry about the confusion. We will add more descriptions about it to make that clear in our revision. \n\n(4)\nIt is a good question. The dependency on the number of classes comes from the contraction lemma. However, what we proved is only a weak bound on the Rademacher complexity. We are still working on further tightening the bound. For now, we are not sure if we can reduce the dependency on the number of classes to sub-linear. We hope this work will also open additional research directions and future extensions to the community. You are always welcome to add to it.\n\nAbout your minor comments:\n\n(1) (2) Thanks for the careful examination. 
We will fix the typos in the next version.\n\n(3) Thanks for the comments. \nContraction lemma (Shalev-Shwartz & Ben-David, 2014) is a variant of the Lemma 26.9 located on page 381, Chapter 26.\nLemma 26.11 in Shalev-Shwartz & Ben-David (2014) is located on page 383, Chapter 26.2. \nWe will add the chapters and pages to the proof. \n", "Thanks very much for your valuable comments and helpful suggestions. \n\nQ: Why adding Rad as a regularizer reasonable? Why is it reasonable to go from a regularizer based on RC to a loose bound of Rad?\nA: These are great questions. We agree we do not have a rigorous way to prove adding an approximate upper bound to the objective can lead to any theoretical guarantee as you correctly pointed out. The theorem of the upper bound in the paper is rigorous but why adding the upper bound to the objective can help is heuristic and empirical.\n\nOn the other hand, adding an approximate term that is related to the upper bound of the Rademacher complexity is not something new. For example, the squared L2 norm regularizer used in the ridge regression, though there are explanations such as Bayesian priors, can be interpreted as a term related to the upper bound of the Rademacher complexity of linear classes . People are already using it. Similarly, the L1 regularizer used in LASSO can also be interpreted as a term related to the Rademacher complexity bound. We have put a section in the Appendix (Section 6.3) to somewhat justify it in a heuristic way. \n\nQ: The actual resulting regularizer turns out to be… rather loose bound…\nA: We agree that the bound proved in the paper could be a bit loose. Still in some extreme cases it is tight. For example, as we indicated in the paragraph before Section 3.3, if the retain rates in one layer are all zeros, the model always makes random guess for prediction. In this case the empirical Rademacher complexity is zero and our bound is tight. In general, even if the bound is loose, it still gives some justification on the norms used in today’s neural network regularizations. Additionally, it leads to a systematic way of weighting the norms as well as the retain rates.\n\nMinor comments:\nQ: While the authors point out that one can use the mean, but that is more problematic for the gradient than for normal forward predictions. After all, the gradient used for regular learning is not based on the mean prediction, but rather the samples.\nA: This is an excellent question. As we stated in Section 3.3, “this is an approximation to the true f^L(x;W, θ)”. Using the mean is purely an approximation used for the sake of optimization efficiency. By design we should use the samples. However empirically we found that optimizing based on the mean (instead of the actual sampling) still leads to a decrease of the objective. We will add additional figures to better illustrate the point in our next revision.\n\nQ: tiny columns surrounding figures are ugly and hard to read\nA: Thanks for the suggestion. We will fix it in our revision.\n\nQ: dropout rate is perhaps more common than retain rate\nA: We use the retain rate instead just to make the upper bound look less messy. " ]
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 7, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 5, -1, -1, -1, -1 ]
[ "iclr_2018_S1uxsye0Z", "BkYWorDrf", "r1cQ9zYSf", "rkZCWLOHf", "HJkIwe_Sz", "rJlVd7PrM", "r1wH68UHz", "rJYg3KLEf", "BJefd2BBf", "BksvXe8rG", "iclr_2018_S1uxsye0Z", "iclr_2018_S1uxsye0Z", "HJHht9rHM", "S18-Tp-zz", "Bk15lpF1G", "iclr_2018_S1uxsye0Z", "iclr_2018_S1uxsye0Z", "HkOUXRFlz", "Bk15lpF1G", "Syx39Bsgf" ]
iclr_2018_BJij4yg0Z
A Bayesian Perspective on Generalization and Stochastic Gradient Descent
We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to \citet{zhang2016understanding}, who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the ``noise scale" g = ϵ(N/B − 1) ≈ ϵN/B, where ϵ is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, B_opt ∝ ϵN. We verify these predictions empirically.
accepted-poster-papers
I'm inclined to recommend accepting this paper, although it is borderline given the strong dissenting opinion. The revisions have addressed many of the concerns about quality, clarity, and significance. The paper gives an end-to-end explanation in Bayesian terms of generalization in neural networks using SGD. However, it is my opinion that Bayesian statistics is not, at present, a theory that can be used to explain why a learning algorithm works. The Bayesian theory is too optimistic: you introduce a prior and model and then trust both implicitly. Relative to any particular prior and model (likelihood), the Bayesian posterior is the optimal summary of the data, but if either part is misspecified, then the Bayesian posterior carries no optimality guarantee. The prior is chosen for convenience here. And the model (a neural network feeding into cross entropy) is clearly misspecified. However, there are ways to sidestep both these issues using a frequentist theory closely related to Bayes, which can explain generalization. Indeed, you cite a recent such paper by Dziugaite and Roy who use PAC-Bayes. However, your citation is disappointingly misleading: a reader would never know that these authors are also responding to Zhang, have already proposed to explain "broad minima" in (PAC-)Bayesian terms, and then even get nonvacuous bounds. (The connection between PAC-Bayes and marginal likelihood is explained by Germain et al. "PAC-Bayesian Theory Meets Bayesian Inference"). Dziugaite et al. don't propose to explain why SGD finds such "good" minima. So I would say, your work provides the missing half of their argument. This work deserves more prominent placement and shouldn't be buried on page 5. Indeed, it should appear in the introduction and a proper description of the relationship should be given.
train
[ "SJAadCKxz", "By74tbqxf", "HkoLoX5lf", "ry5JIu5GG", "ryszMFGWG", "BkhiKJ1zG", "ry8dvMBZf", "SyQvIePbz", "SJC1KfSWf", "HyvtmFzWM", "S1XazYfZG", "SJUvMa-bf", "HJiDf7WZf", "HJBT-QZWf", "r1tVifbbz", "H1YF5RlyM", "HyDLn-pRZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "public", "public", "official_reviewer", "public", "public", "public", "author", "public" ]
[ "The paper takes a recent paper of Zhang et al 2016 as the starting point to investigate the generalization capabilities of models trained by stochastic gradient descent. The main contribution are scaling rules that relate the batch size k used in SGD with the learning rate \\epsilon, most notably \\epsilon/k = const for optimal scaling.\n\nFirst of all, I have to say that the paper is very much focussed on the aforementioned paper, its experiments as well as its (partially speculative) claims. This, in my opinion, is a biased and limited starting point, which ignores much of the literature in learning theory. \n\nChapter 2 provides a sort of a mini-tutorial to (Bayesian) model selection based on standard Bayes factors. I find this of limited usefulness. First of all, I find the execution poor in the details: \n(i) Why is \\omega limited to a scalar? Nothing major really depends on that. Later the presentation switches to a more general case. \n(ii) What is a one-hot label? \"One-hot\" is the encoding of a categorical label. \n(iii) In which way is a Gaussian prior uncorrelated, if there is just a scalar random variable? \n(iv) How can one maximize a probability density function? \n(v) Why is an incorrect \"pseudo\"-set notation used instead of the correct vectorial one? \n(vi) \"Exponentially large\", \"reasonably prior\" model etc. is very vague terminology\n(vii) No real credit is given for the Laplace approximation presented up to Eq. 10. For instance, why not refer to the seminal paper by Kass & Raferty? Why spend so much time on a step-by-step derivation anyway, as this is all \"classic\" and has been carried out many times before (in a cleaner write-up)? \n(viii) \"P denotes the number of model parameters\" (I guess it should be a small p? hard to decipher)\n(ix) Usually, one should think of the Laplace approximation and the resulting Bayes factors more in terms of a \"volume\" of parameters close to the MAP estimate, which is what the matrix determinant expresses, more than any specific direction of \"curvature\". \n\nChapter 3 constructs a simple example with synthetic data to demonstrate the effect of Bayes factors. I feel the discussion to be too much obsessed by the claims made in Zhang et al 2016 and in no way suprising. In fact, the \"toy\" example is so much of a \"toy\" that I am not sure what to make of it. Statistics has for decades successfully used criteria for model selection, so what is this example supposed to proof (to whom?).\n\nChapter 4 takes the work of Mandt et al as a starting point to understand how SGD with constant step size effectively can be thought of as gradient descent with noise, the amplitude of which is controlled by the step size and the mini-batch size. Here, the main goal is to use evidence-based arguments to distinguish good from poor local minima. There is some experimental evidence presented on how to resolve the tradeoff between too much noise (underfitting) and too little (overfitting).\n\nChapter 5 takes a stochastic differential equation as a starting point. I see several issues:\n(i) It seems that you are not doing much with a SDE, as you diredctly jump to the discretized version (and ignore discussions of it's discretization). So maybe one should not feature the term SDE so prominently.\n(ii) While it is commonly done, it would be nice to get some insights on why a Gaussian approx. is a good assumption. Maybe you can verify this experimentally (as much of the paper consists of experimental findings)\n(iii) Eq. 13. 
Maybe you want this form to indicate a direction you want to move towards, but I find adding and subtracting the gradient in itself not a very interesting manner of illustration.\n(iv) I am not sure in which way g is \"measured\", but I guess you are determining it by comparing coefficients. \n(v) I am confused by the B_opt \\propto \\eps statement. It seems you are scaling the mini-batch gradient to be in expectation equal to the full gradient (not normalized by N), e.g. it scales ~N. Now, if we think of a mini-batch as being a batched version of single pattern updates, then clearly the effective step length should scale with the batch size, which - because of the batch size normalization with N/B - means \\epsilon needs to scale with B. Maybe there is something deeper going on here, but it is not obvious to me.\n(vi) The argument why B ~ N is not clear to me. Is there one, or are you just making a conjecture?\n\nBottom line: The paper may contribute to the current discussion of the Zhang et al 2016 paper, but I feel it does not make a significant contribution to the state of knowledge in machine learning. On top of that, I feel the execution of the paper leaves much to be desired. \n", "Summary:\nThis paper presents a very interesting perspective on why deep neural networks may generalize well, in spite of their high capacity (Zhang et al, 2017). It does so from the perspective of \"Bayesian model comparison\", where two models are compared based on their \"marginal likelihood\" (aka, their \"evidence\" --- the expected probability of the training data under the model, when parameters are drawn from the prior). It first shows that a simple weakly regularized (linear) logistic regression model over 200 dimensional data can perfectly memorize a random training set with 200 points, while also generalizing well when the class labels are not random (eg, when a simple linear model explains the class labels); this provides a much simpler example of a model generalizing well in spite of high capacity, relative to the experiments presented by Zhang et al (2017). It shows that in this very simple setting, the \"evidence\" of a model correlates well with the test accuracy, and thus could explain this phenomenon (evidence is low for the model trained on random data, but high for the model trained on real data).\n\nThe paper goes on to show that if the evidence is approximated using a second order Taylor expansion of the cost function around a minimum $w_0$, then the evidence is controlled by the cost at the minimum, and by the logarithm of the ratio of the curvature at the minimum compared to the regularization constant (eg, standard deviation of the Gaussian prior). Thus, Bayesian evidence prefers minima that are both deep and broad. This provides a way of comparing models in a way which is independent of the model parametrization (unfortunately, however, computing the evidence is intractable for large networks). The paper then discusses how SGD can be seen as an algorithmic way of finding minima with large \"evidence\" --- the \"noise\" in the gradient estimation helps the model avoid \"sharp\" minima, while the gradient helps the model find \"deep\" minima. The paper shows that SGD can be understood using stochastic differential equations, where the noise scale is approximately aN/((1-m)B) (a = learning rate, N = size of training set, B = batch size, m = momentum). 
It argues that because there should be an optimal noise scale (which maximizes test performance), the batch size should be taken proportional to the learning rate, as well as the training set size, and proportional to 1/(1-m). These scaling rules are confirmed experimentally (DNN trained on MNIST). Thus, this Bayesian perspective can also help explain the observation that models trained with smaller batch sizes (noisier gradient estimates) often generalize better than those with larger batch sizes (Keskar et al, 2016). These scaling rules provide guidance on how to increase the batch size, which is desirable for increasing the parallelism of SGD training.\n\nReview:\nQuality: The quality of the work is high. Experiments and analysis are both presented clearly.\n\nClarity: The paper is relatively clear, though some of the connections between the different parts of the paper felt unclear to me:\n1) It would be nice if the paper were to explain, from a theoretical perspective, why large evidence should correspond to better generalization, or provide an overview of the work which has shown this (eg, Rissanen, 1983).\n2) Could margin-based generalization bounds explain the superior generalization performance of the linear model trained on random vs. non-random data? It seems to me that the model trained on meaningful data should have a larger margin.\n3) The connection between the work on Bayesian evidence, and the work on SGD, felt very informal. The link seems to be purely intuitive (SGD should converge to minima with high evidence, because its updates are noisy). Can this be formalized? There is a footnote on page 7 regarding Bayesian posterior sampling -- I think this should be brought into the body of the paper, and explained in more detail.\n4) The paper does not give any background on stochastic differential equations, and why there should be an optimal noise scale 'g', which remains constant during the stochastic process, for converging to a minimum with high evidence. Are there any theoretical results which can be leveraged from the stochastic processes literature? For example, are there results which prove anything regarding the convergence of a stochastic process under different amounts of noise?\n5) It was unclear to me why momentum was used in the MNIST experiments. This seems to complicate the experimental setting. Does the generalization gap not appear when no momentum is used? Also, why is the same learning rate used for both small and large batch training for Figures 3 and 4? If the learning rate were optimized together with batch size (eg, keeping aN/B constant), would the generalization gap still appear? Figure 5a seems to suggest that it would not appear (peaks appear to all have the same test accuracy).\n6) It was unclear to me whether the analysis of SGD as a stochastic differential equation with noise scale aN/((1-m)B) was a contribution of this paper. It would be good if it were made clearer which parts of the mathematical analysis in sections 2 and 5 are original.\n7) Some small feedback: The notation $< x_i > = 0$ and $< x_i^2 > = 1$ is not explained. Is each feature being normalized to be zero mean, unit variance, or is each training example being normalized?\n\nOriginality: The work seems to be a relatively original combination of ideas from Bayesian evidence and deep neural network research. 
However, I am not familiar enough with the literature on Bayesian evidence, or the literature on sharp/broad minima, and their generalization properties, to be able to confidently say how original this work is.\n\nSignificance: I believe that this work is quite significant in two different ways:\n1) \"Bayesian evidence\" provides a nice way of understanding why neural nets might generalize well, which could lead to further theoretical contributions.\n2) The scaling rules described in section 5 could help practitioners use much larger batch sizes during training, by simultaneously increasing the learning rate, the training set size, and/or the momentum parameter. This could help parallelize neural network training considerably.\n\nSome things which could limit the significance of the work:\n1) The paper does not provide a way of measuring the (approximate) evidence of a model. It simply says it is prohibitively expensive to compute for large models. Can the \"Gaussian approximation\" to the evidence (equation 10) be approximated efficiently for large neural networks?\n2) The paper does not prove that SGD converges to models of high evidence, or formally relate the noise scale 'g' to the quality of the converged model, or relate the evidence of the model to its generalization performance.\n\nOverall, I feel the strengths of the paper outweigh its weaknesses. I think that the paper would be made stronger and clearer if the questions I raised above are addressed prior to publication.", "This paper builds on Zhang et al. (2016) (Understanding deep learning requires rethinking generalization). Firstly, it shows experimentally that the same effects appear even for simple models such as linear regression. It also shows that the phenomenon that sharp minima lead to worse results can be explained by Bayesian evidence. Secondly, it views SGD with different settings as introducing different levels of noise that favor different minima. With both theoretical and experimental analysis, it suggests the optimal batch size given the learning rate and training data size. The paper is well written and provides excellent insights. \n\nPros:\n1. Very well written paper with good theoretical and experimental analysis.\n2. It provides useful insights into model behaviors which are attractive to a large group of people in the community. \n3. The result on the optimal batch size setting is useful to a wide range of learning methods.\n\nCons and mainly questions:\n1. Missing related work. \nOne important contribution of the paper is about optimal batch sizes, but related work in this direction is not discussed. There are many related works concerning adaptive batch sizes, such as [1] (a summary in section 3.2 of [2]). \n\n2. It would be great if the authors could provide some discussion with respect to the analysis of the information bottleneck [3], which also discusses the generalization ability of the model. \n\n3. The result on the optimal mini-batch size depends on the training data size. How about real online learning with streaming data, where the total number of data points is unknown?\n\n4. The results are reported mostly with respect to training iterations, not CPU time, such as in figure 3. It would be fair/interesting to see the results for CPU time, where small batches may be favored more. \n\n\n[1] Balles, Lukas, Javier Romero, and Philipp Hennig. \"Coupling Adaptive Batch Sizes with Learning Rates.\" arXiv preprint arXiv:1612.05086 (2016).\n[2] Zhang, Cheng, Judith Butepage, Hedvig Kjellstrom, and Stephan Mandt. 
\"Advances in Variational Inference.\" arXiv preprint arXiv:1711.05597 (2017).\n[3] Tishby, Naftali, and Noga Zaslavsky. \"Deep learning and the information bottleneck principle.\" In Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5. IEEE, 2015.\n\n—————-\nUpdate: I lowered my rating considering other ppl s review and comments. ", "We have uploaded an updated version of the manuscript, which we believe significantly strengthens the paper. As well as fixing some minor issues raised by the reviewers, the main changes are:\n\n1) To respond to the comments of reviewer 3, we have replaced the synthetic data experiments in section 3 with real data experiments on MNIST. The conclusions of the section are unchanged; we observe the exact same phenomenon observed by Zhang et al. in a linear model, and show that this phenomenon can be understood via the Bayesian evidence. We also present a brief discussion of the training set margin when trained on random/informative labels, as requested by reviewer 1.\n\n2) We have introduced a new section to the appendix discussing Bayesian posterior sampling, which provides a simple case where noise provably drives the parameters towards broad minima whose evidence is large. We have edited section 4 to emphasize the connection between these results and SGD more clearly. We also added another section to the appendix briefly discussing the Gaussian approximation to minibatch gradient noise.\n\n3) We edited the introduction to highlight our contributions more clearly. We have also highlighted that the claims of Zhang et al. regarding learning theory are disputed. These claims are not important to our work, which understands the memorization phenomenon Zhang et al. presented and resolves the debate surrounding sharp minima and model parameterization.\n\n4) We have included a brief discussion of the stochastic differential equation discretization error in section 5. We also emphasize the potential practical applications of the training set size scaling rule. We clarify the novelty of our treatment compared to previous works.\n\n5) We have added additional citations to the text. However we found that many of the papers posted anonymously following reviewer 2’s highly positive initial score (9) were not relevant to our work, as they studied specific bayesian posterior sampling methods, whereas our work provides bayesian insights on SGD itself. We discuss this further in the comments below.\n\nWe would also like to comment that Reviewer 3’s primary criticism of our work is “the paper is very much focussed” on Zhang et al.’s rethinking generalization paper. This is simply not the case; our work makes a number of novel contributions. Indeed the bulk of the text is devoted to our discussion of SGD, the optimal batch size and the scaling rules. Many papers have responded to Zhang et al.’s findings, and reviewers 1 and 2 both felt we made an important contribution to this debate. 
", "This comment has been redacted for violating the ICLR 2018 anonymity policy.", "We thank the referee for their positive assessment of our work.\nRegarding the originality of our work, we believe our paper makes three main contributions:\n\n1) We show that well-established Bayesian principles can resolve a number of active debates in the deep learning community (generalization/sharp minima/reparameterization/SGD batch size).\n2) We derive a novel closed form expression for the “noise scale” of the SGD, which holds even when the covariance matrix between the gradients of different parameters is non-stationary. We exploit this expression to predict three scaling rules between the batch size, learning rate, training set size and momentum coefficient.\n3) We verified these scaling rules empirically, and we believe they will prove extremely useful to ML practitioners. We note that while the B ~ a/(1-m) scaling rules enable us to achieve large batch training without hyper-parameter tuning, the B ~ N rule is equally valuable; since it enables us to retrain production models on new training data without retuning the batch size.\n\nWith current tools, the Gaussian approximation to the evidence cannot be estimated efficiently in deep networks. However this is not in fact a major limitation, since in practical scenarios we will always rely on test sets. What is important is to build an intuition for the factors which control generalization, and to use this intuition to improve the accuracy of our models. This is why we presented generalization and the SGD in a single paper; the intuition we gained from the approximate Bayesian evidence resolves the sharp minima debate, and the trade off between the depth and breadth of minima in the evidence explains how we should tune SGD hyper-parameters to maximize the test set accuracy.\n\nUnfortunately, one cannot derive a stationary distribution for the SGD in the infinite time limit unless one imposes the unrealistic assumption that the gradient covariances are stationary. As a result, it is very challenging to formally prove that SGD converges to minima with large evidence. However our argument runs as follows: if one were to assume the covariances were stationary, one could derive a stationary distribution for the SGD in the infinite limit [1] and one could formally prove that SGD converges to models of large evidence at an optimal noise scale. While we cannot formally prove this for the general case, we expect noise to have similar effects, and this matches what we observe empirically. \n\nTo respond to the remaining comments:\n1) We will add appropriate citations/discussion to the text.\n2) We have explored the average margin of models trained on random and informative labels. We find that the margin is ~50% larger when trained on random labels in our experiments. However unlike the evidence, the margin is not strongly correlated with either the test cross-entropy or the test accuracy.\n3/4) See above. More specifically, the SGD would generate Bayesian posterior samples if the covariance matrix were isotropic and stationary. In this case one can prove formally that there is an optimal noise scale which biases SGD towards minima with large evidence. Our observation is that this optimal noise scale persists empirically even though the true covariance matrix is anisotropic and non-stationary. 
We will discuss this connection in more depth when we update the manuscript.\n5) Yes, if we kept aN/B constant then the test set accuracy would be constant and there would not be a generalization gap (until the learning rate a is too large, as seen in figure 5a); this is a key result of the paper and the meaning of the scaling rules. We will edit the text to make this clearer. The optimal batch size only arises when one holds the learning rate constant, equivalently there would be an optimal learning rate at constant batch size. We used SGD with momentum since it is more popular than conventional SGD in practice. The results without momentum are the same (both optimal noise scale and the scaling rules).\n6) The derivation in section 2 was first performed by Mackay in 1992. We include it here because many researchers have not seen it, and because it is central to the remainder of the paper. In addition, we demonstrate the Bayesian evidence penalizes sharp minima but is invariant to model parameterization, resolving the objections of [2]. The derivation in section 5 is original. While our treatment is similar to [1], they assume that the covariance matrix is stationary; we show that this assumption is not necessary, since the covariance matrix cancels out when one equates the SGD to the SDE.\n7) We apologize for this. We normalized the expected length of the training examples, not the features.\n\n[1] Mandt et al., Stochastic Gradient Descent as Approximate Bayesian Inference, ICML 2017\n[2] Dinh et al., Sharp minima can generalize for deep nets, ICML 2017", "We would like to respond to the comments from anonymous commenter(s) which followed your review. We feel that these comments are misleading and misrepresent the contributions of this work; which we believe are significant. \n\nWe have reviewed the previous papers suggested by the commenter(s). Two are already cited in our paper [1,2]. One was released on arXiv after we submitted [3]. We are happy to cite [4] when we update the manuscript, which proposes merging SGD with control theory. The other suggested papers [5-7] are not relevant, since they discuss posterior sampling methods which use stochastic gradients, not the SGD. \n\nWe emphasize that none of these papers predicted or observed an optimal batch size (at constant learning rate). Additionally we are the first to derive the three scaling rules. Our treatment of the SGD is most similar to [2], however their analysis only holds near local minima where the gradient covariance matrix between parameters is stationary. By contrast our derivation applies throughout training for both stationary and non-stationary covariance matrices. This is important since [8] found that the benefits of noise are most pronounced at the start of training. Furthermore, our results have important practical applications:\n\ni) We show that tuning the batch size can lead to surprisingly large improvements in test accuracy.\nii) The scaling rules enable us to achieve large batch training without reducing the test set accuracy and without additional hyper-parameter tuning.\niii) They also predict how the batch size should be changed over time as more training data is collected (the most common reason to retrain a model is that one has collected more training data).\niv) Finally, the scaling rules enable us to compare training runs performed on different hardware with different batch sizes/learning rates/momentum coefficients.\n\nWe have submitted two papers to ICLR this year. 
The second paper is an extension of this work, and we make this clear in the introduction of the second paper. It is up to the reviewers of this second paper to judge if the extension is sufficient to merit acceptance. \n\n[1] M. Welling and Y. W. Teh., Bayesian learning via stochastic gradient langevin dynamics, In ICML 2011.\n[2] Mandt S et al., Stochastic Gradient Descent as Approximate Bayesian Inference, In ICML 2017.\n[3] Pratik Chaudhari and Stefano Soatto, Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks, arXiv preprint\n[4] Qianxiao Li et al., Stochastic modified equations and adaptive stochastic gradient algorithms, ICML 2017 (arxiv 2015)\n[5] C. Chen et al., Bridging the gap between stochastic gradient MCMC and stochastic optimization, In AISTATS 2016.\n[6] T. Chen et al., Stochastic gradient Hamiltonian Monte Carlo, In ICML 2014\n[7] W. Mou et al., Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical Viewpoints, arXiv preprint\n[8] Keskar et al., On large-batch training for deep learning: Generalization gap and sharp minima, ICLR 2017", "We are very grateful for the helpful comments provided, however we feel that the score attached to this review does not recognize the many significant contributions of our work. The bulk of the specific comments raised are relatively minor and easily addressed. We would like to encourage the reviewer to reconsider. \n\nOur work demonstrates that a number of active debates in deep learning can be resolved by well-established Bayesian principles.\n\n1) Zhang et al. showed deep networks which generalize well on real training data can still memorize random labellings of the same inputs. In section 3 we observe exactly the same phenomenon in an over-parameterized linear model, and demonstrate that this phenomenon is easily and quantitatively explained by Bayesian model comparison. Taken together, these results demonstrate that deep learning does not require rethinking generalization. \n\nWe recognize that Zhang et al.'s claims regarding learning theory are disputed and we will make this clear when we update the manuscript. However the paper received the best paper award and inspired many follow-up works. Since the reviewer was unhappy with our synthetic data experiments, we will replace these by a linear model trained to distinguish real '0's and '1's from the MNIST dataset. We have already run these experiments and the results are identical.\n\n2) No previous author has explicitly demonstrated that there is an optimum batch size (at constant learning rate) and this clarifies earlier work by Keskar et al. (ICLR 2017). We argue that these results are easily understood as arising from the competition between the depth and breadth of minima in the Bayesian evidence.\n\n3) It is not the batch size itself which controls this competition; it is the magnitude of random fluctuations in the SGD dynamics. We derive a novel closed form expression for this “noise scale” which holds even when the gradient covariances between parameters are anisotropic and non-stationary. This closed form expression predicts three scaling rules, which we verify empirically. These scaling rules can be used to increase the optimal batch size without reducing the test accuracy; enabling parallelized training with large mini-batches. \n\nIn response to the remaining specific comments: \nSection 2) \nWe state in the opening paragraph that our derivation “closely [follows] the seminal work of David MacKay (1992)”. 
We are happy to cite Kass and Raferty (1995). We know this Laplacian treatment is “classic”. We include it because it is central to all of our observations which follow, and because most researchers in deep learning have never seen it. \n\ni) Mackay first considers a single scalar parameter, we are following his approach. It is easy to replace the 2nd derivative by the Hessian at the end.\nii) The cross entropy measures a distance between label distributions. By one-hot label we mean that the example has a single unique label.\niii) This is a typo and we will remove it.\niv) The minimum of the cost function corresponds to the maximum of the posterior. We will make this clearer.\nv) We are happy to change this as requested.\nvi) We will edit the text to clarify. The evidence ratio will grow exponentially in the number of training examples, while the contribution from the prior is constant.\nvii) We gave credit to Mackay, who we believe was the first to apply Bayesian model comparison to neural networks.\nviii) We are happy to change this to small p.\nix) We interpret the evidence in terms of curvature to show Bayesian principles resolve the current debate in deep learning regarding sharp minima. Crucially, the Bayesian evidence is invariant to model parameterization, resolving the objections of Dinh et al. (ICML 2017). We believe this clarification is important.\n\nSection 5) \ni) The effect of discretization is clear from figure 5a, for which there is no significant discretization error until the learning rate ~ 3 (ie the peak test accuracy is constant as we increase the learning rate). After this point the test accuracy falls rapidly. We will discuss this discretization error more explicitly when we update the manuscript. We used the SDE to derive the three scaling rules.\nii) We will add a discussion of the Gaussian approximation to the appendix. However we note that we already verified all of our key practical predictions empirically.\niii) We feel this makes the following SDE derivation clearer.\niv) In equation 15, we equate the variance of a gradient update, to the variance of the SDE, integrated over the duration of the update. Rearranging, one obtains g = \\eps N / B.\nv) In equation 13, we normalize the gradient update by the number of training examples N. The scaling rules apply to the mean gradient, not the summed gradient.\nvi) Bayesian arguments explicitly support the B ~ N scaling rule. For instance, in Langevin posterior sampling, the magnitude of the noise added to the gradient update is inversely proportional to N. We will make this connection clearer.", "We thank the referee again for their positive assessment of our work. In response to the comments and questions raised in the official review:\n\n1) We did not cite works on adaptive batch sizes, since we considered the simpler case of constant SGD (constant learning rate/batch size). However the reviewer is correct that these works are relevant to the optimal batch size discussion, particularly “Coupling Adaptive Batch Sizes with Learning Rates” which also proposes a linear scaling rule. We apologize for this and will cite these works when we update the manuscript.\n\n2) Yes, we believe that there is an extremely close relationship between Bayesian model comparison and some aspects of the information bottleneck principle. 
For instance, Bayesian model comparison essentially minimizes the information content of the parameters (the “minimum description length” principle), which is conceptually very close to minimizing the mutual information content between the intermediate representations and the inputs. However, before we say more, we would like to study the information bottleneck more closely.\n\n3) This is a very interesting question. In conventional online learning, one only uses each training example once. This is different to the Bayesian perspective, where one should re-use all old samples repeatedly. Consequently, I’m not sure there is a principled answer. However, intuitively it would be sensible to increase the batch size or decay the learning rate proportional to the total number of training examples seen to date, thus ensuring that the noise scale is constant as the training set grows.\n\n4) We intentionally ran most of our comparisons at a constant number of training iterations, to ensure that the total “time” simulated by the underlying SDE of each curve was the same (equivalently ensuring that the expected distance travelled from the initialization point was the same in all cases). We believe that this provides a more meaningful comparison than holding the CPU time constant. However, we are happy to add additional experiments to the appendix.\n\nBest wishes", "To the best of my knowledge, the first to use SDEs to analyze SGD is\n\nQianxiao Li, Cheng Tai, Weinan E. Stochastic modified equations and adaptive stochastic gradient algorithms. ICML 2017 (arXiv preprint: 2015)\n\n\n", "The idea of using SDEs is also well known:\nMandt S, Hoffman M D, Blei D M. Stochastic Gradient Descent as Approximate Bayesian Inference. In ICML 2017.\nPratik Chaudhari, Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv preprint\n", "Yes. I do know Bayesian optimization, and I also pointed out that this paper lacks a discussion of related work in my comment, adding as an example the paper [1] from P. Hennig (who did a lot of work in BO; the paper that I referred to is one exemplar paper that proposes the optimal batch size given the learning rate).\n\nThank you for adding more related work that the authors of the paper should discuss. \n\nI agree with you that these ideas are not completely novel and that the paper lacks a discussion of related work. \n\nHowever, I do also recognize the contribution of the paper. Although they are not the first to propose the analysis of different minima and of the optimal batch size, the setting and analysis differ from existing work, and I think that such ideas should be discussed more in the deep learning community. I thus keep my vote on accepting the paper. ", "They all reveal the idea that Bayesian methods will go to flat minima, which will avoid over-fitting; some of them have even proved generalization bounds in non-asymptotic time!", "C. Chen, D. Carlson, Z. Gan, C. Li, and L. Carin. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In AISTATS, 2016.\n\nT. Chen, E. B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning, pages 1683–1691, 2014.\n\nM. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. 
In ICML, 2011.\n\n\nhttps://arxiv.org/pdf/1707.05947", "What the paper said is common sense in the Bayesian optimization community!\n\nI can find tons of papers revealing the same idea!\n\nTheir strength is only to show that the noise level is associated with the batch size, which is a naive idea I think everyone knows", "Thank you for your interest in our work!\n\nYes, that is exactly why the linear model can memorize random labels of the training data; it is \"over-parameterized\". A typical rule of thumb is that linear models can memorize roughly two labels per parameter. This is also exactly why the deep networks in Zhang et al. can memorize random labels of their training data; they also have more parameters than training examples. In both linear and deep networks, if we made the training set sufficiently large we wouldn't be able to memorize randomly labelled data. I did a quick check and our model can memorize about 300 labels, but it can't memorize 500.\n\nWhat surprised Zhang et al. is that the model did generalize well to the test set when trained on real informative labels, even though they had shown that the model was sufficiently flexible to assign random labels to the same inputs (ie meaningless solutions which do not generalize do exist). Again, we show exactly the same phenomenon here. The purpose of showing these results in a linear model is precisely to make these results intuitive, and to demonstrate that they should not be explained by any property unique to deep learning.\n\nWe show that good minima which generalize well to the test set can be distinguished from bad minima which do not by evaluating the Bayesian evidence (also known as the marginal likelihood). This evidence is a weighted combination of the depth and breadth of a minimum, but it is invariant to model parameterization, which clarifies the recent debate surrounding sharp minima.", "It is stated in the paper that linear models have the same behavior as deep nets in that they generalize well on informative labels but can also memorize random labels of the same inputs. Is this just because in the synthetic experiment the number of training instances is 200, which is strictly less than the number of model parameters, which is 201 = 200 + 1? Since the model is linear, this amounts to solving an underdetermined system of linear equations, which is guaranteed to have a solution with probability 1 (assuming the inputs are randomly sampled, which is the case in the experiment). I am wondering whether the same phenomenon can be observed when the training set size is larger? Say, 500? Thanks!" ]
[ 3, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJij4yg0Z", "iclr_2018_BJij4yg0Z", "iclr_2018_BJij4yg0Z", "iclr_2018_BJij4yg0Z", "SJUvMa-bf", "By74tbqxf", "HkoLoX5lf", "SJAadCKxz", "HkoLoX5lf", "SJUvMa-bf", "SJUvMa-bf", "r1tVifbbz", "HJBT-QZWf", "r1tVifbbz", "HkoLoX5lf", "HyDLn-pRZ", "iclr_2018_BJij4yg0Z" ]
iclr_2018_SyELrEeAb
Implicit Causal Models for Genome-wide Association Studies
Progress in probabilistic generative models has accelerated, developing richer models with neural architectures, implicit densities, and with scalable algorithms for their Bayesian inference. However, there has been limited progress in models that capture causal relationships, for example, how individual genetic factors cause major human diseases. In this work, we focus on two challenges in particular: How do we build richer causal models, which can capture highly nonlinear relationships and interactions between multiple causes? How do we adjust for latent confounders, which are variables influencing both cause and effect and which prevent learning of causal relationships? To address these challenges, we synthesize ideas from causality and modern probabilistic modeling. For the first, we describe implicit causal models, a class of causal models that leverages neural architectures with an implicit density. For the second, we describe an implicit causal model that adjusts for confounders by sharing strength across examples. In experiments, we scale Bayesian inference on up to a billion genetic measurements. We achieve state of the art accuracy for identifying causal factors: we significantly outperform the second best result by an absolute difference of 15-45.3%.
accepted-poster-papers
The reviewers agree that the work is high quality, clear, original, and could be significant. Despite this, the scores are borderline. The reason is due to rough agreement that the empirical evaluations are not quite there yet. In particular, two reviewers agree that, in the synthetic experiments, the method is evaluated on data that is an order of magnitude too easy and quite far from the nature of real data, which has a much lower signal to noise ratio. However, the authors have addressed the majority of the concerns and there is little doubt that the authors are capable of carrying out this new experiment and reporting its results. Even if the results are surprising, they should shed light on what seems to be an interesting new approach.
train
[ "BJaMh7FSM", "SynpHhUBf", "ByKzdqIBM", "HJ80r5Brf", "HkjOg7SBM", "S1EEVuwlG", "SyBCAMcgG", "HJxrNo3xz", "S15hIOT7M", "rJwfLOpQf", "S1488upQG", "HJR3vuamM" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Unfortunately we weren't able to finish the experiments by today, which is the deadline. Regardless if the paper is accepted, we hope to finish these experiments and get them into the paper by camera-ready and/or the next arxiv update. (And thanks again to all the reviewers for the helpful feedback.)\n\nre:rescale. To clarify, we used their precise code for the simulations and GCAT fitting (https://github.com/StoreyLab/gcatest). So we did rescale the variance components.", "Song, Hao and Storey (2015) rescale the variance components (Suppl information section 6 https://media.nature.com/original/nature-assets/ng/journal/v47/n5/extref/ng.3244-S1.pdf) and it's not clear from the paper that you do as well. So before you set the experiments running, I would suggest checking that you have rescaling \"on\". This can make a HUGE difference (from 50% of signal variance down to 5%).", "Note we followed the precise protocol of Song, Hao, Storey (2015). We think it's important to stress this as we did not at all deviate from published experiments. Namely, we did not make any changes to the hyperparameters to be fair with baselines. (In fact, the experiment setup favors logistic factor analysis, not ours).\n\nWe're running your proposed experiment now: # causal variants ~ [5, 2000], constraining variance of causal variants. We will update you in the next day or two.", "I totally agree with this reviewer. The effect size of the SNPs (unless they very penetrant) is 1%-5%. I even think 10% is too high. If the p-values were inflated, you can see that in the Q-Q plot. I also don't understand why ROC curve and more particularly for FP<5% (which is a knee of the ROC curve). This is what I asked earlier. \n\nI agree with the reviewer, good work but more experiment in a realistic setting is required to actually evaluate this method. ", "The revision clarifies some of my concerns but the fact that all the methods get perfect recall (at which pv threshold?) made me look more into the simulation settings. 10 causal variants with weights drawn from N(0, 0.5) is an incredible amount of signal, making the problem too easy. For future revision of the paper I encourage the authors to sample the number of causal variants in [5, 2000] and make sure that the *total* variance explained by all the causal variants is within a reasonable range (I would say 0.01 to 0.1, but it’s arguable). I also encourage the authors to add Manhattan plots of both the simulated and real data as supplementary, alongside a qq-plot of the test statistics on null-only data on simulation (\\beta_m = 0 for all m). Again: this is potentially a good paper, but the experiments need some work.", "In this paper, the authors propose to use the so-called implicit model to tackle Genome-Wide Association problem. The model can be viewed as a variant of Structural Equation Model. Overal the paper is interesting and relatively well-written but some important details are missing and way more experiments need to be done to show the effectiveness of the approach.\n\n* How do the authors call a variant to be associated with the phenotype (y)? More specifically, what is the distribution of the null hypothesis? Section D.3 in the appendix does not explain the hypothesis testing part well. This method models $x$ (genetic), $y$ (phenotype), and $z$ (confounder) but does not have a latent variable for the association. For example, there is no latent indicator variable (e.g., Spike-Slab models [1]) for each variant. 
Did they do hypothesis testing separately after they fit the model? If so, this has double dipping problem because the data is used once to fit the model and again to perform statistical inference. \n\n* In GWAS, a method resulting in high power with control of FP is favored. In traditional univariate GWAS, the false positive rate is controlled by genome-wide significant level (7e-8), Bonferroni correction or other FP control approaches. Why Table 1 does not report FP? I need Table 1 to report the following: What is the power of this method if FPR is controlled(False Positive Rate < 0.05)? Also, the ROC curve for FPR<0.05 should be reported for all methods. \n\n* I believe that authors did a good job in term of a survey of the available models for GWA from marginal regression to mixed effect model, etc. The authors account for typical confounders such as cryptic relatedness which I liked. However, I recommend the authors to be cautious calling the association detected by their method \" a Causal Association.\" There are tons of research done to understand the causal effect of the genetic variants and this paper (and this venue) is not addressing those. There are several ways for an associated variant to be non-causal and this paper does not even scratch the surface of that. For example, in many studies, discovering the causal SNPs means finding a genetic variant among the SNPs in LD of each other (so-called fine mapping). The LD-pruning procedure proposed in this paper does not help for that purpose. \n\n* This approach jointly models the genetic variants and the phenotype (y). Let us assume that one can directly maximize the ML (ELBO maximizes a lower bound of ML). The objective function is disproportionally influenced by the genetic variants (x) than y because M is very large ( $\\prod_{m=1}^M p(w) p(x|z,w,\\phi) >> p(z) p(y|x,z,\\theta) $ ). Effectively, the model focuses on the genetic variants, not by the disease. This is why multi-variate GWAS focuses on the conditional p(y|x,z) and not p(y,x,z). Nothing was shown in the paper that this focusing on p(y,x,z) is advantageous to p(y|x,z). \n\n* In this paper, the authors use deep neural networks to model the general functional causal models. Since estimation of the causal effects is generally unidentifiable (Sprites 1993), I think using a general functional causal model with confounder modeling would have a larger chance to weaken the causal effects because the confounder part can also explain part of the causal influences. Is there a theoretical guarantee for the proposed method? Practically, how did the authors control the model complexity to avoid trivial solutions?\n\nMinor\n-------\n* The idea of representing (conditional) densities by neural networks was proposed in the generative adversarial networks (GAN). In this paper, the authors represent the functional causal models by neural networks, which is very related to the representation used in GANs. The only difference is that GAN does not specify a causal interpretation. I suggest the authors add a short discussion of the relations to GAN.\n\n* Previous methods on causal discovery rely on restricted functional causal models for identifiability results. They also use Gaussian process or multi-layer perceptron to model the functions implicitly, which can be consider as neural networks with one hidden layer. 
The sentence “These models typically focus on the task of causal discovery, and they assume fixed nonlinearities or smoothness which we relax using neural networks.” in the related work section is not appropriate. \n\n[1] Scalable Variational Inference for Bayesian Variable Selection in Regression, and Its Accuracy in Genetic Association Studies", "This paper tackles two problems common in genome-wide association studies: confounding (i.e. structured noise) due to population structure and the potential presence of non-linear interactions between different parts of the genome. To solve the first problem this paper effectively suggests learning the latent confounders jointly with the rest of the model. For the second problem, this paper proposes “implicit causal models’, that is, models that leverage neural architectures with an implicit density. \n\nThe main contribution of this paper is to create a bridge between the statistical genetics community and the ML community. The method is technically sound and does indeed generalize techniques currently used in statistical genetics. The main concerns with this paper is that 1) the claim that it can detect epistatic interactions is not really supported. Yes, in principle the neural model used to model y could detect them, but no experiments are shown to really tease this case apart 2) validating GWAS results is really hard, because no causal information is usually available. The authors did a great job on the simulation framework, but table 1 falls short in terms of evaluation metric: to properly assess the performance of the method on simulated data, it would be good to have evidence that the type 1 error is calibrated (e.g. by means of qq plots vs null distribution) for all methods. At the very least, a ROC curve could be used to show the quality of the ranking of the causal SNPs for each method, irrespective of p-value cutoff.\n\nQuality: see above. The technical parts of this paper are definitely high-quality, the experimental side could be improved.\nClarity: if the target audience of this paper is the probabilistic ML community, it’s very clear. If the statistical genetics community is expected to read this, section 3.1 could result too difficult to parse. As an aside: ICLR might be the right venue for this paper given the high ML content, but perhaps a bioinformatics journal would be a better fit, depending on intended audience.\n", "The paper presents a non-linear generative model for GWAS that models population structure.\nNon-linearities are modeled using neural networks as non-linear function approximators and inference is performed using likelihood-free variational inference.\nThe paper is overall well-written and makes new and non-trivial contributions to model inference and the application.\nStated contributions are that the model captures causal relationships, models highly non-linear interactions between causes and accounts for confounders. However, not all claims are well-supported by the data provided in the paper. 
\nIn particular, the aspect of causality does not seem to be considered in the application beyond a simple dependence test between SNPs and phenotypes.\n\nThe paper also suffers from unconvincing experimental validation:\n- The evaluation metric for simulations based on precision is not meaningful without reporting the recall at the same time.\n\n- The details on how significance in each experiment has been determined are not sufficient.\nFrom the description in D.3, a p-value threshold of 0.0025 has been applied. Has this threshold been used for all methods?\nThe description in D.3 seems to describe a posterior probability of the weight being zero, instead of a Frequentist p-value, which would be the probability of estimating a parameter at least as large on a data set that had been generated with a 0-weight.\n\n- Genomic control is applied in the real-world experiment but not on the simulations. Genomic control changes the acceptance threshold of each method in a different way. Both precision and recall depend on this acceptance threshold. Genomic control is a heuristic that adjusts for being too anti-conservative, but also for being too conservative, making it hard to judge the performance of each method on its own. Consequently, the paper should provide additional detail on the results and should contrast the performance of the method without the use of genomic control.\n\nminor:\n\nThe authors claim to model nonlinear, learnable gene-gene and gene-population interactions.\nWhile neural networks may approximate highly non-linear functions, it still seems as if the confounders are modeled largely as linear. This is indicated by the fact that the authors report performance gains from adding the confounders as input to the final layer.\n\nThe two-step approach to confounder correction is compared to PCA and LMMs, which are stated to first estimate confounders and then use them for testing.\nFor LMMs this is not really true, as LMMs treat the confounder as a latent variable throughout and only estimate the induced covariance.\n", "Thanks for the details, suggestions, and praise! On the experimental validation, see discussion in the Comment to all reviewers.\n\n> * However, I recommend the authors to be cautious calling the association detected by their method \" a Causal Association.\" [...]\n\nThanks for this caution. We added these limitations and notes in the revision. In short, we agree that guaranteeing real-world causation involves myriad complexities. For example, the discussion now includes future work to incorporate linkage disequilibrium and the granularity of SNP loci.\n\n> * [...] Effectively, the model focuses on the genetic variants, not by the disease. This is why multi-variate GWAS focuses on the conditional p(y|x,z) and not p(y,x,z). Nothing was shown in the paper that this focusing on p(y,x,z) is advantageous to p(y|x,z).\n\nThis might be a misunderstanding: we also focus on the conditional, as the goal is to infer parameters from p(y | x). We only use the joint p(y, x, z) to adjust for population-confounders, which is similar to common techniques in GWAS. The paragraph below Eq 4 describes that, like two-stage estimation, the method can be thought of as first inferring p(z | x, y); then it infers parameters in p(y | x, z), drawn over posterior samples of z.\n\n> * [...] 
Since estimation of the causal effects is generally unidentifiable (Sprites 1993), I think using a general functional causal model with confounder modeling would have a larger chance to weaken the causal effects because the confounder part can also explain part of the causal influences. Is there a theoretical guarantee for the proposed method? Practically, how did the authors control the model complexity to avoid trivial solutions?\n\nProposition 1 provides a consistency guarantee, rendering the adjustment for latent confounders valid in observational data (assuming the causal graph is correct). In practice, the number of latent dimension in the confounder reduces to typical probabilistic modeling with latent variable models, precisely like the number of latent factors in PCA (Price et al., 2006) and logistic factor analysis (Song, Hao, Storey, 2015). In our experiments, Appendix D explains that we fix the latent dimension across methods and experiments.\n\n> * The idea of representing (conditional) densities by neural networks was proposed in the generative adversarial networks (GAN). In this paper, the authors represent the functional causal models by neural networks, which is very related to the representation used in GANs. [...]\n\nThanks for the note. We avoided discussion of GANs as we did not use any of their architecture or training insights. However, implicit models are indeed a model class including GANs (e.g., Mohamed and Lakshminarayanan, 2016). We also applied likelihood-free variational inference (Tran et al., 2017), which uses adversarial training.\n\n> * Previous methods on causal discovery rely on restricted functional causal models for identifiability results. They also use Gaussian process or multi-layer perceptron to model the functions implicitly, which can be consider as neural networks with one hidden layer. [...]\n\nDo you have references for functional causal models with MLPs? We'd love to revise the statement for models not assuming fixed nonlinearities. We're only familiar with works such as Mooij et al. (2010), which uses GPs and does typically assume smoothness via its kernel. In private discussion with Joris Mooij, we have clarified the statement.", "Thanks for the detailed comments! On the experimental validation, see discussion in the Comment to all reviewers.\n\n> The authors claim to model nonlinear, learnable gene-gene and gene-population interactions. While neural networks may approximate highly non-linear functions, it still seems as if the confounders are modeled largely as linear.\n\nNonlinear effect strictly from confounder to trait was not necessary in practice. However, as evidenced by the experiments, the trait's neural network gets noticeable improvement from nonlinear interaction between genes, and between gene-confounder (population). (See reply to AnonReviewer2 for more details.)\n\n> The two step approach to confounder correction is compared to PCA and LMMs, which are stated to first estimate confounders and then use them for testing. For LMMs this is not really true, as LMMs treat the confounder as a latent variable throughout and only estimate the induced covariance.\n\nThanks for this correction; we adjusted the comment in the revision.", "Thanks for the comments!\n\n> 1) the claim that it can detect epistatic interactions is not really supported. 
Yes, in principle the neural model used to model y could detect them, but no experiments are shown to really tease this case apart\n\nOne quantitative evidence is that for all setting configurations excluding Spatial, the GCAT baseline captures the latent confounder as well as the implicit causal model (true data is generated from a class that the GCAT subsumes). This means the baseline and implicit causal model only differ by the trait's model, p(y | x, z). The baseline uses a linear model; the implicit causal model uses a neural network. The latter outperforms across all configurations.\n\nWe're open to other suggestions on how to show this quantitatively. In practice, we also see that each trained hidden unit in the first layer has nonzero weights coming from multiple SNPs simultaneously.\n\n> 2) to properly assess the performance of the method on simulated data, it would be good to have evidence that the type 1 error is calibrated [...]\n\nSee discussion in the Comment to all reviewers.\n\n> Clarity: if the target audience of this paper is the probabilistic ML community, it’s very clear. If the statistical genetics community is expected to read this, section 3.1 could result too difficult to parse.\n\nThanks for the note! We targeted the audience to probabilistic ML. We're planning to submit another work applying these methods to new GWAS. There, we show newly discovered causal SNPs for the first time, and we believe this is more appropriate and interesting for the genetics community.", "Thanks to the three reviewers for their excellent feedback. They all found the paper interesting, well-written, and novel. Quoting R2 for example, \"The main contribution of this paper is to create a bridge between the statistical genetics community and the ML community. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.\"\n\nWe addressed comments in the replies and revision. All reviewers asked questions about the experiments; we provide more detail here.\n\n> R1: From the description in D.3 the p-value threshold of 0.0025 has been applied. Has this threshold been used for all methods?\n\nThe experiment's procedure strictly follows Song, Hao, Storey (2015). Namely, the p-value threshold is 0.0025 and set across all methods.\n\n> R1: [...] the paper should provide additional detail on the results and should contrast the performance of the method without the use of genomic control.\n\nWe used genomic control only to compare to baselines in the literature for the real-world experiment. Unfortunately, we're unable to reproduce these papers' results using the genomic control. This makes it difficult to compare to baselines without genomic control unless we're unfair against them. It's also difficult to necessarily assess which method performs best as there is no ground truth: we can only establish that our work can indeed capture well-recognized causal SNPs as the baselines.\n\n> R1, R2, and R3 ask about recall, false positive rate, and ROC curves.\n\nWe included recall in the revision. All true-positives were found across all methods. Like a real-world experiment, only few (10) SNPs are causal; and in absolute number, the number of false positives typically ranged from 0 to 50 (excluding number of expected false positives); PCA deviated more and had up to 300 for TGP and sparse settings. 
Rarely did a method not capture all true positives, and if so, it only missed one.\n\nRegarding false positive rate, it gives a similar signal as the measured precision. This is because as mentioned, the number of true positives was roughly the same across methods. Because precision is the number of detected true positives over the number of detected true positives and false positives, precision simply differed by a method's number of false positives. We also did control for the number of expected false positives; this is clarified in Appendix D's revision.\n\n> R3: * How do the authors call a variant to be associated with the phenotype (y)? More specifically, what is the distribution of the null hypothesis?\n\nWe revised Appendix D. In summary, we followed Song Hao Storey (2015), which calculates a likelihood ratio test statistic for each SNP. It is the difference of the maximum likelihood solution on the trait model to the maximum likelihood solution on the trait model with influence of the SNP fixed to zero. This null has a chi^2 distribution with degrees of freedom equal to the number of weights fixed at zero (equal to the number of hidden units in the first layer)." ]
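The authors' reply directly above describes calling a SNP significant through a likelihood-ratio test whose null is chi-squared, with degrees of freedom equal to the number of weights fixed at zero (the number of first-layer hidden units). The helper below is a generic illustration of that recipe, not the paper's code; the log-likelihood values and the 32-unit layer in the example are hypothetical, and the 0.0025 threshold follows the protocol the authors say they adopted from Song, Hao & Storey (2015).

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_full, loglik_restricted, n_zeroed_weights):
    """P-value for H0: all of the SNP's weights into the trait model are zero."""
    stat = 2.0 * (loglik_full - loglik_restricted)   # likelihood-ratio statistic
    return chi2.sf(stat, df=n_zeroed_weights)        # chi^2 survival function

# Hypothetical example: zeroing one SNP's influence removes 32 first-layer weights.
p = lrt_pvalue(loglik_full=-1041.7, loglik_restricted=-1069.3, n_zeroed_weights=32)
significant = p < 0.0025
```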
[ -1, -1, -1, -1, -1, 5, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 5, 5, 5, -1, -1, -1, -1 ]
[ "SynpHhUBf", "ByKzdqIBM", "HkjOg7SBM", "HkjOg7SBM", "S1488upQG", "iclr_2018_SyELrEeAb", "iclr_2018_SyELrEeAb", "iclr_2018_SyELrEeAb", "S1EEVuwlG", "HJxrNo3xz", "SyBCAMcgG", "iclr_2018_SyELrEeAb" ]
iclr_2018_HJC2SzZCW
Sensitivity and Generalization in Neural Networks: an Empirical Study
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with different architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the input-output Jacobian of the network, and that this correlates well with generalization. We further establish that factors associated with poor generalization -- such as full-batch training or using random labels -- correspond to higher sensitivity, while factors associated with good generalization -- such as data augmentation and ReLU non-linearities -- give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.
accepted-poster-papers
Reviewers always find problems in papers like this. AnonReviewer1 would have preferred to have seen a study of traditional architectures, rather than fully connected ones, which are now less frequently used. They thought the paper was too long, the figures too cluttered, and were not convinced by the discussion around linear v. elliptical trajectories. I appreciate the need for a parametrizable architecture, although it may not be justified to translate these insights to other architectures, and then the fact that fully connected architectures are less common undermines the impact of the work. I don't find the length a problem, and I don't find the figures a problem. After the back and forth, AnonReviewer3 believes that there are data compatibility issues associated with the studied transformations and that non-linear transformations would have been more informative. I find the reviewers response to be convincing. AnonReviewer2 is strongly in favor of acceptance, finding the work exhaustive, interesting, and of high quality. I'm inclined to agree.
train
[ "rJOMWJmSf", "HyVOdjxHf", "H1gNqp1BG", "HkwqeuYeG", "rJzvIiKlf", "rJtlOoqlG", "S1p14OT7M", "ryrJixIMG", "rkLQyWIGf", "Byq8l-IfM", "SyLS0eLzM", "SkJsigUGz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for taking the time to consider and respond to our rebuttal!\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\n(1)\n>> Indeed, it is clearly expected that the performance of the networks sensibly drops in point far away from the data support.\n\nThis statement is true, but we strongly disagree that it detracts from our paper:\n1.1) This is not the statement we are making / evaluating in the paper (yet it is implied as motivation for experiments in section 4.1; see comments below);\n1.2) Our findings in section 4.1 (Figure 3), is that the _sensitivity_ of the network to small input perturbations increases away from the data. To the best of our knowledge, this result is novel and not obvious.\n1.3) This common-sense expectation only corroborates the correlation we establish between sensitivity and generalization (i.e. “trained networks perform poorly away from training data” and “trained networks are highly sensitive away from training data”) in other numerous experiments.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\n(2)\n>>Similarly, spatial translations too present data compatibility issues, as they lead to non representative data points ( such as the cropped digits shown in Figure 2).\n\n>>To my opinion, it could have been more useful to investigate the model performance in case of meaningful data variation, representing indeed \"hard\" and realistic testing cases. As I pointed in my previous review, these cases may be represented by nonlinear spatial warping: generating such nonlinear paths in the manifold would be significantly more meaningful.\n\nWe strongly disagree with this argument and hope that you will reconsider this perspective in light of the key points below:\n2.1) As we stated in the paper and rebuttal, the best-performing metric in our work (Frobenius norm of the Jacobian) is evaluated at individual data points and _has nothing to do at all_ with the trajectories used to evaluate the other metric (linear region transitions). As such this concern cannot apply to the relationship between Jacobian norm and generalization.\n\n2.2) The statement that other spatial warpings are more relevant/meaningful than horizontal translation on the datasets we consider (MNIST, Fashion-MNIST, CIFAR10, CIFAR100) requires justification. Without appropriate data augmentation, spatial warping will generate images that are not representative of the true data distribution as well. We emphasize that the trajectory interpolating horizontal translations is _not a linear trajectory in input space_ (see Figure 2) and we see no obvious reasons to assume it is simpler than other warping transformations.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\nThank you again for carefully considering our paper.", "We appreciated the effort provided by the authors in improving the paper. However the following problems still remain.\n\n1. The authors did not include well established architectures (DenseNets, VGG, AlexNet) for the experiments.\n\n2. Presenting the results for any possible parameter configuration makes the plot pretty hard to read.\n\n3. The approach of elliptic interpolation still remains not fully convincing - as AnonReviewer3 also pointed out.\n\n4. 
The paper is not yet compliant with the suggested format and the added material does not fully justify the exceeded length.", "I thank the authors for the detailed reply to my comments.\nI may understand the idea of studying the model behaviour on data lying far from the training data. However, I believe that the proposed experimental paradigm still does not allow one to draw insightful conclusions. Indeed, it is clearly expected that the performance of the networks sensibly drops at points far away from the data support. Similarly, spatial translations too present data compatibility issues, as they lead to non-representative data points (such as the cropped digits shown in Figure 2).\n\nIn my opinion, it could have been more useful to investigate the model performance in case of meaningful data variation, representing indeed \"hard\" and realistic testing cases. As I pointed out in my previous review, these cases may be represented by nonlinear spatial warping: generating such nonlinear paths in the manifold would be significantly more meaningful. \n\n", "The authors have undertaken a large-scale empirical evaluation on sensitivity and generalization for DNNs within the scope of image classification. They are investigating the suitability of the F-norm of the input-output Jacobian in large-scale DNNs and they evaluate sensitivity and generalization metrics across the input space, both on and off the data manifold. They convincingly present strong empirical evidence for the F-norm of the Jacobian to be predictive and informative of generalization of the DNN within the image classification domain.\n\nThe paper is well written. The problem is clearly presented and motivated. Most potential questions of a reader as well as interesting details are supplied by footnotes and the appendix.\nThe contributions are to my knowledge both novel and significant.\nThe paper seems to be technically correct. The methodology and conclusions are reasonable.\nI believe that this is important work and applaud the authors for undertaking it. I hope that the interesting leads will be further investigated and that similar studies will be conducted beyond the scope of image classification.\nThe research and further investigations would be strengthened if they included a survey of the networks presented in the literature, in a similar manner as the authors did with the generated networks within the presented study. For example, compare networks from benchmark competitions in terms of sensitivity and generalization using the metrics presented here.\n\nPlease define \"generalization gap\" and show how you calculate/estimate it. The term is used differently in much of the machine learning literature(?). Given this and that the usually sought-after generalization error is unobtainable due to the unknown joint distribution over data and label, it is necessary to clarify the precise meaning of \"generalization gap\" and how you calculated it. I intuitively understand but I am not sure that the metric I have in mind is the same as the one you use. Such clarification will also improve the accessibility for a wider audience.\n\nFigure 4:\nI find Figure 4:Center a bit confusing. Is it there to show where on the x-axis of Figure 4:Top, the three points are located? Does this mean that the points are not located at pi/3, pi, 5pi/3 as indicated in the figure and the vertical lines of the figure grid? 
If it is not, then is it maybe possible to make the different sub-figure in Figure 4 more distinctive, as to not visually float into each other?\n\nFigure 5:\nThe figure makes me curious about what the regions look like close to the training points, which is currently hidden by the content of the inset squares. Maybe the square content can be made fully transparent so that only the border is kept? The three inset squares could be shown right below each sub-figure, aligned at the x-axis with the respective position of each of the data points.", "This work investigates sensitivity and generalisation properties of neural networks with respect to a number of metrics aimed at quantifying the robustness with respect to data variability, varying parameters and representativity of training/testing data. \nThe validation is based on the Jacobian of the network, and in the detection of the “transitions” associated to the data space. These measures are linked, as the former quantifies the sensitivity of the network respect to infinitesimal data variations, while the latter quantifies the complexity of the modelled data space. \nThe study explores a number of experimental setting, where the behaviour of the network is analysed on synthetic paths around training data, from pure random data points, to curves interpolating different/same data classes.\nThe experimental results are performed on CIFAR10,CIFAR100, and MNIST. Highly-parameterised networks seem to offer a better generalisation, while lower Jacobian norm are usually associated to better generalisation and fewer transitions, and can be obtained with data augmentation.\n\nThe paper proposes an interesting analysis aimed at the empirical exploration of neural network properties, the proposed metrics provide relevant insights to understand the behaviour of a network under varying data points. \n\nMajor remarks.\n\nThe proposed investigation is to my opinion quite controversial. Interesting data variation does not usually corresponds to linear data change. When considering the linear interpolation of training data, the authors are actually creating data instances not compatible with the original data source: for example, the pixel-wise intensity average of digits is not a digit anymore. For this reason, the conclusions drawn about the model sensitivity are to my opinion based a potentially uninteresting experimental context. Meaningful data variation can be way more complex and high-dimensional, for example by considering spatial warps of digits, or occlusions and superpositions of natural images. This kind of variability is likely to correspond to real data changes, and may lead to more reliable conclusions. For this reason, the proposed results may provide little indications of the true behaviour of the models data in case of meaningful data variations. \n\nMoreover, although performed within a cross-validation setting, training and testing are still applied to the same dataset. Cross-validation doesn’t rule out validation bias, while it is also known that the classification performance significantly drops when applied to independent “unseen” data, provided for example in different cohorts. I would expect that highly parameterised models would lead to worse performance when applied to genuinely independent cohorts, and I believe that this work should extend the investigation to this experimental setting.\n\nMinor remarks.\n\nThe authors should revise the presentation of the proposed work. The 14 figures(!) 
of main text are not presented in the order of appearance. The main one (figure 1) is provided in the first paragraph of the introduction and never discussed in the rest of the paper. \n", "This paper proposes an analysis of the robustness of deep neural networks with respect to data perturbations. \n\n*Quality*\nThe quality of exposition is not satisfactory. Actually, the paper is pretty difficult to evaluate at the present stage and it needs a drastic change in the writing style.\n\n*Clarity*\nThe paper is not clear and highly unstructured.\n\n*Originality* \nThe originality is limited for what regards Section 3: the proposed metrics are quite standard tools from differential geometry. Also, the idea of taking into account the data manifold is not brand new since already proposed in “Universal Adversarial Perturbation” at CVPR 2017.\n\n*Significance*\nDue to some flaws in the experimental settings, the relevance of the presented results is very limited. First, the authors essentially exploit a customized architecture, which has been broadly fine-tuned regarding hyper-parameters, gating functions and optimizers. Why not using well established architectures (such as DenseNets, ResNets, VGG, AlexNet)? \nMoreover, despite having a complete portrait of the fine-tuning process is appreciable, this compromises the clarity of the figures which are pretty hard to interpret and absolutely not self-explanatory: probably it’s better to only consider the best configuration as opposed to all the possible ones.\nSecond, authors assume that circular interpolation is a viable way to traverse the data manifold. The reviewer believes that it is an over-simplistic assumption. In fact, it is not guaranteed a priori that such trajectories are geodesic curves so, a priori, it is not clear why this could be a sound technique to explore the data manifold.\n\nCONS:\nThe paper is difficult to read and needs to be thoroughly re-organized. The problem is not stated in a clear manner, and paper’s contribution is not outlined. The proposed architectures should be explained in detail. The results of the sensitivity analysis should be discussed in detail. The authors should explain the approach of traversing the data manifold with ellipses (although the reviewer believes that such approach needs to be changed with something more principled). Figures and results are not clear.\nThe authors are kindly asked to shape their paper to match the suggested format of 8 pages + 1 of references (or similar). The work is definitely too long considered its quality. Additional plots and discussion can be moved to an appendix.\nDespite the additional explanation in Footnote 6, the graphs are not clear. Probably authors should avoid to present the result for each possible configuration of the hyper-parameters, gatings and optimizers and just choose the best setting.\nApart from the customized architecture, authors should have considered established deep nets, such as DenseNets, ResNets, VGG, AlexNet.\nThe idea of considering the data manifold within the measurement of complexity is a nice claim, which unfortunately is paired with a not convincing experimental analysis. Why ellipses should be a proper way to explore the data manifold? 
In general, circular interpolation is not guaranteed to be geodesic curves which lie on the data manifold.\n\nMinor Comments: \nSentence to rephrase: “We study common in the machine learning community ways to ...”\nPlease, put the footnotes in the corresponding page in which it is referred.\nThe reference to ReLU is trivially wrong and need to be changed with [Nair & Hinton ICML 2010]\n\n**UPDATED EVALUATION AFTER AUTHORS' REBUTTAL**\nWe appreciated the effort in providing specific responses and we also inspected the updated version of the paper. Unfortunately, despite the authors' effort, the reviewer deems that the conceptual issues that have been highlighted are still present in the paper which, therefore, is not ready for acceptance yet. ", "Thank you for the updated evaluation and for taking the time to review the revised submission. \n\nWe believe we have addressed all of the original concerns in our rebuttal. We would greatly appreciate any further feedback on what specific \"conceptual issues\" remain unresolved, so that we can use your feedback to further improve our paper.\n \nWe have also uploaded a new revision that further improves clarity and exposition, especially related to areas that you flagged as unclear.", "\n(5)\n>> Second, authors assume that circular interpolation is a viable way to traverse the data manifold. The reviewer believes that it is an over-simplistic assumption. In fact, it is not guaranteed a priori that such trajectories are geodesic curves so, a priori, it is not clear why this could be a sound technique to explore the data manifold.\n\nThe interpretation of the reviewer is incorrect. Nowhere in the text did we claim to traverse the data manifold with an ellipse.\n\nEllipses intersecting the data manifold at 3 specific points are studied only in section 4.1 to track our metrics along a trajectory with respect to how close a point to the data manifold is, which is an appropriate choice for this purpose.\n\nIn the rest of the paper, the transitions metric is computed along a trajectory that interpolates horizontal translations of an image (as stated in section 3.2), which is not an ellipse and generally lies within the data manifold for translation invariant datasets. In fact we augment the training data with translations for one experiment. 
\n\nWe have edited the toy illustration of such a trajectory in Figure 2 (right) to make this more clear in the second revision.\n\nIn addition, we never claimed any of our curves to be geodesics and fail to see how this property is necessary for the purpose of our work.\n\nFinally, the relationship between the Frobenius norm of the network Jacobian and generalization is averaged over individual data points and has nothing to do with traversing the data manifold.\n\nWe thank the reviewer for the useful feedback and will improve the exposition in the next revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(6)\n>> The authors should explain the approach of traversing the data manifold with ellipses (although the reviewer believes that such approach needs to be changed with something more principled).\n\nPlease see discussion about trajectories above (5).\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(7)\n>> Probably authors should avoid to present the result for each possible configuration of the hyper-parameters, gatings and optimizers and just choose the best setting.\n\nPlease see the relevant discussion above (4).\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(8)\n>> Apart from the customized architecture, authors should have considered established deep nets, such as DenseNets, ResNets, VGG, AlexNet.\n\nPlease see relevant discussion above (3).\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(9)\n>> The idea of considering the data manifold within the measurement of complexity is a nice claim, which unfortunately is paired with a not convincing experimental analysis. Why ellipses should be a proper way to explore the data manifold? In general, circular interpolation is not guaranteed to be geodesic curves which lie on the data manifold.\n\nPlease see the relevant discussion above (5).\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(10)\n>> Sentence to rephrase: “We study common in the machine learning community ways to ...”\n\nThank you, we have changed the wording in the second revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(11)\n>> Please, put the footnotes in the corresponding page in which it is referred.\n\nThank you, we have fixed the footnotes in the second revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(12) \n>> The reference to ReLU is trivially wrong and need to be changed with [Nair & Hinton ICML 2010]\n\nThank you for pointing this out, we have fixed the reference in the second revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\nWe thank the reviewer for the detailed feedback! 
We will work to improve the clarity of our work in the next revision.", "\n(1)\n>> The proposed investigation is to my opinion quite controversial. Interesting data variation does not usually corresponds to linear data change. When considering the linear interpolation of training data, the authors are actually creating data instances not compatible with the original data source: for example, the pixel-wise intensity average of digits is not a digit anymore. For this reason, the conclusions drawn about the model sensitivity are to my opinion based a potentially uninteresting experimental context. Meaningful data variation can be way more complex and high-dimensional, for example by considering spatial warps of digits, or occlusions and superpositions of natural images. This kind of variability is likely to correspond to real data changes, and may lead to more reliable conclusions. For this reason, the proposed results may provide little indications of the true behaviour of the models data in case of meaningful data variations.\n\nWe believe there to be two potential concerns in this remark and shall address them separately.\n\n\n1.A) The concern of linear interpolation of the training data being incompatible with the original data source, and as such our sampling trajectories in section 4.1 not containing meaningful data variations. This is by design! Such a trajectory will indeed lie mostly outside of the data manifold, yet intersect it in 3 points. Measuring our metrics along these trajectories allow us to draw conclusions about the behavior of a trained neural network near the data manifold and away from it (Figure 3).\n\nOtherwise, in the rest of the paper, transitions are counted along a trajectory interpolating horizontal translations of an image (see definition in section 3.2), which do represent a complex curve along a meaningful data variation (translation). We agree that analyzing a richer set of transformations within the data manifold would be interesting. However, characterizing data variation is a complex field of study, and we believe that translations provide a well defined and tractable set of transformations which typically remain within the data distribution. \n\nFinally, the best-performing metric of our work, that is the Frobenius norm of the Jacobian (see definition in section 3.1) is averaged over individual data points and does not hinge on any kind of interpolation!\n\n\n1.B) We can alternatively interpret your remark as being skeptical regarding the Jacobian norm reflecting sensitivity with respect to meaningful data variations. Indeed, it does not, and as described in section 3.1, it reflects sensitivity to isotropic perturbations.\n\nMotivated by this concern, we have performed an additional experiment measuring the norm of the Jacobian of the output with respect to horizontal shift, hence sensitivity to a meaningful data variation (translation) along the data manifold (bottom part of the figure, in contrast to the top): https://www.dropbox.com/s/cmh2s3eqb7vihj9/horizontal_translation_jacobian.pdf?dl=1\n\nWe observe an effect qualitatively similar to (yet less noisy than) when the Frobenius norm of the input-output Jacobian is considered (top part of the figure, or Figure 8 in the first paper revision / Figure 9 (bottom) in the second revision).\n\nHowever, we believe our current results still provide a useful insight. Different datasets have different axes of data variations. 
Which those are may not always be clear: indeed, understanding all the meaningful axes of data variations would essentially amount to solving the problem of generating natural images. Yet in the absence of preconceived notions of what directions are meaningful, the input-output Jacobian can be a universal metric indicative of generalization, as evidenced by our experiments on 4 datasets (MNIST, Fashion-MNIST, CIFAR10, CIFAR100) which definitely have different notions of meaningful data variations.\n------------------------------------------------------------------------------------------------------------------------------------------------------", "\n(1)\n>> Please define \"generalization gap\" and show how you calculate/estimate it. The term us used differently in much of the machine learning literature(?). Given this and that the usually sought after generalization error is unobtainable due to the unknown joint distribution over data and label, it is necessary to clarify the precise meaning of \"generalization gap\" and how you calculated it. I intuitively understand but I am not sure that the metric I have in mind is the same as the one you use. Such clarification will also improve the accessibility for a wider audience.\n\nThank you for the comment, we have included the definition in the Appendix A.4 in the second revision.\n\nWe define generalization gap as the difference between train and test accuracy on the whole train and test sets. Precisely,\n\nGeneralization gap = (# correctly classified training images)/(50K) - (# correctly classified test images)/(10K).\n\n(all training and test sets are of size 50K and 10K respectively)\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(2)\n>> Figure 4:\nI find Figure 4:Center a bit confusion. Is it there to show where on the x-axis of Figure 4:Top,the three points are located? Does this mean that the points are not located at pi/3, pi, 5pi/3 as indicated in the figure and the vertical lines of the figure grid? If it is not, then is it maybe possible to make the different sub-figure in Figure 4 more distinctive, as to not visually float into each other?\n\nWe apologize for the confusing figure. Central figure (now top in the second revision) is there for the exact reason you mention (to show where the points are located, which indeed should be pi/3, pi, 5pi/3). We have separated the subfigures further and aligned the digits with the values of pi/3, pi and 5pi/3 precisely in the second revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(3)\n>> Figure 5:\nThe figure makes me curious about what the regions look like close to the training points, which is currently hidden by the content of the inset squares. Maybe the square content can be made fully transparent so that only the border is kept? The three inset squares could be shown right below each sub-figure, aligned at the x-axis with the respective position of each of the data points.\n\nThank you for the interesting suggestion! We have produced the requested figures with only the boundaries overlayed:\n-- before training: https://www.dropbox.com/s/14xcvoval4eluz4/boundaries_before_transparent.png?dl=1;\n-- after training: https://www.dropbox.com/s/lj7y9eimnqw0lsd/boundaries_after_transparent.png?dl=1.\n\nWe do not observe any special behavior at such scale. 
This is in agreement with Figure 3 (bottom), showing that the density of transitions around the points changes slowly.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\nThank you for the detailed review and helpful comments! We are pleased that you found our work useful.", "\n(2)\n>> Moreover, although performed within a cross-validation setting, training and testing are still applied to the same dataset. Cross-validation doesn’t rule out validation bias, while it is also known that the classification performance significantly drops when applied to independent “unseen” data, provided for example in different cohorts. I would expect that highly parameterised models would lead to worse performance when applied to genuinely independent cohorts, and I believe that this work should extend the investigation to this experimental setting.\n\nWe would like to address your concern. Could you please expand on what you mean by \"genuinely independent cohorts\"?\n\nAre you concerned that MNIST images in train and test may have digits written by same individuals? If so, we believe this should be less of a problem in Fashion-MNIST, CIFAR10, and CIFAR100 datasets where we see similar results.\n\nWe would like to provide some additional information regarding our training and evaluation procedure, hoping that this might address your concern. Train and test data are balanced random 50K and 10K i.i.d. samples respectively. We train all our networks for a large number of gradient steps (2^18 or 2^19 when applicable) without any regularization / validation / early stopping. We then evaluate all quantities mentioned in the paper on the whole 50K or 10K datasets respectively, when applicable.\n\nPlease let us know if the above answers your question. If no, we will be happy to expand on it once we fully understand your request.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(3)\n>> The authors should revise the presentation of the proposed work. The 14 figures(!) of main text are not presented in the order of appearance.\n\nThank you, we have rearranged the figures in the order of presentation in the second revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(4)\n>> The main one (figure 1) is provided in the first paragraph of the introduction and never discussed in the rest of the paper.\n\nWhile we acknowledge the importance of this figure, we consider it as motivation for our study (ultimately leading to key results presented in figures 3 [sensitivity along a trajectory intersecting the data manifold] and 4 in the first paper revision / 9 in the second revision [Jacobian norm correlating with generalization]), which is why it is only mentioned in the introduction.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\nWe thank you for the careful and insightful review. 
We believe that we were able to address your concerns both in terms of rebuttal and new experimental evidence, and hope that you will raise your score as a result.", "\n(1)\n>> The originality is limited for what regards Section 3: the proposed metrics are quite standard tools from differential geometry.\n\nWe did not claim to propose novel metrics anywhere in the paper; on the contrary, we cite prior work that used / introduced them in section 2.\n\nThe novelty of this work is in performing extensive evaluation of these metrics on trained neural networks and relating them to generalization (which is also emphasized multiple times in section 2).\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(2)\n>> Also, the idea of taking into account the data manifold is not brand new since already proposed in “Universal Adversarial Perturbation” at CVPR 2017.\n\nThank you for the very interesting reference! We now cite it in related work in the second revision.\n\nHowever, we disagree with the claim that this work diminishes the novelty of ours. Nowhere in the paper did we assert to be the first to \"take into account the data manifold\". Our claim was to compare behavior of a trained network on and off the data manifold (see last paragraph of section 2).\n\nA great deal of previous research has examined the statistics of natural stimuli (e.g. dating at least as far back as Barlow, 1959). The specific paper you reference is a very interesting exploration of universal adversarial perturbations. However, it does not investigate the behavior of the network outside of the data manifold. \n------------------------------------------------------------------------------------------------------------------------------------------------------\n(3)\n>> Due to some flaws in the experimental settings, the relevance of the presented results is very limited. First, the authors essentially exploit a customized architecture, which has been broadly fine-tuned regarding hyper-parameters, gating functions and optimizers. Why not using well established architectures (such as DenseNets, ResNets, VGG, AlexNet)? \n\nThank you for your suggestion! Evaluating our metrics on the proposed architectures is indeed a very interesting direction for future research.\n\nHowever, we disagree that this establishes a flaw in our experiments. 
On the contrary, the architectures you suggest are ones that are extremely customized and fine-tuned, while the set of fully-connected (FC) architectures we consider are quite generic.\n\nThe reason for considering FC networks in this work was to perform a large-scale evaluation of the computationally-intensive metrics in a very wide variety of settings, to understand the resulting distribution over network behaviors, rather than measuring the behavior in a small number of hand-tuned scenarios (while extending the hundreds of thousands of experiments performed in this work onto complex convolutional architectures is beyond the scope of this work).\n\nWe further emphasize that:\n-- Almost all networks considered in this work have achieved 100% accuracy on the whole training set.\n-- Best-performing configurations yield test accuracies competitive with state-of-the-art results for FC networks.\n-- We evaluate our results on 4 different datasets of varying complexity (MNIST, Fashion-MNIST, CIFAR10, CIFAR100).\nFor this reason we believe our work presents results that are both comprehensive and appropriate to different generalization regimes.\n\nWe will make sure to improve our presentation and emphasize the above in our next revision.\n------------------------------------------------------------------------------------------------------------------------------------------------------\n(4)\n>> Moreover, despite having a complete portrait of the fine-tuning process is appreciable, this compromises the clarity of the figures which are pretty hard to interpret and absolutely not self-explanatory: probably it’s better to only consider the best configuration as opposed to all the possible ones.\n\nThis change would run counter to a principal strength of the paper -- that we examine the distribution over network behavior for a wide range of hyper-parameters and datasets (over thousands of experiments). Showing only the best-performing models would completely obfuscate the insights drawn from this analysis and significantly detract from our paper.\n\nWe will make sure to revise our presentation to make this point more clear.\n------------------------------------------------------------------------------------------------------------------------------------------------------" ]
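The central metric in this record, the Frobenius norm of the input-output Jacobian evaluated at individual points, is straightforward to compute with automatic differentiation. The sketch below is an illustration only: the tiny fully-connected network and the random input are stand-ins, not the paper's models or data, and the trailing comment simply restates the generalization-gap definition given in the authors' reply.

```python
import torch

# Stand-in classifier: flattened 28x28 input -> 10 logits (not one of the paper's models).
net = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

def jacobian_frobenius_norm(model, x):
    """||J||_F of the input-output map at a single flattened input x of shape (784,)."""
    J = torch.autograd.functional.jacobian(model, x)  # shape (10, 784)
    return J.pow(2).sum().sqrt()

x = torch.randn(784)  # stand-in for a normalized test image
sensitivity = jacobian_frobenius_norm(net, x)

# Generalization gap, as defined in the authors' reply above:
# (fraction of the 50K training images classified correctly)
# - (fraction of the 10K test images classified correctly).
```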
[ -1, -1, -1, 8, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 3, 5, -1, -1, -1, -1, -1, -1 ]
[ "H1gNqp1BG", "S1p14OT7M", "rkLQyWIGf", "iclr_2018_HJC2SzZCW", "iclr_2018_HJC2SzZCW", "iclr_2018_HJC2SzZCW", "rJtlOoqlG", "rJtlOoqlG", "rJzvIiKlf", "HkwqeuYeG", "rJzvIiKlf", "rJtlOoqlG" ]
iclr_2018_SyyGPP0TZ
Regularizing and Optimizing LSTM Language Models
In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the results is open sourced and is available at https://github.com/salesforce/awd-lstm-lm.
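To make the weight-dropped LSTM idea in the abstract concrete, here is a minimal sketch of DropConnect applied to the hidden-to-hidden weights. It is written with an explicit cell in Python/PyTorch purely for readability; the paper's point is that the same masking can be applied to the weight matrices of an unmodified (e.g. cuDNN) LSTM. All tensor names and sizes below are illustrative assumptions, not the authors' released code.

```python
import torch

def lstm_step(x, h, c, W_ih, W_hh, b):
    # Standard LSTM cell; gates are packed as [input, forget, cell, output].
    gates = x @ W_ih.t() + h @ W_hh.t() + b
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
    c = f * c + i * g.tanh()
    h = o * c.tanh()
    return h, c

def weight_dropped_lstm(xs, W_ih, W_hh, b, p=0.5, training=True):
    # DropConnect on the recurrent weights: sample ONE mask over W_hh and keep
    # it fixed for the whole sequence (the mask is over weights, not activations).
    if training and p > 0:
        mask = (torch.rand_like(W_hh) > p).float() / (1.0 - p)
        W_hh = W_hh * mask
    batch, hidden = xs.size(1), W_hh.size(1)
    h = xs.new_zeros(batch, hidden)
    c = xs.new_zeros(batch, hidden)
    outputs = []
    for x in xs:                       # xs: (seq_len, batch, input_size)
        h, c = lstm_step(x, h, c, W_ih, W_hh, b)
        outputs.append(h)
    return torch.stack(outputs), (h, c)

# Illustrative usage with random tensors.
seq_len, batch, n_in, n_hid = 5, 2, 8, 16
xs = torch.randn(seq_len, batch, n_in)
W_ih = torch.randn(4 * n_hid, n_in) * 0.1
W_hh = torch.randn(4 * n_hid, n_hid) * 0.1
b = torch.zeros(4 * n_hid)
out, _ = weight_dropped_lstm(xs, W_ih, W_hh, b, p=0.5)
print(out.shape)   # torch.Size([5, 2, 16])
```

The key design choice is that a single mask is sampled per forward pass over the weights themselves, so the same dropped recurrent connections are used at every time step of the sequence.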
accepted-poster-papers
This paper presents a simple yet effective weight-dropping method for LSTMs that requires no modification of an RNN cell's formulation. Experimental results show good perplexity on benchmarks compared to many baselines. All reviewers agree that the paper will be a good contribution to the conference.
test
[ "HyQVY2Bxz", "ByC55Gcxz", "BJlf_TmWG", "r18OaJsGG", "S1YYPyjMz", "BkhO4kifz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper sets a new state of the art on word level language modelling on the Penn Treebank and Wikitext-2 datasets using various optimization and regularization techniques. These already very good results are further improved, by a large margin, using a Neural Cache.\n\nThe paper is well written, easy to follow and the results speak for themselves. One possible criticism is that the experimental methodology does not allow for reliable conclusions to be drawn about contributions of all different techniques, because they seem to have been evaluated at a single hyperparameter setting (that was hand tuned for the full model?).\n\nA variant on the Averaged SGD method is proposed. This so called NT-ASGD optimizer switches to averaging mode based on recent validation losses. I would have liked to see a more thorough assessment of NT-ASGD, especially against well tuned SGD.\n\nI particularly liked Figure 3 which shows how the Neural Cache makes the model much better at handling rare words and UNK (!) at the expense of very common words. Speaking of the Neural Cache, a natural baseline would have been dynamic evaluation.\n\nAll in all, the paper is a solid contribution which deserves to be accepted. It could become even better, were the experiments to tease the various factors apart.\n", "Clearly presented paper, including a number of reasonable techniques to improve LSTM-LMs. The proposed techniques are heuristic, but are reasonable and appear to yield improvements in perplexity. Some specific comments follow.\n\nre. \"ASGD\" for Averaged SGD: ASGD usually stands for Asynchronous SGD, have the authors considered an alternative acronym? AvSGD?\n\nre. Optimization criterion on page 2, note that SGD is usually taken to minimizing expected loss, not just empirical loss (Bottou thesis 1991).\n\nIs there any theoretical analysis of convergence for Averaged SGD?\n\nre. paragraph starting with \"To prevent such inefficient data usage, we randomly select the sequence length for the forward and backward pass in two steps\": the explanation is a bit unclear. What is the \"base sequence length\" exactly? Also, re. the motivation above this paragraph, I'm not sure what \"elements\" really refers to, though I can guess.\n\nWhat is the number of training tokens of the datasets used, PTB and WT2?\n\nCan the authors provide more explanation for what \"neural cache models\" are, and how they relate to \"pointer models\"?\n\nWhy do the sections \"Pointer models\", \"Ablation analysis\", and \"AWD-QRNN\" come after the Experiments section?", "This is a well-written paper that proposes regularization and optimization strategies for word-based language modeling tasks. The authors propose the use of DropConnect on the hidden-hidden connections as a regularization method, in order to take advantage of high-speed LSTM implementations via the cuDNN LSTM libraries from NVIDIA. The focus of this work is on the prevention of overfitting on the recurrent connections of the LSTM. The authors explore a variant of Average-SGD (NT-ASGD) as an optimization strategy which eliminates the need for tuning the average trigger and uses a constant learning rate. Averaging is triggered when the validation loss worsens or stagnates for a few cycles, leading to two new hyper parameters: logging interval and non-monotone interval. Other forms of well-know regularization methods were applied to the non-recurrent connections, input, output and embedding matrices. 
\n\nAs the authors point out, all the methods used in this paper have been proposed before and theoretical convergence explained. The novelty of this work lies in its successful application to the language modeling task achieving state-of-the-art results.\n\nOn the PTB task, the proposed AWD-LSTM achieves a perplexity of 57.3 vs 58.3 (Melis et al 2017) and almost the same perplexity as Melis et el. on the Wiki-Text2 task (65.8 vs 65.9). The addition of a cache model provides significant gains on both tasks. \n\nIt would be useful, if authors had explored the behavior of the AWD-LSTM algorithm with respect to various hyper parameters and provided a few insights towards their choices for other large vocabulary language modeling tasks (1 million vocabulary sizes). \n\nSimilarly, the choice of the average trigger and number of cycles seem arbitrary - it would have been good to see a graph over a range of values, showing their impact on the model's performance.\n\nA 3-layer LSTM has been used for the experiments - how was this choice made? What is the impact of this algorithm if the net was a 2-layer net as is typical in most large-scale LMs?\n\nTable 3 is interesting to see how the cache model helps with rare words and as such has applications in key word spotting tasks. Were the hyper parameters of the cache tuned to perform better on rare words? More details on the design of the cache model would have been useful.\n\nYou state that the gains obtained using the cache model were far less than what was obtained in Graves et al 2016 - what do you attribute this to?\n\nAblation analysis in Table 4 is very useful - in particular it shows how lack of regularization of the recurrent connections can lead to maximum degradation in performance.\n\nMost of the results in this paper have been based on one choice of various model parameters. Given the emperical nature of this work, it would have made the paper even clearer if an analysis of their choices were presented. Overall, it would be beneficial to the MLP community to see this paper accepted in the conference.\n", "\"re. \"ASGD\" for Averaged SGD: ASGD usually stands for Asynchronous SGD, have the authors considered an alternative acronym? AvSGD?\"\n\nAgreed! We have made the suggested change. \n\n\"re. Optimization criterion on page 2, note that SGD is usually taken to minimizing expected loss, not just empirical loss (Bottou thesis 1991).\"\n\nWe agree and have made the suggested change. \n\n\"Is there any theoretical analysis of convergence for Averaged SGD?\"\n\nAveraged SGD has been studied quite extensively; other than standard guarantees of convergence, two theoretical contributions stand out. First, in “Acceleration of stochastic approximation by averaging” Polyak and Juditsky show that, under circumstances, averaged SGD achieves the same asymptotic convergence rate as a second-order stochastic method. Second, in “Stochastic Gradient Descent as Approximate Bayesian Inference”, Mandt et. al. show that averaged SGD reduces the variance of the (noisy) SGD iterates around the minimizer of the loss function. \n\n\"re. paragraph starting with \"To prevent such inefficient data usage, we randomly select the sequence length for the forward and backward pass in two steps\": the explanation is a bit unclear. What is the \"base sequence length\" exactly? Also, re. 
the motivation above this paragraph, I'm not sure what \"elements\" really refers to, though I can guess.\"\n\nWe wanted to keep it generic so used elements rather than words but that may not have been best choice. The main aim is to prevent the model from seeing the data in the exact same batches each time. This is not a problem in many other tasks due to shuffling - but shuffling can’t be done when the data is sequential. The base sequence length for both PTB and WT-2 are 70 tokens though that is a hyperparameter that can be freely modified.\n\n\"What is the number of training tokens of the datasets used, PTB and WT2?\" \n\nPTB has 887k training tokens, WT2 has 2088k training tokens. \n\n\"Can the authors provide more explanation for what \"neural cache models\" are, and how they relate to \"pointer models\"?\"\n\nThey are quite similar; in our setup, the cache models are used atop an existing trained language model. The model uses hidden states from the previous tokens to point and in conjunction with the softmax, determines the next token. On the other hand, pointer models are trained in conjunction with the language model. Both point to previous tokens as a way to determine probability distributions for the next word. \n\n\"Why do the sections \"Pointer models\", \"Ablation analysis\", and \"AWD-QRNN\" come after the Experiments section?\"\n\nWe agree that they seem out of place and have reordered the sections. \n", "\"It would be useful, if authors had explored the behavior of the AWD-LSTM algorithm with respect to various hyper parameters and provided a few insights towards their choices for other large vocabulary language modeling tasks (1 million vocabulary sizes). \"\n\nWe have done preliminary experiments with data sets with large vocabulary sizes (such as WikiText-103 and the Google One Billion Word Corpus). Due to the large softmax costs associated with an increased vocabulary, an adaptive or hierarchical softmax is indispensable. In this case, tying the word vectors and softmax weights is non-trivial. Using a naive tying approach and the AWD-QRNN architecture described in the paper, we were able to train WikiText-103 to state-of-the-art performance and have received favorable initial results for One Billion Word corpus as well. This line of research warrants more work related to scalability and convergence and as such we will be continuing our investigation and analysis. \n\n\"Similarly, the choice of the average trigger and number of cycles seem arbitrary - it would have been good to see a graph over a range of values, showing their impact on the model's performance.\"\n\nWe have carried out a sensitivity experiment and tabulate the results below. In particular, we vary the number of cycles from 2 to 10 and report the testing perplexity for AWD-QRNN on the PTB data set along with the epoch at which the averaging was triggered. The final perplexity is fairly insensitive to precise specification of the cycle length; this observation is true on the other models as well.\n\n+-----------------+--------------------+-----------------+\n|Interval Len.| T (epochs) | Test Perp. 
|\n+-----------------+--------------------+-----------------+\n| 2 | 58 | 58.74 |\n+-----------------+--------------------+-----------------+\n| 3 | 58 | 58.74 |\n+-----------------+--------------------+-----------------+\n| 4 | 68 | 58.47 |\n+-----------------+--------------------+-----------------+\n| 5 | 68 | 58.47 |\n+-----------------+--------------------+-----------------+\n| 6 | 68 | 58.47 |\n+-----------------+--------------------+-----------------+\n| 7 | 71 | 58.42 |\n+-----------------+--------------------+-----------------+\n| 8 | 72 | 58.37 |\n+-----------------+--------------------+-----------------+\n| 9 | 72 | 58.37 |\n+-----------------+--------------------+-----------------+\n\n\n\"A 3-layer LSTM has been used for the experiments - how was this choice made? What is the impact of this algorithm if the net was a 2-layer net as is typical in most large-scale LMs?\"\n\nFor the same number of parameters, we found 3 layered LSTMs to have better performance as compared to the 2-layered ones. This difference was not alleviated by hyperparameter tuning though it was not entirely extensive due to computational resources. \n\n\"Table 3 is interesting to see how the cache model helps with rare words and as such has applications in keyword spotting tasks. Were the hyper parameters of the cache tuned to perform better on rare words? More details on the design of the cache model would have been useful.\"\n\nAnalogous to the best model, the cache model was tuned to provide lower validation perplexity. The resulting efficacy on rare words is hence incidental but not entirely surprising. \n\n\"You state that the gains obtained using the cache model were far less than what was obtained in Graves et al 2016 - what do you attribute this to?\"\n\nWe hypothesize that the reduction is due to the fact that our base language models have improved. Language models typically do well on common words while cache models are useful for rare words and those relating to past context. Graves et. al. does not use tied weights, for example, where tied weights were also shown to benefit rare words. As the language models get better at rarer words, or at using context, the usefulness of cache models diminishes given that there is no avenue left for improvement.\n", "“Speaking of the Neural Cache, a natural baseline would have been dynamic evaluation.”\nWe agree. Incidentally, there is a recent manuscript which applies dynamic evaluation to the AWD-LSTM framework resulting in lower perplexity values. We will refrain from providing additional details about this work to protect the double blind anonymity.\n\n\n“This so called NT-ASGD optimizer switches to averaging mode based on recent validation losses. I would have liked to see a more thorough assessment of NT-ASGD, especially against well tuned SGD.”\n\nWe conducted a few experiments comparing NT-ASGD and SGD for different initial learning rates and tabulate the results below. In the SGD experiments, we use the same non-monotonically triggered criterion but instead of averaging the iterates, use it to reduce the learning rate by 4 (we also tried 2, 5 and 10). \n\n+-----------+------------------+\n| LR | SGD | ASGD|\n+-----------+-------+----------+\n| 30 | 61.52 | 58.47 |\n+-----------+-------+----------+\n| 40 | 61.77 | 59.98 |\n+-----------+-------+----------+\n| 50 | 63.30 | 64.32 |\n+-----------+-------+----------+\n| 60 | 64.86 | 69.78 |\n+-----------+-------+----------+\n" ]
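The responses above describe the non-monotonic trigger and its two hyperparameters (logging interval and non-monotone interval n). A hedged sketch of that trigger logic is given below; it assumes a PyTorch-style model exposing named_parameters() and user-supplied train_one_epoch / validate callables, logs once per epoch rather than every L minibatches, and is an illustration of the described criterion rather than the authors' implementation.

```python
def nt_avsgd(model, train_one_epoch, validate, lr=30.0, n=5, max_epochs=500):
    """SGD at a constant learning rate, switching to iterate averaging once the
    validation loss stops improving (non-monotonic trigger)."""
    logs, averaging, avg, k = [], False, None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, lr)        # plain SGD steps at a fixed lr
        val_loss = validate(model)
        # Trigger: the current validation loss is worse than the best loss
        # recorded more than n logging steps ago, i.e. no recent improvement.
        if not averaging and len(logs) > n and val_loss > min(logs[:-n]):
            averaging = True
            avg = {name: p.detach().clone() for name, p in model.named_parameters()}
            k = 1
        elif averaging:
            k += 1
            for name, p in model.named_parameters():
                avg[name] += (p.detach() - avg[name]) / k   # running average
        logs.append(val_loss)
    if avg is None:
        avg = {name: p.detach().clone() for name, p in model.named_parameters()}
    return avg
```

Once triggered, the returned dictionary is the running average of the SGD iterates, consistent with the constant-learning-rate behaviour described in the responses above.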
[ 7, 7, 7, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1 ]
[ "iclr_2018_SyyGPP0TZ", "iclr_2018_SyyGPP0TZ", "iclr_2018_SyyGPP0TZ", "ByC55Gcxz", "BJlf_TmWG", "HyQVY2Bxz" ]
iclr_2018_H1meywxRW
DCN+: Mixed Objective And Deep Residual Coattention for Question Answering
Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that requires the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state of the art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.
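A simplified sketch of the mixed objective described above, i.e. cross entropy over the gold span plus a self-critical policy-gradient term whose reward is word-overlap F1, is given below in PyTorch. The real model samples spans from its iterative dynamic decoder and weights the two terms differently; the shapes, the fixed ce_weight, and the single-sample estimate here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def span_f1(pred, gold):
    # Word-overlap F1 between two (start, end) token spans.
    p = set(range(pred[0], pred[1] + 1))
    g = set(range(gold[0], gold[1] + 1))
    inter = len(p & g)
    if inter == 0 or len(p) == 0:
        return 0.0
    precision, recall = inter / len(p), inter / len(g)
    return 2 * precision * recall / (precision + recall)

def mixed_loss(start_logits, end_logits, gold, ce_weight=0.5):
    # start_logits, end_logits: (1, doc_len); gold: (gold_start, gold_end).
    gs, ge = gold
    ce = F.cross_entropy(start_logits, torch.tensor([gs])) + \
         F.cross_entropy(end_logits, torch.tensor([ge]))
    # Self-critical term: reward a sampled span by how much its F1 exceeds the
    # F1 of the greedy (argmax) span, and scale the sampled span's log-probability.
    ps, pe = F.softmax(start_logits, dim=-1), F.softmax(end_logits, dim=-1)
    s = torch.multinomial(ps, 1).item()
    e = torch.multinomial(pe, 1).item()
    greedy = (start_logits.argmax().item(), end_logits.argmax().item())
    advantage = span_f1((s, e), gold) - span_f1(greedy, gold)
    log_prob = torch.log(ps[0, s] + 1e-8) + torch.log(pe[0, e] + 1e-8)
    rl = -advantage * log_prob
    return ce_weight * ce + (1.0 - ce_weight) * rl

# Illustrative call with random logits over a 20-token document.
start_logits, end_logits = torch.randn(1, 20), torch.randn(1, 20)
loss = mixed_loss(start_logits, end_logits, gold=(3, 7))
print(float(loss))
```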
accepted-poster-papers
This is an interesting paper that provides modeling improvements over several strong baselines and presents SOTA results on SQuAD. One criticism of the paper is that it evaluates only on SQuAD, which is somewhat of an artificial task, but we think that for publication purposes at ICLR the paper has a reasonable set of components.
train
[ "BJ5BhWlSG", "HyeSGKwgf", "rkf-R8qlz", "rJ9UPyaeG", "SJIhjddQG", "rkUPcK4mG", "HyABctE7G", "SkPEcKVQG", "Bk4rc8Ebf", "HkzhLBVZf", "Sk7GJVNZf", "Skuf674Wf", "S1UPKfXbM", "H1UVzfmZf", "SJtORWmZM", "BJfdokG-M", "rJdarOXyG" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "public", "author", "author", "public", "public", "author" ]
[ "The performance of the model on SQuAD dataset is impressive. In addition to the performance on the test set, we are also interested in the sample complexity of the proposed model. Currently, the SQuAD dataset splits the collection of passages into a training set, a development set, and a test set in a ratio of 80%:10%:10% where the test set is not released. Given the released training and dev set, we are wondering what would happen if we split the data in a different ratio, for example, 50% for training and the rest 50% for dev. We will really appreciate it if the authors can report the model performance (on training/dev respectively) under this scenario. \n", "Summary:\nThis paper proposed an extension of the dynamic coattention network (DCN) with deeper residual layers and self-attention. It also introduced a mixed objective with self-critical policy learning to encourage predictions with high word overlap with the gold answer span. The resulting DCN+ model achieved significant improvement over DCN.\n\nStrengths:\nThe model and the mixed objective is well-motivated and clearly explained.\nNear state-of-the-art performance on SQuAD dataset (according to the SQuAD leaderboard).\n\nOther questions and comments:\nThe ablation shows 0.7 improvement on EM with mixed objective. It is interesting that the mixed objective (which targets F1) also brings improvement on EM. \n", "This paper proposed an improved version of dynamic coattention networks, which is used for question answering tasks. Specifically, there are 2 aspects to improve DCN: one is to use a mixed objective that combines cross entropy with self-critical policy learning, the other one is to imporve DCN with deep residual coattention encoder. The proposed model achieved STOA performance on Stanford Question Asnwering Dataset and several ablation experiments show the effectiveness of these two improvements. Although DCN+ is an improvement of DCN, I think the improvement is not incremental. \n\nOne question is that since the model is compicated, will the authors release the source code to repeat all the experimental results?", "The authors of this paper propose some extensions to the Dynamic Coattention Networks models presented last year at ICLR. First they modify the architecture of the answer selection model by adding an extra coattention layer to improve the capture of dependencies between question and answer descriptions. The other main modification is to train their DCN+ model using both cross entropy loss and F1 score (using RL supervision) in order to reward the system for making partial matching predictions. Empirical evaluations conducted on the SQuAD dataset indicates that this architecture achieves an improvement of at least 3%, both on F1 and exact match accuracy, over other comparable systems. An ablation study clearly shows the contribution of the deep coattention mechanism and mixed objective training on the model performance. \n\nThe paper is well written, ideas are presented clearly and the experiments section provide interesting insights such as the impact of RL on system training or the capability of the model to handle long questions and/or answers. It seems to me that this paper is a significant contribution to the field of question answering systems. \n", "We have performed another ablation study in which we train our DCN+ model without CoVe. This variant obtained 81.4% F1 on the dev set of SQuAD. 
This number also outperforms all other models studied in the paper, which suggests that our proposed changes to the original DCN are significant. For reference, DCN+ with CoVe obtained 83.1% on the dev set, and the test numbers tend to be a bit higher.", "Thanks for your comments. In practice the F1 and EM metrics are closely correlated. We chose to use the F1 score as a metric because it offers fine grain signals as to how well the span predicted matches the ground truth span, whereas the EM score only rewards exact gloss matches.", "Thanks for your comments. We can try to release the source code after the decision process.", "Thank you for your comments!\n", "Just to clarify, I do not say the models cannot do reading comprehension but that models trained on SQuAD are not doing it (which is due to SQuAD not the models), hence the results by Robin et al. The problem is that simple heuristics are good enough to almost solve SQuAD.\n\n(I am not an anonymous reviewer, just a concerned reader) ", "It is not our intention to characterize the other two datasets as bad. In fact, we think highly of the TriviaQA dataset and are investigating potential applications for it. We simply meant that each dataset has its own merits and drawbacks.\n\nFurthermore, it sounds like the anonymous reviewer thinks that the adversarial methods by Jia et al demonstrates the limitations of SQuAD. We disagree. We think that Robin's work rather demonstrates the limitations of state of the art reading comprehension models. In particular, we speculate that similar methods can be applied to the other datasets. Finally, to say that SQuAD models do not do reading comprehension is, in my opinion, unfair, and trivializes genuine hard work by the community.", "The point I was making is that SotA is achieved only through CoVe, otherwise your results would probably be 2-3% lower, which is not that convincing anymore. Improving upon DCN itself by 3% has been achieved by other, simpler models (e.g., Document Reader Chen et al 2017).\n\nIn general, making a model slightly deeper is not novel but a bit of engineering. The mixed objective is a neat little contribution (however, there is the Reinforced Mnemonic Reader which does something similar already, although probably written in parallel). Anyway, you are directly optimizing your evaluation metric (which is fine) that probably learns to give better bounds for your answer selection. The problem with the bounds can be seen by the rather large gap between F1 and Exact match. This gap is less pronounced in TriviaQA for instance where the mixed objective would probably have a much smaller effect. This results in climbing the SQuAD ladder but probably not really improving reading comprehension.", "I disagree with that. SQuAD has been awarded best resource reward (which was fine one year ago), however, the dataset itself consists to a large degree of simple paraphrases and the context size is simply to small to be a good RC dataset. Various recent dataset papers have shown the limitations of SQuAD.\n\nThe fact that well-regarded academic and industry institutions are all working on that has something to do with PR.\n\nFurthermore, works of [1] and [2] show that the dataset is not challenging. Especially [1] shows that models trained on SQuAD are easily fooled. So no reading comprehension.\n\nI think it is really problematic that the community overfits on certain datasets even when there are better alternatives. 
SQuAD is a great proof of concept dataset but not much more than that today.\n\nThe reason nobody is evaluating on the other datasets is that they are more challenging to handle because of their size and context length, not because they are 'bad'.\n\n[1] Robin Jia, and Percy Liang. \"Adversarial examples for evaluating reading comprehension systems.\" EMNLP (2017).\n[2] Dirk Weissenborn, Georg Wiese, Laura Seiffe. \"Making Neural QA as Simple as Possible but not Simpler\". CoNLL 2017\n\n", "Hi,\n\nYou are correct in that we only evaluate on the SQuAD dataset. In short, we agree with your sentiment that it would be interesting to evaluate on other datasets, however we respectfully disagree that SQuAD is known to be a bad dataset. In fact, we feel that it is one of the best datasets for reading comprehension. There seems to be agreement within the community about this in that\n\n1. SQuAD received the best resource paper award at EMNLP\n2. It is highly competitive, drawing significant participation from not only the authors' institution but other well-regarded academic and industry institutions (e.g. AI2, MSR (A), FAIR, CMU, Google, Stanford, Montreal, NYU ...).\n3. It has shown very useful downstream applications and impact (e.g. http://nlp.cs.washington.edu/zeroshot/)\n\nGiven these points, we feel that the performance gains afforded by our proposal (~3% F1) is significant, given that the top models on the leaderboard are within ~1% F1 of each other. We think these techniques are beneficial to the community at large.\n\nOf course, the fact that the other datasets do not (yet) have the above distinctions does not make them less interesting. We think that each dataset has its pros and cons. For example, TriviaQA is labeled via distant supervision. NewsQA frankly has not been very popular (2 leaderboard submissions in ~1 year), and there seems to be concerns regarding its evaluation (see https://openreview.net/forum?id=ry3iBFqgl), though the authors seems to have made some enhancements since then. Given the higher popularity and competitiveness of SQuAD, we felt that it is a better choice on which to compare our proposal with the best models developed by the community.\n\nNevertheless, the two datasets you mentioned are either larger and longer (TriviaQA) or at least longer (NewsQA) than SQuAD. We will attempt to evaluate on one of these two datasets, however given the time constraints we are unlikely to be able to fine tune the model. I will update here with results once we obtain them.", "Hi,\n\nYou are correct in that CoVe does provide a significant performance gain, as demonstrated by McCann et al. (https://arxiv.org/abs/1708.00107). However, CoVe itself, when combined with the original DCN, does not obtain state of the art performance whereas this work does (please see Table 1 of our paper). In addition, we feel that the performance gain provided by deep residual coattention and mixed objective are significant (3.2% dev F1) given the competitive nature of the task. For reference on how significant a 3% F1 gain is, the top 5 state of the art models on the leaderboard are within ~1% dev F1 of each other.\n\nIn addition, CoVe is focused on the encoder of the model, whereas our work focuses on the coattention and the mixed objective. 
Our additions are applicable to other types of encoders as well.\n\nWe decided to perform ablation studies with respect to DCN with CoVe because it seemed like a natural foundation to build upon, but I agree with your sentiment that we should also evaluate our proposed additions without CoVe. We can try to perform this experiment and update here with the results.\n\nThanks!", "Hi, most models this paper compares to are trained with GloVe embeddings but you only show results with CoVe (if I am not mistaken). Given the large boost of CoVe to the original model, it looks like this model is only able to achieve SotA because it uses CoVe and not because of the additional extensions. The mentioned 3% boost to the original model without CoVe would result in a much lower score which would probably **not be SoTA**. \n\nIs this correct?", "I noticed that you only evaluate against SQuAD which is known to be a bad dataset for evaluating machine comprehension. It has only short documents and most of the answers are easily extractable. This is a bit troubling especially given that there are plenty of good and much more complex datasets out there, e.g., TriviaQA, NewsQA, just to mention a few. It feels like we are totally overfitting on a simple dataset. Would it be possible to also provide results on one of those, otherwise it is really hard to judge whether there is indeed any significant improvement. I think this is a big issue.", "In Equation 17 (page 5), we made a typo in that we did not include the regularization terms $$\\log \\sigma_{ce}^2 + \\log \\sigma_{rl}^2$$." ]
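For readers puzzling over the sigma terms mentioned in the last comment, one common way to combine two loss terms with learned log-variances, in which the log sigma^2 terms act as regularizers, is sketched below in PyTorch. Whether this matches the paper's Equation 17 exactly should be checked against the paper itself; this is only an illustration of that style of weighting.

```python
import torch

# Learned log-variances for the two loss terms; exp(-s) * loss + s corresponds
# to loss / sigma^2 + log sigma^2 (up to constants).
log_var_ce = torch.zeros(1, requires_grad=True)
log_var_rl = torch.zeros(1, requires_grad=True)

def combined_loss(ce_loss, rl_loss):
    return (torch.exp(-log_var_ce) * ce_loss
            + torch.exp(-log_var_rl) * rl_loss
            + log_var_ce + log_var_rl)

print(float(combined_loss(torch.tensor(1.3), torch.tensor(0.4))))
```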
[ -1, 7, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "Sk7GJVNZf", "HyeSGKwgf", "rkf-R8qlz", "rJ9UPyaeG", "HkzhLBVZf", "Skuf674Wf", "H1UVzfmZf", "S1UPKfXbM", "BJfdokG-M", "SJtORWmZM", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW" ]
iclr_2018_H196sainb
Word translation without parallel data
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available.
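The supervised building block referred to throughout the discussion below is the orthogonal Procrustes problem: given paired source and target word vectors, the best orthogonal linear map has a closed form. A minimal numpy sketch follows; variable names are illustrative, and the unsupervised pipeline additionally uses adversarial initialisation and CSLS retrieval on top of this step.

```python
import numpy as np

def procrustes(X, Y):
    """Closed-form orthogonal map W minimising ||W X - Y||_F, where the columns
    of X (source) and Y (target) are paired word vectors, both of shape (d, n)."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

# Toy usage: recover a random orthogonal map from 1000 noisy pairs.
d, n = 50, 1000
rng = np.random.RandomState(0)
X = rng.randn(d, n)
Q, _ = np.linalg.qr(rng.randn(d, d))        # ground-truth orthogonal map
Y = Q @ X + 0.01 * rng.randn(d, n)
W = procrustes(X, Y)
print(np.allclose(W, Q, atol=1e-2))         # True, up to the noise level
```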
accepted-poster-papers
There is significant discussion on this paper and high variance between reviewers: one reviewer gave the paper a low score. However, the committee feels that this paper should be accepted at the conference since it provides a better framework for reproducibility and performs more large-scale experiments than prior work. One small issue is the lack of comparison, in terms of empirical results, between this work and Zhang et al.'s work, but the responses provided to both the reviewers and anonymous commenters seem to be satisfactory.
train
[ "SyE3AHgxG", "rJEg3TtxM", "H1Qhqm9ez", "Sy_UZ--4f", "SJfsiaQ7z", "Skw7wkXQG", "B1yRBYGQM", "H1RBrtfmG", "rJ0n4tGQG", "HkFHDhiGz", "rkJTKT9lz", "BJ4hFZ5ez", "SkMy4hKlz", "rJcdzzcCb", "HkD8ivF0-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public" ]
[ "This paper presents a new method for obtaining a bilingual dictionary, without requiring any parallel data between the source and target languages. The method consists of an adversarial approach for aligning two monolingual word embedding spaces, followed by a refinement step using frequent aligned words (according to the adversarial mapping). The approach is evaluated on single word translation, cross-lingual word similarity, and sentence translation retrieval tasks.\n\nThe paper presents an interesting approach which achieves good performance. The work is presented clearly, the approach is well-motivated and related to previous studies, and a thorough evaluation is performed.\n\nMy one concern is that the supervised approach that the paper compares to is limited: it is trained on a small fixed number of anchor points, while the unsupervised method uses significantly more words. I think the paper's comparisons are valid, but the abstract and introduction make very strong claims about outperforming \"state-of-the-art supervised approaches\". I think either a stronger supervised baseline should be included (trained on comparable data as the unsupervised approach), or the language/claims in the paper should be softened. The same holds for statements like \"... our method is a first step ...\", which is very hard to justify. I also do not think it is necessary to over-sell, given the solid work in the paper.\n\nFurther comments, questions and suggestions:\n- It might be useful to add more details of your actual approach in the Abstract, not just what it achieves.\n- Given you use trained word embeddings, it is not a given that the monolingual word embedding spaces would be alignable in a linear way. The actual word embedding method, therefore, has a big influence on performance (as you show). Could you comment on how crucial it would be to train monolingual embedding spaces on similar domains/data with similar co-occurrence statistics, in order for your method to be appropriate?\n- Would it be possible to add weights to the terms in eq. (6), or is this done implicitly?\n- How were the 5k source words for Procrustes supervised baseline selected?\n- Have you considered non-linear mappings, or jointly training the monolingual word embeddings while attempting the linear mapping between embedding spaces?\n- Do you think your approach would benefit from having a few parallel training points?\n\nSome minor grammatical mistakes/typos (nitpicking):\n- \"gives a good performance\" -> \"gives good performance\"\n- \"Recent works\", \"several works\", \"most works\", etc. -> \"recent studies\", \"several studies\", etc.\n- \"i.e, the improvements\" -> \"i.e., the improvements\"\n\nThe paper is well-written, relevant and interesting. I therefore recommend that the paper be accepted.\n\n", "The paper proposes a method to learn bilingual dictionaries without parallel data using an adversarial technique. The task is interesting and relevant, especially for in low-resource language pair settings.\n\nThe paper, however, misses comparison against important work from the literature that is very relevant to their task — decipherment (Ravi, 2013; Nuhn et al., 2012; Ravi & Knight, 2011) and other approaches like CCA. \n\nThe former set of works, while focused on machine translation also learns a translation table in the process. Besides, the authors also claim that their approach is particularly suited for low-resource MT and list this as one of their contributions. 
Previous works have used non-parallel and comparable corpora to learn MT models and for bilingual lexicon induction. The authors seem aware of corpora used in previous works (Tiedemann, 2012) yet provide no comparison against any of these methods. While some of the bilingual lexicon extraction works are cited (Haghighi et al., 2008; Artetxe et al., 2017), they do not demonstrate how their approach performs against these baseline methods. Such a comparison, even on language pairs which share some similarities (e.g., orthography), is warranted to determine the effectiveness of the proposed approach.\n\nThe proposed methodology is not novel, it rehashes existing adversarial techniques instead of other probabilistic models used in earlier works. \n\nFor the translation task, it would be useful to see performance of a supervised MT baseline (many tools available in open-source) that was trained on similar amount of parallel training data (60k pairs) and see the gap in performance with the proposed approach.\n\nThe paper mentions that the approach is “unsupervised”. However, it relies on bootstrapping from word embeddings learned on Wikipedia corpus, which is a comparable corpus even though individual sentences are not aligned across languages. How does the quality degrade if word embeddings had to be learned from scratch or initialized from a different source?", "An unsupervised approach is proposed to build bilingual dictionaries without parallel corpora, by aligning the monolingual word embeddings spaces, i.a. via adversarial learning.\n\nThe paper is very well-written and makes for a rather pleasant read, save for some need for down-toning the claims to novelty as voiced in the comment re: Ravi & Knight (2011) or simply in general: it's a very nice paper, I enjoy reading it *in spite*, and not *because* of the text sales-pitching itself at times.\n\nThere are some gaps in the awareness of the related work in the sub-field of bilingual lexicon induction, e.g. the work by Vulic & Moens (2016).\n\nThe evaluation is for the most part intrinsic, and it would be nice to see the approach applied downstream beyond the simplistic task of English-Esperanto translation: plenty of outlets out there for applying multilingual word embeddings. Would be nice to see at least some instead of the plethora of intrinsic evaluations of limited general interest.\n\nIn my view, to conclude, this is still a very nice paper, so I vote clear accept, in hope to see these minor flaws filtered out in the revision.", "Thank you for the very detailed response, also to the other reviewers' comments: all the questions and concerns were addressed very well.", "We thank you for your comment and we are glad to clarify.\n\nThe methodological difference between what we have proposed and Zhang et al.’s method is not just a better stopping criterion, but more importantly, a better underlying method. Here is the detailed comparison between the two approaches:\n- The very first step which is adversarial training with orthogonality constraint, is similar, see figure 1B in sec. 2.1 of our paper and figure 2 in [Zhang et al 2017a] (except for the use of earth mover distance) and figure 2b in [Zhang et al 2017b], but:\n- the refinement step described in sec. 2.2 and Figure 1C is not present is Zhang et al 2017a/b, nor is\n- the use of CSLS metric addressing the hubness problem, see sec 2.3 and figure 1D.\nIn contrast, we do not use any of the approaches described in Zhang et al. 
2017b shown in their figure 2b and 2c.\nEmpirically, we demonstrate in tab. 1 the importance of both the refinement step and the use of CSLS metric to achieve excellent performance.\n\nIn addition to this key differences between the two approaches, we have also proposed a better stopping criterion as pointed out in your comment. This is actually not just a stopping criterion but a “validation” criterion that quantifies the closeness of the source and target spaces, and that correlates well with the word translation accuracy (see Figure 2). We not only use it as a stopping criterion, but also to select the best models across several experiments, which is something that Zhang et al. cannot do. Moreover, their stopping criterion is based on “sharp drops of the generator loss”, and it did not work in our experiments to select the best models (see Figure 2 of our paper).\n\nIn terms of evaluation protocol, Zhang et al. compare their unsupervised approach with a supervised method trained on 50 or 100 pairs of words only, which is a little odd given that most papers consider 5000 pairs of words (see Mikolov et al., Dinu et al., Faruqui et al., Smith et al., Artetxe et al., etc.). As a result, they have an extremely weak supervised baseline, while our supervised baseline is itself the new state of the art.\n\nFinally, note that we have released our code and we know that other research groups were able to already reproduce our results, and we have also released our ground-truth dictionaries and evaluation pipeline, which will hopefully help the community make further strides in this area (as today we lack a standardized evaluation protocol as pointed out above, and large scale ground truth dictionaries in lots of different language pairs).", "We thank all the reviewers for the feedback and comments. We replied to each of them individually, and uploaded a revised version of the paper. In particular, we:\n- Rephrased one of the claims made in the abstract about unsupervised machine translation\n- Added the requested 45.1% result of our unsupervised approach on the WaCky datasets\n- Fixed some typos\n- Added missing citations", "We thank the reviewer for the feedback and comments.\n\nAs mentioned in the comments, we added to the paper citations to the work of Ravi & Knight (2011) and some subsequent works on decipherment, and down-toned some claims in the paper.\n\nThank you for pointing the paper of Vulic & Moens, we were not aware of this paper and we added a citation in the updated version of the paper. Note however that the work of Vulic & Moens relies on document-aligned corpora while our method does not require any form of alignment.\n\nWe evaluated the cross-lingual embeddings on 4 different tasks: cross-lingual word similarity, word translation, sentence retrieval, and sentence translation. It is true that the quality of these embeddings on other downstream tasks would be interesting and will be investigated in future work.", "We thank the reviewer for the feedback and comments.\n\nThe main concern of the review is about the lack of comparisons with existing works.\n- The reviewer reproaches the lack of comparison against CCA, while the comparison against CCA is provided in Table 2. The reviewer also points out the lack of comparison against Artetxe et al. (2017). 
This comparison is also provided in the paper.\n- We agree that our method could be compared to decipherment techniques, and would have been happy to try the method of Ravi & Knight but there is no open-source version of their code available online (like for Faruqui & Dyer, Dinu et al, Artexte et al, Smith et al). Therefore, considering the large body of literature in that domain, we focused on comparing our approach with the most recent state-of-the-art and supervised approaches, which in our opinion is a fair way to evaluate against reproducible baselines.\n\nThe second reviewer concern is about the performance of the model on non-comparable corpora. We considered that this was redundant with the results on Wikipedia provided in Table 1 and Table 2. As explained in one previous comment, our strategy was to first show that our supervised method (Procrustes-CSLS) is state-of-the-art, and then to compare our unsupervised approach against this new baseline. We added the result of our unsupervised approach (Adv - Refine - CSLS) on non-comparable WaCky corpora in Table 2. In particular, our unsupervised model on the non-comparable WaCky datasets is also state of the art with 45.1% accuracy.\n\nThe reviewer criticises the lack of novelty. To the best of our knowledge, the fact that an adversarial approach obtains state-of-the-art cross-lingual embeddings is new. Most importantly, the contributions of our paper are not limited to the adversarial approach. The CSLS method introduced to mitigate the hubness problem is new, and improves the state-of-the-art by up to 24% on the sentence retrieval task, as well as it improves the supervised baseline. We also introduced an unsupervised criterion that is highly correlated with the cross-lingual embeddings quality, which is also novel as far as we know, and a key element for training.\n\nAl last, please consider that we made our code publicly available and provided high-quality dictionaries for 110 oriented language pairs to help the community, as this type of resources are very difficult to find online.", "We thank the reviewer for the feedback and comments.\n\nIt is true that the supervised approach is limited in the sense that it only considers 5000 pairs of words. However, previous works have shown that using more than 5000 pairs of words does not improve the performance (Artetxe et al. (2017)), and can even be detrimental (see Dinu et al. (2015)). This is why we decided to consider 5000 pairs only, to be consistent with previous works. Also, note that we made our supervised baseline (Procrustes + CSLS) as strong as possible, and it is actually state-of-the-art.\n\nRegarding the claim \"this is a first step towards fully unsupervised machine translation\", what we meant is that the method proposed in the paper could potentially be used in a more complex framework for unsupervised MT at the sentence level. We rephrased this sentence in the updated version of the paper.\n\nWe now address the comments / suggestions of the reviewer:\n\n- The abstract could indeed benefit from details about the model. We will add some.\n- The co-occurrence statistics have indeed an impact on the overall performance of the model. This impact is consistent for both supervised and unsupervised approaches. Indeed, our unsupervised method obtains 66.2% accuracy on the English-Italian pair on the Wikipedia corpora (Table 2), and 45.1% accuracy on the UKWAC / ITWAC non-comparable corpora. 
This result was not in the paper (we thought it was redundant with Table 1), but we added it in Table 2 in the updated version. Figure 3 in the appendix also gives insights about the impact of the similarity of the two domains, by comparing the quality of English-English alignment using embeddings trained on different English corpora.\n- It would indeed possible to add weights in Equation (6). We tried to weight the r_S and r_T terms, but we did not observe a significant improvement compared to the current equation.\n- In the supervised approach, we generated translations for all words from the source language to the target language, and vice-versa (a translation being a pair (x, y) associated with the probability for y of being the correct translation of x). Then, we considered all pairs of words (x, y) such that y has a high probability of being a translation of x, but also that x has a high probability of being a translation of y. Then, we sorted all generated translation pairs by frequency of the source word, and took the 5000 first resulting pairs.\n- We tried to use non-linear mappings (namely a feedforward network with 1 or 2 hidden layers), but in these experiments, the adversarial training was quite unstable, and like in Mikolov et al. (2013), we did not observe better results compared to the linear mapping. Actually, the linear mapping was working significantly better, and since the Procrustes algorithm in the refinement step requires the mapping to be linear, we decided to focus on this type of mapping. Moreover, the linear mapping is convenient because we can impose the orthogonality constraint, which guarantees that the quality of the source monolingual embeddings is preserved after mapping.\n- We did not try to jointly learn the embeddings as well as the mapping, but this is a nice idea and definitely something that needs to be investigated. We think that the joint learning could improve the cross-lingual embeddings, but especially, it could significantly improve the quality of monolingual embeddings on low-resource languages.\n- Our approach would definitely benefit from having a few parallel training points. These points could be used to pretrain the linear mapping for the adversarial training, or even as a validation dataset. This will be the focus of future work.", "While the results in this paper are very nice, this method seems to be almost the same as Zhang et al., and even after reading the comments in the discussion I can't tell what the main methodological differences are. Is it really only the stopping criterion for training? If so, the title \"word translation without parallel data\" seems quite grandiose, and it should probably be something more like \"a better stopping criterion for word translation without parallel data\".\n\nI'm relatively familiar with this field, and if it's even difficult for me to tell the differences between this and highly relevant previous work I'm worried that it will be even more difficult for others to put this research in the appropriate context.", "I find the answer from the authors quite satisfying. In particular, the missing result that was provided in the answer (and should be definitely added to the paper) addresses my main concerns regarding the comparability with previous work. 
While this result is not as spectacular as the others (almost at par with the supervised system, possibly because the comparability of Wikipedia is playing a role as pointed in my previous comment) and I think that some of the claims in the paper should be reworded accordingly, it does convincingly show that the proposed method can achieve SOTA results in a standard dataset without any supervision.\n\nRegarding the work of Zhang et al. (2017b) and Artetxe et al. (2017), I agree on most of the comments on the former, and this new result shows that the proposed method works better than the latter. However, I still think that some of the claims regarding these papers (e.g. Zhang et al. (2017b) \"is significantly below supervised methods\" or Artetxe et al. (2017) does not work for en-ru and en-zh) are unfounded and need to either be supported experimentally or reconsidered.", "The paper by Dinu et al. provides embeddings and dictionaries for the English-Italian language pair. The embeddings they provide have become pretty standard and we found at least 5 previous methods that used this dataset:\nMikolov et al., Faruqui et al., Dinu et al., Smith et al., Artetxe et al.\nThese previous papers provide strong supervised SOTA baselines on the word translation task, and in Table 2 we show results of our supervised method compared to these 5 papers. The row “Procrustes + CSLS” is a supervised baseline, training our method with supervision using exactly the same word embeddings and dictionaries as in Dinu et al. These results show that our supervised baseline works better than all these previous approaches (reaching 44.9% P@1 en-it).\nThe requested unsupervised configuration “Adv - Refine - CSLS” using the same embeddings and dictionary as in Dinu et al. obtains 45.1% on en-it, which is better than our supervised baseline (and SOTA by more than 2%). \n\nHowever, this information is redundant with Table 1, which shows that our unsupervised approach is better than our supervised baseline on European languages. We therefore decided not to incorporate this result, but we will add it back as suggested.\n\nMoreover, using the Wacky datasets (non comparable corpora) to learn embeddings, we improved the SOTA by 11.5% and 26.6% on the sentence retrieval task using our CSLS method, see table 3. Again, these experiments use the very same setting as previously reported in the literature.\n\nMore generally, regarding your comment “they inexplicably use a different set of embeddings, trained in a different corpus”, note that:\n- As noted above, we did compare using the very same embeddings and settings as others.\n- We did study the effect of using different corpora: see fig. 3\n- As shown in the paper, using our method on Wikipedia improves the results by more than 20%\n- Wikipedia is available in most languages, pretrained embeddings were already released and publicly available, we just downloaded them (while the Wacky datasets are only available for a few languages)\n- We found that the monolingual quality of these pretrained embeddings is better than the one obtained on the Wacky datasets\n\nAs opposed to the 5 methods we compare ourselves against in the paper, Zhang et al. (2017):\n1) used different embeddings and dictionaries which they do not provide, \n2) used a lexicon of 50 or 100 word pairs only in their supervised baseline, which is different than standard practice since Mikolov et al. (2013b) (see Dinu et al., Faruqui et al., Smith et al., etc.) 
and which is also what we did, namely considering dictionaries with 5000 pairs. As a result, they compare themselves to a very weak baseline.\n3) in the retrieval task they consider a very simplistic settings, with only a few thousands words, as opposed to large dictionaries of 200k words (as done by Dinu et al., Smith et al. and us). \n4) they. do not provide a validation set and, as shown in Figure 2 in our paper, their stopping criterion does not work well.\nWe did try to run their code, but we have not been successful yet.\n\nAs for comparing against Artetxe et al., as reported in table 2 they obtain a P@1 of 39.7% while we obtain 45.1% using the same Dinu’s embeddings.\n\nFinally, we have released our code, along with our embeddings / dictionaries for reproducibility. We will share the link here as soon as the decisions are out in order to preserve anonymity.", "I think that the paper does not do a good job at comparing the proposed method with previous work.\n\nWhile most of the experiments are run in a custom dataset and do not include results from previous authors, the paper also reports some results in the standard dataset from Dinu et al. (2015) \"to allow for a direct comparison with previous approaches\", which I think that is necessary. However, they inexplicably use a different set of embeddings, trained in a different corpus, for their unsupervised method in these experiments, so their results are not actually comparable with the rest of the systems. While I think that these results are also interesting, as they shows that the training corpus and embedding hyperparameters can make a huge difference, I see no reason not to also report the truly comparable results with the standard embeddings used by previous work. In other words, Table 2 is missing a row for \"Adv - Refine - CSLS\" using the same embeddings as the rest of the systems.\n\nMoreover, I think that the choice of training the embeddings in Wikipedia is somewhat questionable. Wikipedia is a document-aligned comparable corpus, and it seems reasonable that the proposed method could somehow benefit from that, even if it was not originally designed to do so. In other words, while the proposed method is certainly unsupervised in design, I think that it was not tested in truly unsupervised conditions. In fact, there is some previous work that learns cross-lingual word embeddings from Wikipedia by exploiting this document alignment information (http://www.aclweb.org/anthology/P15-1165), which shows that this cross-lingual signal in Wikipedia is actually very strong.\n\nApart from that, I think that the paper is a bit unfair with some previous work. In particular, the proposed adversarial method is very similar to that of Zhang et al. (2017b), and the authors simply state that the performance of the latter \"is significantly below supervised methods\", without any experimental evidence that supports this claim. Considering that the implementation of Zhang et al. (2017b) is public (http://nlp.csai.tsinghua.edu.cn/~zm/UBiLexAT/), the authors could have easily tested it in their experiments and show that the proposed method is indeed better than that of Zhang et al. (2017b), but they don't.\n\nI also think that the authors are a bit unfair in their criticism of Artetxe et al. (2017). While the proposed method has the clear advantage of not requiring any cross-lingual signal, not even the assumption of shared numerals in Artetxe et al. 
(2017), it is not true that the latter is \"just not applicable\" to \"languages that do not share a common alphabet (en-ru and en-zh)\", as both Russian and Chinese, as well as many other languages that do not use a latin alphabet, do use arabic numerals. In relation to that, the statement that \"the method of Artetxe et al. (2017) on our dataset does not work on the word translation task for any of the language pairs, because the digits were filtered out from the datasets used to train the fastText embeddings\" clearly applies to the embeddings they use, and not to the method of Artetxe et al. (2017) itself. Once again, considering that the implementation of Artetxe et al. (2017) is public (https://github.com/artetxem/vecmap), the authors could have easily supported their claims experimentally, but they also fail to do so.", "Thank you for the pointer, we were aware of this work and we will add a citation. Note however that our focus is not to learn to a machine translation system (we just gave a simple example of this application, together with others like sentence retrieval, word similarity, etc.), but to infer a bilingual dictionary without using any labeled data. Unlike Ravi et al. we use monolingual data on both side at training time, and we infer a large bilingual dictionary (200K words). When we say \"this is a first step towards fully unsupervised machine translation\" it does not mean we are the first to look at this problem, we simply meant that our method could be used as a first step in a more complex pipeline. We will rephrase this sentence to avoid confusion.\nIn other words, the two works look at different things: this one is focussed on learning a bilingual dictionary, while the other is focussed on the problem of machine translation.", "Saying that the method is a first step towards fully unsupervised machine translation seems like a bold (if not false) statement. In particular, this has been done before using deciphering:\n\nRavi & Knight, \"Deciphering Foreign Language\", ACL 2011, http://aclweb.org/anthology/P/P11/P11-1002.pdf\n\nThere are plenty of other similar previous work besides this one. I think any claims on MT without parallel corpora should at least mention deciphering as related work." ]
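Since CSLS and the hubness problem come up repeatedly in the exchanges above, here is a small numpy sketch of the CSLS score between mapped source vectors and target vectors. It assumes rows are L2-normalised so that dot products are cosines; the neighbourhood size k and all names are illustrative, and this is not the authors' released code.

```python
import numpy as np

def csls_scores(Wx, Z, k=10):
    """CSLS similarity between mapped source vectors Wx (m x d) and target
    vectors Z (n x d); rows are assumed L2-normalised."""
    sims = Wx @ Z.T                                        # (m, n) cosine similarities
    # r_T: mean similarity of each mapped source word to its k nearest targets.
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)     # (m,)
    # r_S: mean similarity of each target word to its k nearest mapped sources.
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)     # (n,)
    return 2 * sims - r_src[:, None] - r_tgt[None, :]

# Toy usage: translate each of 5 source words into a 7-word target vocabulary.
rng = np.random.RandomState(0)
Wx = rng.randn(5, 16); Wx /= np.linalg.norm(Wx, axis=1, keepdims=True)
Z = rng.randn(7, 16);  Z /= np.linalg.norm(Z, axis=1, keepdims=True)
translations = csls_scores(Wx, Z, k=3).argmax(axis=1)
print(translations)
```

Relative to plain nearest-neighbour retrieval, the two penalty terms discount words that are close to everything (hubs), which is the mitigation of the hubness problem discussed in the responses above.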
[ 9, 3, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H196sainb", "iclr_2018_H196sainb", "iclr_2018_H196sainb", "rJ0n4tGQG", "HkFHDhiGz", "iclr_2018_H196sainb", "H1Qhqm9ez", "rJEg3TtxM", "SyE3AHgxG", "iclr_2018_H196sainb", "BJ4hFZ5ez", "SkMy4hKlz", "iclr_2018_H196sainb", "HkD8ivF0-", "iclr_2018_H196sainb" ]
iclr_2018_HkuGJ3kCb
All-but-the-Top: Simple and Effective Postprocessing for Word Representations
Real-valued word representations have transformed NLP applications; popular examples are word2vec and GloVe, recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a {\em very simple}, and yet counter-intuitive, postprocessing technique -- eliminate the common mean vector and a few top dominating directions from the word vectors -- that renders off-the-shelf representations {\em even stronger}. The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textual similarity and text classification) on multiple datasets and with a variety of representation methods and hyperparameter choices in multiple languages; in each case, the processed representations are consistently better than the original ones.
accepted-poster-papers
This is a good paper with strong results via a set of simple steps for postprocessing off-the-shelf word embeddings. Reviewers are enthusiastic about it and the author responses are satisfactory.
train
[ "HyvB9RKez", "rkmdzXigz", "S1DUyihgz", "B1ab6bXMz", "Hksjh-mfM", "S1d8nWXMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper provides theoretical and empirical motivations for removing the top few principle components of commonly-used word embeddings.\n\nThe paper is well-written and I enjoyed reading it. However, it does not explain how significant this result is beyond that of (Bullinaria and Levy, 2012), who also removed the top N dimensions when benchmarking SVD-factorized word embeddings. From what I can see, this paper provides a more detailed explanation of the phenomenon (\"why\" it works), which is supported with both theoretical results and a series of empirical analyses, as well as \"updating\" the benchmarks and methods from the pre-neural era. Although this contribution is relatively incremental, I find the depth of this work very interesting, and I think future work could perhaps rely on these insights to create better embedding algorithms that directly enforce isotropy.\n\nI have two concerns regarding the empirical section, which may be resolvable fairly quickly:\n1) Are the embedding vectors L2 normalized before using them in each task? This is known to significantly affect performance. I am curious whether removing the top PCs is redundant or not given L2 normalization.\n2) Most of the benchmarks used in this paper are \"toy\" tasks. As Schnabel et al (2015) and Tsvetkov et al (2015) showed, there is often little correlation between success on these benchmarks and improvement of downstream NLP tasks. I would like to measure the change in performance on a major NLP task that heavily relies on pre-trained word embeddings such as SQuAD.\n\nMinor Comments:\n* The last sentence in the first paragraph (\"The success comes from the geometry of the representations...\") is not true; the success stems from the ability to capture lexical similarity. Levy and Goldberg (2014) showed that searching for the closest word vector to (king - man + woman) is equivalent to optimizing a linear combination of 3 similarity terms [+(x,king), -(x,man), +(x, woman)]. This explanation was further demonstrated by Linzen (2016) who showed that even when removing the negative term (x, man), many analogies can still be solved, i.e. by looking for a word that is similar both to \"king\" and to \"woman\". Add to that the fact that the analogy trick works best when the vectors are L2 normalized; if they are all on the unit sphere, what is the geometric interpretation of (king - man + woman), which is not on the unit sphere? I suggest removing this sentence and other references to linguistic regularities from this paper, since they are controversial at best, and distract from the main findings.\n* This is also related to Bullinaria and Levy's (2012) finding that downweighting the eigenvalue matrix in SVD-based methods improves their performance. Levy et al (2015) showed that keeping the original eigenvalues can actually degenerate SVD-based embeddings. Perhaps there is a connection to the findings in this paper?\n", "This paper proposes that sets of word embeddings can be improved by subtracting the common mean vector and reducing the effect of dominant components of variation. \n\nComments:\n\nReference to 'energy' and 'isotropic' in the first paragraph come without any explanation. Can plain terms be used instead to express the same ideas? This would make understanding easier (I have a degree in maths, but never studied physics, and I had to look them up). Otherwise, I like the simple explanation of the method given in the intro. \n\nThe experiments conducted in the paper are comprehensive. 
It is very positive that the improvements appear to be quite consistent across well-known tasks. As well as proposing a simple trick to produce these improvements, the authors aim to provide theoretical insight and (implicitly, at least) pursue a better understanding of semantic word spaces. This has the potential to be a major contribution, as such spaces, and semantic representation in neural nets in general, are poorly understood. However (perhaps because I'm not familiar with Arora et al.) I found the mathematical analysis, e.g. S2.2, dense, without any clearly-stated intuitive motivation or conclusions (as per the introduction section) about what is going on semantically. E.g. it is not clear to me why isotropy is something desirable in a word embedding space. I understand that the discarded components tend to encode frequency, and this is very interesting and somewhat indicative of why the method might work. However, Figure 2 is particularly hard to interpret. The correlations (and the distribution of high-frequency words) seem to be quite different for each of the three models?! \n\nIn general, I don't think the authors should rely on readers having read Arora et al. - anything that builds on that work needs to reintroduce their findings in plain terms in the current paper. \n\nAnother concern is the novelty in relation to related work. I have not read Arora et al. but the authors say that they 'null away the first principal component', and Sahlgren et al centre the mean. Taken together, this seems very similar to what the authors propose here (please clarify). More generally, these sorts of tricks have often been applied by deep learning researchers and passed around anecdotally (e.g. initialise the transition matrix in RNNs with orthonormal noise) as ways to improve training. It is important to share and verify these things, but such a contribution feels more appropriate for a workshop than the main conference. This makes the work that the authors do in interpreting and understanding why these tricks work particularly important. As is, however, I think that the conclusions from this analysis are unclear and opaque. Can they be better communicated, or is it the case that the results of the analysis are in fact inconclusive?\n\nThe vast amount of work included in the appendix is impressive. What particularly caught my eye was appendix B, where the authors try to understand if their method can simply be 'learned' by any network that uses pre-trained word embeddings. This is a really nice experiment, and I think it could easily be part of the main paper (perhaps swapping with the stuff in section 2.2). \n\nThe conclusion would be a good place for summarising the main findings in plain terms, but that doesn't really happen (unless the finding about frequency is the only finding). Instead, there is a vague connection to population genetics and language evolution. This may be an interesting future direction, but the connection is tenuous, so that this reader, at least, was left a little baffled. \n\n[REVISED following response]\n\nThanks for your thorough response which did a good job of addressing most of my concerns. I have changed the score accordingly.\n \n\n", "This paper proposes a simple post-processing technique for word representations designed to improve representational quality and performance on downstream tasks. The procedure involves mean subtraction followed by projecting out the first D principal directions and is motivated by improving isotropy of the partition function. 
Extensive empirical analysis supports the efficacy of the approach.\n\nThe idea of post-processing word embeddings to improve their performance is not new, but I believe the specific procedure and its connection to the concept of isotropy has not been investigated previously. Relative to other post-processing techniques, this method has a fair amount of theoretical justification, particularly as described in Appendix A. I think the experiments are reasonably comprehensive. All told, I think this is a good paper, but I do have some comments and questions that I think should be addressed before publication.\n\n1) I think it is useful to analyze the distribution of singular values of the matrix of word vectors. However, I did not find the heuristic analysis based on the visual appearance of these distributions to be convincing. For example, in Fig. 1, it is not clear to me that there exists a separation between regimes of exponential decay and rough constancy. It would be ideal if a more quantitative metric is established that captures the main qualitative behavior alluded to here.\n\nFurthermore, the vocabulary size is likely to have a strong effect on the shape of the distributions. Are the plots in Fig. 4 for the same vocabulary size? Related to this, the dimensionality of the representation will have a strong effect on the shape, and this should be controlled for in Fig. 8. One way to do this would be to instead plot the density of singular values. Finally, for the Gaussian matrix simulations, in the asymptotic limit, the density of singular values depends only on the ratio of dimensions, i.e. the vector dimension to the vocabulary size. Fig. 4/8 might be more revealing if this ratio were controlled for.\n\n2) It would be useful to describe why isotropy of the partition function is the goal, as opposed to isotropy of the vectors themselves. This may be argued in Arora et al. (2016), but summarizing that argument in this paper would be helpful. In fact, an additional experiment that would be very valuable would be to investigate empirically which form of isotropy is more effective in governing performance. One way to do this would be to enforce approximate isotropy of the partition function without also enforcing isotropy of the vectors themselves. Practically speaking, one might imagine doing this by requiring I = 1 to second order without also requiring that the mean vanish. I think this would allow for \\sigma_max > \\sigma_min while still satisfying I = 1 to second order. (But this is just off the top of my head -- there may be better ways to conduct this experiment).\n\nIt is not clear to me why the experiment leading to Table 2 is a good proxy for the exact computation of I. It would be great if there were some mathematical justification for this approximation.\n\nWhy does Fig. 3 use D=10, 20 when much smaller D are considered elsewhere? Also I think a log scale on the x-axis might be more informative.\n\n3) It would be good to mention other forms of post-processing, especially in the context of word similarity. For example, in the original paper, GloVe advocates averaging the target and context vector representations, and normalizing across the feature dimension before computing cosine similarity.\n\n4) I think it's likely that there is a strong connection between the optimal value of D and the frequency distribution of words in the evaluation dataset. 
While the paper does mention that D may depend on specifics of the dataset, etc., I would expect frequency-dependence to be the main factor, and it might be worth exploring this effect explicitly.\n", "Dear AnonReviewer3,\n\nThanks for your comments!\n\nQ: Connecting to Bullinaria and Levy's (2012) and Levy et al (2015): \nA: Thanks for pointing out these references. The key difference between our approach and the previous work is that we null out the top PCs of the *low-dimensional* word vectors (as the factorization of the cooccurrence matrix) while the previous works downgrade or remove the top PCs of the cooccurrence matrix itself. Specifically in the example of equation (5) and (6) in Levy et al (2015), downgrading the top PCs means downgrading the first few *dimensions* of the word vectors, not downgrading the top *PCs*. \n\nQ: Are the embedding vectors L2 normalized before using them in each task? This is known to significantly affect performance. I am curious whether removing the top PCs is redundant or not given L2 normalization: \nA: We also experimented with L2 normalized vectors — the results are similar, i.e., there is also a consistent improvement on all benchmarks in this paper. We are able to explain this phenomenon as follows: the word vectors almost all have the same norm; example: for GLOVE (average norm: 8.30, std: 1.56). Therefore normalizing them or not barely makes any impact on the directions of top PCs and therefore does not affect the performances of the post processing. However, L2 normalization might affect the SVD-based vectors (for example Levy et al (2015)).\n\nQ: Impact on major NLP tasks:\nA: Showing that the post processing does not only work for the ``toy’’ examples but also in real applications is exactly what we tried to do with the semantic textual similarity in Section 3 and the text classification task in Section 4. We are aware that there are other sophisticated downstream applications involving complicated neural networks — Q&A in SQuAD and even machine translation in WMT. Apart from the fact that we don’t have access to the source code of the top systems presented in the leaderboard, doing these experiments would take huge resources in terms of both time and computational resources. We submit that it’s hard to exhaust all downstream applications in this one submission with our individual effort. Having said this, if the reviewer has a specific downstream application in mind and specific code to suggest, we are happy to experiment with it. \n\nQ: Linguistic regularities.\nA: Thanks for your advice. We have removed this part from our paper.", "Dear AnonReviewer2,\n\nThanks for your comments! \n\nQ: Quantitative metric in Figure 1.\nA: We admit that it's hard to see the separation between two regimes in a log-scale of the x-axis, but we prefer to plot it this way. This is because other than the top 10 components, the rest of the singular values are very small and almost the same as each other (as shown in Figure 4) and these constitute a large portion (~95%) of all singular values. We don't want to spend a large portion of the graph capturing this fact. Additionally, in this scale, it's easy to see the decay of the top singular values (approximately exponentially w.r.t. the index). \n\nFor completeness, we plot the singular values on a linear scale of the index in this anonymized link: https://www.dropbox.com/s/marzc41z2oy6qau/decay.pdf?dl=0 . 
The decay of the eigenvalues is much more obvious in this plot and there is a very clear separation between the two regimes. To complete this discussion, we provide the following table of the normalized singular values (x100) of the first 10 out of 300 components. \n\nGLOVE | 2.12 1.27 1.11 0.95 0.69 0.69 0.64 0.62 0.60 0.56 0.53 0.52 0.51 0.50 0.49 0.49 0.46 0.45 0.44 0.44\nRAND-WALK | 3.29 2.45 1.89 1.50 1.38 1.21 1.12 1.03 0.96 0.89 0.84 0.79 0.77 0.69 0.67 0.64 0.63 0.62 0.60 0.57\nWORD2VEC | 4.17 2.40 2.23 2.15 1.90 1.61 1.51 1.39 1.30 1.23 1.16 1.06 1.00 0.97 0.95 0.91 0.86 0.85 0.78 0.77\n \nQ: The density of singular values depends on the shape of the distribution.\nA: This is a very subtle point and we really appreciate it being brought out. However, this dependence mentioned by the reviewer is moot in this case for the following reason: in random matrix theory, the asymptotic density of singular values depends heavily on the ratio of the size of the matrix, where the asymptotic limit is studied when both dimensions go to infinity (while keeping the ratio unchanged). Such a case does not fit the scenario here: the dimension of word vectors is only 300 which is extremely small with respect to the vocabulary size (~1,000,000). \nIn addition, when we plot the figures, we actually take all words in the vocabulary (where the vocabulary size is presented in Table 1). Although different publicly available embeddings are trained on different corpora, the vocabularies are not very different especially when compared to the size of the word vector dimensions. For instance, GloVe and word2vec have essentially the same vocabulary size (~1,000,000). We don't expect these small differences to significantly affect the density of singular vectors, especially when the vocabulary sizes are so large compared to the dimension of the word vectors. \n\nQ: Why isotropy of the partition function is the goal, as opposed to the isotropy of the vectors themselves.\nA: Mathematically, isotropy is an asymptotic property of a sequence of vectors. For random vectors, the joint density of an isotropic random vector only depends on the norm of its argument (regardless of the angle). For an infinite sequence of deterministic vectors, the empirical density of the vectors again is angularly invariant. \n\nFor a deterministic and finite set of word vectors, there is no one single definition of isotropy. Indeed, one of our contributions is to postulate a ``correct\" definition of isotropy (via the partition function) -- this choice allows us to derive practical word embedding algorithms and we empirically demonstrate the improved quality of the resulting word vectors in our paper. \n\nQ: Why D=10,20 is much larger than the D applied elsewhere\nA: Thanks for pointing this out. This is actually a typo; we had chosen D consistently as before (i.e., D = 2 for GloVe and 3 for word2vec). We have fixed this in the revised version.\n\nQ: Other forms of postprocessing\nA: Thank you for this very fair advice. There are various ways to merge two sets of representations (word vectors and the context vectors); taking the average or concatenating them together to form longer vectors are two of the approaches. Several methods we attempted did not yield any noticeable and consistent improvements; example: one way to achieve isotropy is by \"whitening the spectrum\" which didn't work. 
The same holds for context vectors; if the reviewer has any specific postprocessing baselines that operate solely on word vectors to suggest, we are happy to conduct the corresponding experiments. \n\nThank you.", "Dear AnonReviewer1,\n\nThanks for your comments. \n\nQ: Why isotropy is desirable in a word embedding space.\nA: A priori, there is no scientific reason for isotropy (other than the heuristic idea that isotropy spreads out the word vectors and the larger separation potentially leads to better representations). The isotropic property was introduced in (Arora et al. 2015) as an axiom, but primarily for their theorem on self-normalization to go through. Motivated by this work, we ask whether, by explicitly imposing the isotropy constraint, we can get better empirical performance -- the results in this paper answer this affirmatively. \n\nQ: Interpretation of Figure 2.\nA: Our main point behind Figure 2 is the finding that the top coefficients capture the frequency to some extent -- we couldn't quantify the extent in a precise manner; any suggestions are welcome. Also note that frequency may not be the only aspect captured in the top coefficients. \nRe correlations being different in the three models: this statement is not exactly true, for two reasons. First, the coefficients are equivalent up to a +/- sign; this is because singular vectors by themselves do not capture the directions, and therefore the correlations in CBOW and SKIP-GRAM are quite similar (they are reflectionally symmetric w.r.t. the y-axis). Second, the training algorithms for GloVe (as a factorization of the co-occurrence matrix) and word2vec (directly optimizing parameters in a probabilistic model) are rather different. Even though a lot of works try to tie them together, there is no conclusion that these two methods produce the same output. \n\nQ: Prior work on (Arora et al, 2015):\nA: We have made our manuscript as self-contained as possible (with small additions to enable this in the newly revised version, cf. top of page 4). Indeed, the main part of the paper barely requires the reader to know the results of (Arora et al, 2015). If the reviewer can suggest specific places where we could improve our exposition, that would be very much appreciated. The mathematical connections between our work and RAND-WALK are explored in detail in Appendix A, and this material has been made self-contained. Given size limitations of the conference paper, we chose to emphasize our post-processing algorithm and its empirical performances in the main text and move these mathematical connections to the appendix.\n\nQ: Novelty in relation to the related work.\nA: We first note that our post-processing algorithm was derived independently of the works of (Arora et al. (2017)) and (Sahlgren et al. (2016)) -- time-stamps document this fact but we cannot divulge explicitly due to author anonymity. Second, although there is a superficial similarity between our work and (Arora et al. 2017), the nulling directions we take and the one they take are fundamentally different. Specifically, in Arora et al. (2017), the first dominating vector is *dataset-specific*, i.e., they first compute the sentence representation for the entire STS dataset, then extract the top direction from those sentence representations and finally project the sentence representation away from it. 
By doing so, the top direction will inherently encode the common information across the entire dataset: the top direction for the \"headlines\" dataset may encode common information about news articles while the top direction for \"Twitter'15\" may encode the common information about tweets. In contrast, our dominating vectors are over the entire vocabulary of the language. \n\nWe also updated the related work section in the newly revised version to highlight these differences. \n\nQ: Explanation and interpretation of these tricks.\nA: The \"simple tricks\" that are originally found to work empirically very well have a mathematical basis underlying them on several occasions. Indeed, the \"trick\" mentioned by the reviewer -- initializing the transition matrix in RNNs with an orthonormal matrix -- is the subject of a paper at ICML 2017 (Vorontsov et al. 2017). We have proposed post-processing as a good initialization of word representations in downstream tasks. We submit that our paper makes multi-faceted attempts to explain our initialization procedure: (a) by theoretically analyzing the isotropy property of word representations; (b) empirically demonstrating the boosted performances of word representations consistently in various tasks; (c) a study on the connection between the postprocessing algorithms and the arithmetic operations learnt end-to-end by modern-day deep learning NLP pipelines. \n\nQ: Conclusion.\nThanks for the advice; we appreciate it. We have updated the conclusion and summarized the main point of our paper: the simple post-processing operation should be used for word embeddings in downstream tasks or as initializations for training task-specific embeddings. \n\nThanks!\n" ]
[ 6, 7, 7, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HkuGJ3kCb", "iclr_2018_HkuGJ3kCb", "iclr_2018_HkuGJ3kCb", "HyvB9RKez", "S1DUyihgz", "rkmdzXigz" ]
iclr_2018_B18WgG-CZ
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.
accepted-poster-papers
This paper presents a very cool setup for multi-task learning for learning fixed-length representations for sentences. Although the authors accept the fact that fixed-length representations may not be suitable for complex, long pieces of text (often, sentences), such representations may be useful for several tasks. They use a significantly large-scale setup with six interesting tasks and show that learning generic representations for sentences across tasks is more useful than learning in isolation. Two out of three reviewers presented extensive critiques of the paper and there is thorough back-and-forth between the reviewers and the authors. The committee believes that this paper will add positive value to the conference.
train
[ "H1qLBusxz", "BJOtf_JlG", "BkSxEc5xz", "HJP15O6XG", "By7S_wamz", "r1df_v6XM", "HkW2Lwamz", "SyxDHPTmM", "By43QwTmM", "HJ-F7vamM", "B19f7vp7G", "B1u5zP6XG", "Skog6b2XM", "HyNahb37M", "Bk1qKe3XG", "S1mE8yRxz", "SJlxJ4ixG", "HkHMeSHez", "Syi8hpvCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public" ]
[ "---- updates: ----\n\nI had a ton of comments and concerns, and I think the authors did an admirable job in addressing them. I think the paper represents a solid empirical contribution to this area and is worth publishing in ICLR. \n\n---- original review follows: ----\n\nThis paper is about learning sentence embeddings by combining a bunch of training signals: predicting the next & previous sentences (skip-thought), predicting the sentence's translation, classifying entailment relationships between two sentences, and predicting the constituent parse of a sentence. This is a simple idea that combines a bunch of things from prior work into one framework and yields strong results, outperforming most prior work on most tasks. \n\nI think this paper is impressive in how it scales up training to use so many tasks and such large training sets for each task. That and its strong experimental results make it worthy of publication. It's not very surprising that adding more tasks and data improves performance on average across downstream tasks, but it is nice to see the experimental results in detail. While many people would think of this idea, few would have the resources and expertise necessary to do it justice. I also like how the authors move beyond the standard sentence tasks to evaluate also on the Quora question duplicate task with different amounts of training data and also consider the sentence characteristic / syntactic property tasks. It would be great if the authors could release their pretrained sentence representation model so that other researchers could use it. \n\nI do have some nitpicks here and there with the presentation and exposition, and I am concerned that at times the paper appears to be minimizing its weaknesses, but I think these are things that can be addressed in the next revision. I understand that sometimes it's tempting to minimize one's weaknesses in order to get a paper accepted because the reviewers may not understand the area very well and may get hung up on the wrong things. I understand the area well and so all the feedback I offer below comes from a place of desiring this paper's publication while also desiring it to be as accurate and helpful for the community as possible. \n\nBelow I'll discuss my concerns with the experiments and description of the results.\n\nRegarding the results in Table 2:\n\nThe results in Table 2 seem a little bit unstable, as it is unclear which setting to use for the classification tasks; maybe it depends on the kind of classification being performed. One model seems best for the sentiment tasks (\"+2L +STP\") while other models seem best for SUBJ and MPQA. Adding parsing as a training task hurts performance on the sentence classification tasks while helping performance on the semantic tasks, as the authors note. It is unclear which is the best general model. In particular, when others write papers comparing to the results in this paper, which setting should they compare to? It would be nice if the authors could discuss this. \n\nThe results reported for the CNN-LSTM of Gan et al. do not exactly match those of any single row from Gan et al, either v1 or v2 on arxiv or the published EMNLP version. How were those specific numbers selected? \n\nThe caption of Table 2 states \"All results except ours are taken from Conneau et al. 
(2017).\" However, Conneau et al (neither the latest arxiv version nor the published EMNLP version) does not include many of the results in the table, such as CNN-LSTM and DiscSent mentioned in the following sentence in the caption. Did the authors replicate the results of those methods themselves, or report them from other papers?\n\nWhat does bold and underlining indicate in Table 2? I couldn't find this explained anywhere. \n\nAt the bottom of Table 2, in the section with approaches trained from scratch on these tasks, I'd suggest including the 89.7 SST result of Munkhdalai and Yu (2017) and the 96.1 TREC result of Zhou et al. (2016) (as well as potentially other results from Zhou et al, since they report results on others of these datasets). The reason this is important is because readers may observe that the paper's new method achieves higher accuracies on SST and TREC than all other reported results and mistakenly think that the new method is SOTA on those tasks. I'd also suggest adding the results from Radford et al. (2017) who report 86.9 on MR and 91.4 on CR. For other results on these datasets, including stronger results in non-fixed-dimensional-sentence-embedding transfer settings, see results and references in McCann et al. (2017). While the methods presented in this paper are better than prior work in learning general purpose, fixed-dimensional sentence embeddings, they still do not produce state-of-the-art results on that many of these tasks, if any. I think this is important to note. \n\nFor all tasks for which there is additional training, there's a confound due to the dimensionality of the sentence embeddings across papers. Using higher-dimensional sentence embeddings leads to more parameters in the linear model being trained on the task data. So it is unclear if the increase in hidden units in rows with \"+L\" is improving the results because of providing more weights for the linear model or whether it is learning a better sentence representation. \n\nThe main sentence embedding results are in Table 2, and use the SentEval framework. However, not all tasks are included. The STS Benchmark results are included, which use an additional layer trained on the STS Benchmark training data just like the SICK tasks. But the other STS results, which use cosine similarity on the embedding space directly without any retraining, are only included in the appendix (in Table 7). The new approach does not do very well on those unsupervised tasks. On two years of data it is better than InferSent and on two years it is worse. Both are always worse than the charagram-phrase results of Wieting et al (2016a), which has 66.1 on 2012, 57.2 on 2013, 74.7 on 2014, and 76.1 on 2015. Charagram-phrase trains on automatically-generated paraphrase phrase pairs, but these are generated automatically from parallel text, the same type of resource used in the \"+Fr\" and \"+De\" models proposed in this submission, so I think it should be considered as a comparable model. \n\nThe results in the bottom section of Table 7, reported from Arora et al (2016), were in turn copied from Wieting et al (2016b), so I think it would make sense to also cite Wieting et al (2016b) if those results are to be included. Also, it doesn't seem appropriate to designate those as \"Supervised Approaches\" as they only require parallel text, which is a subset of the resources required by the new model. 
\n\nThere are some other details in the appendix that I find concerning:\n\nSection 8 describes how there is some task-specific tuning of which function to compute on the encoder to produce the sentence representation for the task. This means that part of the improvement over prior work (especially skip-thought and InferSent) is likely due to this additional tuning. So I suppose to use these sentence representations in other tasks, this same kind of tuning would have to be done on a validation set for each task? Doesn't that slightly weaken the point about having \"general purpose\" sentence representations?\n\nSection 9 provides details about how the representations are created for different training settings. I am confused by the language here. For example, the first setting (\"+STN +Fr +De\") is described as \"A concatenation of the representations trained on these tasks with a unidirectional and bidirectional GRU with 1500 hidden units each.\" I'm not able to parse this. I think the authors mean \"The sentence representation h_x is the concatenation of the final hidden vectors from a forward GRU (with 1500-dimensional hidden vectors) and a bidirectional GRU (also with 1500-dimensional hidden vectors)\". Is this correct? \n\nAlso in Sec 9: I found it surprising how each setting that adds a training task uses the concatenation of a representation with that task and one without that task. What is the motivation for doing this? This seems to me to be an important point that should be discussed in Section 3 or 4. And when doing this, are the concatenated representations always trained jointly from scratch with the special task only updating a subset of the parameters, or do you use the fixed pretrained sentence representation from the previous row and just concatenate it with the new one? To be more concrete, if I want to get the encoder for the second setting (\"+STN +Fr +De +NLI\"), do I have to train two times or can I just train once? That is, the train-once setting would correspond to only updating the NLI-specific representation parameters when training on NLI data; on other data, all parameters would be updated. The train-twice setting would first train a representation on \"+STN +Fr +De\", then set it aside, then train a separate representation on \"+STN +Fr +De +NLI\", then finally concatenate the two representations as my sentence representation. Do you use train-once or train-twice? \n\nRegarding the results in Table 3:\n\nWhat do bold and underline indicate?\n\nWhat are the embeddings corresponding to the row labeled \"Multilingual\"?\n\nIn the caption, I can't find footnote 4. \n\nThe caption includes the sentence \"our embeddings have 1040 pairs out of 2034 for which atleast one of the words is OOV, so a comparison with other embeddings isn't fair on RW.\" How were those pairs handled? If they were excluded, then I think the authors should not report results on RW. I suspect that most of the embeddings included in the table also have many OOVs in the RW dataset but still compute results on it using either an unknown word embedding or some baseline similarity of zero for pairs with an OOV. I think the authors should find some way (like one of those mentioned, or some other way) of computing similarity of those pairs with OOVs. It doesn't make much sense to me to omit pairs with OOVs. \n\nThere are much better embeddings on SimLex than the embeddings whose results are reported in the table. Wieting et al. (2016a) report SimLex correlation of 0.706 and Mrkšić et al. 
(2017) report 0.751. I'd suggest adding the results of some stronger embeddings to better contextualize the embeddings obtained by the new method. Some readers may mistakenly think that the embeddings are SOTA on SimLex since no stronger results are provided in the table. \n\n\nThe points below are more minor/specific:\n\nSec. 2:\n\nIn Sec. 2, the paper discusses its focus on fixed-length sentence representations to distinguish itself from other work that produces sentence representations that are not fixed-length. I feel the motivation for this is lacking. Why should we prefer a fixed-length representation of a sentence? For certain downstream applications, it might actually be easier for practitioners to use a representation that provides a representation for each position in a sentence (Melamud et al., 2016; Peters et al., 2017; McCann et al., 2017) rather than an opaque sentence representation. Some might argue that since sentences have different lengths, it would be appropriate for a sentence representation to have a length proportional to the length of the sentence. I would suggest adding some motivation for the focus on fixed-length representations. \n\nSec. 4.1:\n\n\"We take a simpler approach and pick a new task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks\"\nThese two sentences seem contradictory. Maybe in the first sentence \"pick a new task\" should be changed to \"pick a new sequence-to-sequence task\"?\n\nSec. 5.1:\n\ntypo: \"updating the parameters our sentence\" --> \"updating the parameters of our sentence\"\n\nSec. 5.2:\n\ntypo in Table 4 caption: \"and The\" --> \". The\"\n\ntypo: \"parsing improvements performance\" --> \"parsing improves performance\"\n\n\nIn general, there are many missing citations for the tasks, datasets, and prior work on them. I understand that the authors are pasting in numbers from many places and just providing pointers to papers that provide more citation info, but I think this can lead to mis-attribution of methods. I would suggest including citations for all datasets/tasks and methods whose results are being reported. \n\n\nReferences:\n\nMcCann, Bryan, James Bradbury, Caiming Xiong, and Richard Socher. \"Learned in translation: Contextualized word vectors.\" CoRR 2017.\n\nMelamud, Oren, Jacob Goldberger, and Ido Dagan. \"context2vec: Learning Generic Context Embedding with Bidirectional LSTM.\" CoNLL 2016.\n\nMrkšić, Nikola, Ivan Vulić, Diarmuid Ó. Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. \"Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints.\" TACL 2017.\n\nMunkhdalai, Tsendsuren, and Hong Yu. \"Neural semantic encoders.\" EACL 2017. \n\nPagliardini, Matteo, Prakhar Gupta, and Martin Jaggi. \"Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features.\" arXiv preprint arXiv:1703.02507 (2017).\n\nPeters, Matthew E., Waleed Ammar, Chandra Bhagavatula, and Russell Power. \"Semi-supervised sequence tagging with bidirectional language models.\" ACL 2017.\n\nRadford, Alec, Rafal Jozefowicz, and Ilya Sutskever. \"Learning to generate reviews and discovering sentiment.\" arXiv preprint arXiv:1704.01444 2017.\n\nWieting, John, Mohit Bansal, Kevin Gimpel, and Karen Livescu. \"Charagram: Embedding words and sentences via character n-grams.\" EMNLP 2016a.\n\nWieting, John, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 
\"Towards universal paraphrastic sentence embeddings.\" ICLR 2016b.\n\nZhou, Peng, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. \"Text Classification Improved by Integrating Bidirectional LSTM with Two-dimensional Max Pooling.\" COLING 2016.\n", "Follow-Up Comments\n----\n\nI continue to argue that this paper makes a contribution to a major open question, and clearly warrants acceptance. \n\nI agree with R1 that the results do not tell a completely clear story, and that the benefits of pretraining are occasionally minimal or absent. However, R1 uses this as the basis to argue for rejection, which does not seem reasonable to me at all. This limitation is an empirical fact that the paper has done a reasonable job of revealing, and it does not take away the paper's reason for existence, since many of the results are still quite strong, and the trends do support the merit of the proposed approach.\n\nThe authors mostly addressed my main concern, which was the relatively weak ablation. More combinations would be nice, but assuming reasonable resource constraints, I think the authors have done their due diligence, and the paper makes a clear contribution. I disagree with the response, though, that the authors can lean on other papers to help fill in the ablation—every paper in this area uses subtly different configurations.\n\nI have one small lingering concern, which is not big enough to warrant acceptance: R2's point 10 is valid—the use of multiple RNNs trained on different objectives in the ablation experiments unexpected and unusual, and deserves mention in the body of the paper, rather than only in an appendix.\n\n----\nOriginal Review\n---\n\nThis paper explores a variety of tasks for the pretraining of a bidirectional GRU sentence encoder for use in data-poor downstream tasks. The authors find that the combination of supervised training with NLI, MT, and parsing, plus unsupervised training on the SkipThought objective yields a model that robustly outperforms the best prior method on every task included in the standard SentEval suite, and several others.\n\nThis paper isn't especially novel. The main results of the paper stem from a combination of a few ideas that were ripe for combination (SkipThought from Kiros, BiLSTM-max and S/MNLI from Conneau, MT from McCann, parsing following Luong, etc.). However, the problem that the paper addresses is a major open issue within NLP, and the paper is very well done, so it would be in the best interest of all involved to make sure that the results are published promptly. I strongly support acceptance.\n\nMy one major request would be a more complete ablation analysis. It would be valuable for researchers working on other languages (among others) to know which labeled or unlabeled datasets contributed the most. Your ablation does not offer enough evidence to one to infer this---among other things, NLI and MT are never presented in isolation, and parsing is never presented without those two. Minimally, this should involve presenting results for models trained separately on each of the pretraining tasks.\n\nI'll also echo another question from Samuel's comment: Could you say more about how you conducted the evaluation on the SentEval tasks? Did your task-specific model (or the training/tuning procedure for that model) differ much from prior work?\n\nDetails:\n\nThe paragraph starting \"we take a simpler approach\" is a bit confusing. 
If task batches are sampled *uniformly*, how is NLI sampled less often than the other tasks?\n\nGiven how many model runs are presented, and that the results don't uniformly favor your largest/last model, it'd be helpful to include some kind of average of performance across tasks that can be used as a single-number metric for comparison. This also applies to the word representation evaluation table.\n\nWhen comparing word embeddings, it would be helpful to include the 840B-word release of GloVe embeddings. Impressionistically, that is much more widely used than the older 6B-word release for which Faruqui reports numbers. This isn't essential to the paper, but it would make your argument in that section more compelling.\n\n\"glove\" => GloVe; \"fasttext\" => fastText", "This paper shows that learning sentence representations from a diverse set of tasks (skip-thought objective, MT, constituency parsing, and natural language inference) produces better general purpose sentence representations.\nThe main contribution of the paper is to show that learning from multiple tasks improves the quality of the learned representations.\nExperiments on various text classification and sentiment analysis datasets show that the proposed method is competitive with existing approaches.\nThere is an impressive number of experiments presented in the paper, but the results are a bit mixed, and it is not always clear that adding more tasks helps.\n\nI think this paper addresses an important problem of learning general purpose sentence representations. \nHowever, I am unable to draw a definitive conclusion from the paper. \nFrom Table 2, the best performing model is not always the one with more tasks. \nFor example, adding a parsing objective can either improve or lower the performance quite significantly.\nCould it be that datasets such as MRPC, SICK, and STSB require more understanding of syntax?\nEven if this is the case, why does adding this objective hurt performance for other datasets?\nImportantly, it is also not clear whether the performance improvement comes from having more unlabeled data (even if it is trained with the same training objective) or from having multiple training objectives.\nAnother question I have is whether there is any specific reason that language modeling is not included as one of the training objectives to learn sentence representations, given that it seems to be the easiest one to collect training data for.\n\nThe results for transfer learning and low resource settings are more positive.\nHowever, it is not surprising that pretraining parts of the model on a large amount of unlabeled data helps when there are not a lot of labeled examples.\nOverall, while the main contribution of the paper is that having multiple training objectives helps learn better sentence representations, I am not yet convinced by the experiments that this is indeed the case.", "We would like to thank our reviewers and the community in general for their constructive comments and feedback. 
We have made a few revisions to our paper addressing a few of the concerns raised:\n\n- To quantify transfer performance with a single number, we use the mean difference of our model from Infersent (AllNLI) in Table 2 across all 10 tasks, this number clear illustrates the benefits of adding more tasks on the quality of the learned representations.\n- We've added three more rows in Table 3 corresponding to the glove840B, charagram and attract-repel embeddings.\n- We've added two more rows in Appendix Table 7 to compare sentence embeddings of different sizes in a non-parametric way.\n- We've added a new row to Table 2 (+STN) that corresponds to our implementation of skipthoughts that predicts only the next sentence given the current.\n- We've added two more competitive supervised baseline approaches (Neural Semantic Encoder & BLSTM-2DCNN) trained from scratch in Table 2. This is to give readers an idea of the gaps that still exist between transfer approaches and those that learn from scratch.", "(4) \"Given how many model runs are presented, and that the results don't uniformly favor your largest/last model, it'd be helpful to include some kind of average of performance across tasks that can be used as a single-number metric for comparison. This also applies to the word representation evaluation table.\"\n\n(Response 4) Thank you for suggesting this. To quantify transfer performance with a single number, we use the mean difference of our models from Infersent (AllNLI) in Table 2 across all 10 tasks (*). The results are : 0.01|0.99|1.33|1.39|1.47|1.74. As evident from the results, adding more tasks certainly seems to help on average, even in the parsing scenario (1.39 vs 1.47 for a model of same architecture and capacity); however, adding more capacity (+2L) seems to have a greater impact. \n\nHowever, since the tasks presented in Table 2 are reasonably diverse, one might build representations that are suited to small subsets of these tasks but not others (see for example Radford et al (2017), who achieve very impressive results on sentiment classification tasks such as MR/CR/SST, etc.). Having a single evaluation score across all tasks might not be meaningful in such cases.\n\n* For MRPC and STSB we consider only the F1 score and Spearman scores respectively and we also multiply the SICK-R scores by 100 to map all differences to the same scale.\n\n(5) \"When comparing word embeddings, it would be helpful to include the 840B-word release of GloVe embeddings. Impressionistically, that is much more widely used than the older 6B-word release for which Faruqui reports numbers. This isn't essential to the paper, but it would make your argument in that section more compelling.\"\n\n(Response 5) Thank you for the suggestion. The results for the glove 840B vectors are: |0.34|0.41|0.71|0.46|0.57|0.71|0.76|0.8|. Since this release contains case-sensitive vectors, we only pick vectors for the lower-cased words. These results have been included in the revised manuscript.", "Hi,\n\nThank you for your reviews and positive feedback. We’ve drafted responses to your concerns below.\n\n(1) \"My one major request would be a more complete ablation analysis. It would be valuable for researchers working on other languages (among others) to know which labeled or unlabeled datasets contributed the most. Your ablation does not offer enough evidence to one to infer this---among other things, NLI and MT are never presented in isolation, and parsing is never presented without those two. 
Minimally, this should involve presenting results for models trained separately on each of the pretraining tasks.\"\n\n(Response 1) Regarding your point and in response to Sam Bowman’s comment we ran a model with just skipthoughts whose results have been added to Table 2 of the revised manuscript. Experiments for other tasks in isolation have already been presented in prior work (ex: NLI - Conneau et al (2017), MT - Hill et al (2015), Skipthoughts - Kiros et al (2015)), the results from those experimental configurations are shown in Table 2. Analysing the results of prior work in Table 2 in the context of our experiments we can examine the impact of different tasks on the quality of the learned representations. For example the results of Conneau et al (2017) that use NLI outperform Skipthoughts (Kiros et al 2015) which in turn outperforms NMT En-Fr (Hill et al 2015). But as we note in our experiment (+STN +Fr +De), it is possible to combine the benefits of skipthoughts and NMT to yield comparable performance to NLI (delta of 0.01 from Infersent). Adding more tasks and increasing model capacity (to match Conneau et al) from this point appears to produce reasonable improvements. An even more thorough analysis of the impact of each task on the learned representations is definitely something we intend to focus on in future work.\n\n(2) \"I'll also echo another question from Samuel's comment: Could you say more about how you conducted the evaluation on the SentEval tasks? Did your task-specific model (or the training/tuning procedure for that model) differ much from prior work?\"\n\n(Response 2) We used the SentEval evaluation suite, as is, with a single change from the default 5-fold cross validation to 10-fold cross validation. This was to match the same setting used by Conneau et al (2017), following instructions on their GitHub page “kfold (int): k in the kfold-validation. Set to 10 to be comparable to published results (default: 5)”.\n\nWe also tuned the way in which the fixed-length sentence representation is computed given the hidden states corresponding to each word, by either max-pooling all the hidden states or selecting the last hidden state.\n\n(3) \"The paragraph starting \"we take a simpler approach\" is a bit confusing. If task batches are sampled *uniformly*, how is NLI be sampled less often than the other tasks?\"\n\n(Response 3) All tasks except NLI are sampled uniformly. We intersperse one NLI minibatch after 10 updates on other tasks as described. We have changed the description in the revised manuscript to indicate that we sample a new sequence-to-sequence task uniformly, making it clear that NLI is sampled less frequently.", "Hi,\n\nThank you for your reviews. We appreciate the feedback and constructive criticism. We’ve drafted some responses to your comments below.\n\n1) \"I think this paper addresses an important problem of learning general purpose sentence representations. \nHowever, I am unable to draw a definitive conclusion from the paper. \nFrom Table 2, the best performing model is not always the one with more tasks. 
\nFor example, adding a parsing objective can either improve or lower the performance quite significantly.\nCould it be that datasets such as MRPC, SICK, and STSB require more understanding of syntax?\nEven if this is the case, why adding this objective hurt performance for other datasets?\"\n\n(Response 1) In response to your concern about performance not strictly increasing with the addition of more tasks, we believe that there is no free lunch when it comes to which of these models to use. As we argue in the motivation of the paper, there may not always be a single (or multiple) training objective that results in improvements across all possible transfer learning benchmarks. Different training objectives will result in different learned inductive biases which in turn will affect results on different transfer learning tasks. In future work, we hope to uncover some of these inductive biases in greater detail which we can use to understand the relationships between the tasks used to train our model and its performance on a particular transfer task.\n\nHowever, a closer look at the improvements obtained with the addition of more tasks in Table 2 reveals that adding more tasks does indeed help on average (across all 10 tasks). To quantify transfer performance with a single number, we use the mean difference of our models from Infersent (AllNLI) in Table 2 across all 10 tasks (*). The results are : 0.01|0.99|1.33|1.39|1.47|1.74. As evident from the results, adding more tasks certainly seems to help on average, even in the parsing scenario (1.39 vs 1.47 for a model of same architecture and capacity) however adding more capacity (+2L) seems to have a bigger impact. \n\nRegarding your concerns about not seeing improvements across all tasks when adding a new task, this could be because although we add more tasks, we do not increase model capacity. Increased capacity may be required to learn all tasks effectively, or it may be that the inductive biases learned with the addition of new tasks are not useful for some subset of tasks (as you point out regarding the “understanding of syntax” in MRPC, SICK and STSB).\n\n* For MRPC and STSB we consider only the F1 score and Spearman scores respectively and we also multiply the SICK-R scores by 100 to map all differences to the same scale.\n\n(2) \"Importantly, it is also not clear whether the performance improvement comes from having more unlabeled data (even if it is trained with the same training objective) or having multiple training objectives.\nAnother question I have is that if there is any specific reason that language modeling is not included as one of the training objectives to learn sentence representations, given that it seems to be the easiest one to collect training data for.\"\n\n(Response 2) Since all models were run for a total of 7 days, those with the same capacity see the same number of training examples and also undergo the same number of parameter updates. Adding more tasks increases the diversity of data the model observes. The skip-thoughts experiment compared to the skip-thoughts + translation result illustrates this point particularly well, showing increased performance from adding the translation task which serves as a substitute for the alternative single task setup with additional skip-thought training data. 
Statistically speaking over even more experiments, as discussed above we observe a general trend that with a fixed training example budget the addition of new tasks generally improves performance.\n\nWe did not train our models with an (unconditional) language modeling objective because we believe that this does not lend itself easily to learning fixed length sentence representations. For example, the hidden state corresponding to the last word in a sentence is very unlikely to capture the entire history of the sentence. Also, a teacher-forced language modeling objective also emphasizes one-step-ahead prediction which we felt was not very well suited to learning representations that capture aspects of the sentence as a whole.", "Hi,\n\nThanks for the question! We did not observe strict improvements on our transfer tasks when training for longer, nor were perplexities on our validation sets strongly correlated with transfer performance.\n\nFor example, when running our +STN +Fr +De +NLI +L +STP +Par model for 5 more days, we observed some (but not substantial) difference in transfer performance. To quantify transfer performance with a single number, we use the mean difference of our model from Infersent (AllNLI) in Table 2 across all 10 tasks (*). Our results on this metric for additional days of computation, starting at day (7) were: |(7) 1.47|(8) 1.21|(9) 1.36|(10) 1.52|(11) 1.53|(12) 1.37| with a mean of 1.41 and variance of 0.014. \n\nTo determine if these changes were statistically significant, we generated two sets of scores, the first being day (x - Infersent) and the other being (day_(x + 1) - Infersent) (across all 10 tasks, over the additional 5 days) and then used a pairwise t-test between these two sets of scores. We found the p-values to be statistically insignificant from days 8 to 12. \n\n* For MRPC and STSB we consider only the F1 score and Spearman scores respectively and we also multiply the SICK-R scores by 100 to map all differences to the same scale.", "(12) “The caption includes the sentence \"our embeddings have 1040 pairs out of 2034 for which atleast one of the words is OOV, so a comparison with other embeddings isn't fair on RW.\" How were those pairs handled? If they were excluded, then I think the authors should not report results on RW. I suspect that most of the embeddings included in the table also have many OOVs in the RW dataset but still compute results on it using either an unknown word embedding or some baseline similarity of zero for pairs with an OOV. I think the authors should find some way (like one of those mentioned, or some other way) of computing similarity of those pairs with OOVs. It doesn't make much sense to me to omit pairs with OOVs.”\n\n(Response 12) The evaluation suite provided by Faruqui & Dyer (wordvectors.org), which we use, simply excludes words that are OOV. We have removed the RW column in the revised manuscript.\n(13) “There are much better embeddings on SimLex than the embeddings whose results are reported in the table. Wieting et al. (2016a) report SimLex correlation of 0.706 and Mrkšić et al. (2017) report 0.751. I'd suggest adding the results of some stronger embeddings to better contextualize the embeddings obtained by the new method. Some readers may mistakenly think that the embeddings are SOTA on SimLex since no stronger results are provided in the table.”\n\n(Response 13) Thank you for the pointers! We’ve added results from Weiting et al. (2016a) and Mrkšić et al. 
(2017) to Table 3 in the revised manuscript.\n\n(14) “In Sec. 2, the paper discusses its focus on fixed-length sentence representations to distinguish itself from other work that produces sentence representations that are not fixed-length. I feel the motivation for this is lacking. Why should we prefer a fixed-length representation of a sentence? For certain downstream applications, it might actually be easier for practitioners to use a representation that provides a representation for each position in a sentence (Melamud et al., 2016; Peters et al., 2017; McCann et al., 2017) rather than an opaque sentence representation. Some might argue that since sentences have different lengths, it would be appropriate for a sentence representation to have a length proportional to the length of the sentence. I would suggest adding some motivation for the focus on fixed-length representations.” \n\n(Response 14) A thorough analysis of the pros and cons of fixed-length versus variable length sentence representations is something we hope to explore in the future. Some of the advantages of fixed-length representations include (a) being able to compute a straightforward non-parametric similarity score between two sentences (such as in the STS* tasks) (b) easy to use simple task-specific classifiers on top of the features extracted by our RNN (in contrast to work by McCann et al (2017) that uses a heavily parameterized task-specific classifier) (c) easy to manipulate aspects of a sentence with gradient based optimization for controllable generation such as in Mueller et al (2017).\n\nAlso, similar to McCann et al (2017), it is possible to represent a sentence using the all of the RNN’s hidden states instead of just the last as in this work. In the classic, attention free encoder-decoder paradigm the architecture encourages all the necessary information for subsequent tasks from a variable length input to be captured in a fixed length vector. Concatenation of the intermediate hidden unit representations is possible but would likely contain redundant information with respect to the final code layer. However, in future work, we would like to compare our approach more directly with McCann et al (2017) using their proposed Bi-attentive classification network with all of the hidden states of our general purpose multi-task GRU instead of just the last. \n\n(15) \"We take a simpler approach and pick a new task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks\"\nThese two sentences seem contradictory. Maybe in the first sentence \"pick a new task\" should be changed to \"pick a new sequence-to-sequence task\"?\n\n(Response 15) Thank you for the above suggestion and noticing typos, we’ve made the edits you suggested to clarify things.\n\nReferences\n\nMueller, Jonas, David Gifford, and Tommi Jaakkola. \"Sequence to better sequence: continuous revision of combinatorial structures.\" International Conference on Machine Learning. 2017.", "(7) “The results in the bottom section of Table 7, reported from Arora et al (2016), were in turn copied from Wieting et al (2016b), so I think it would make sense to also cite Wieting et al (2016b) if those results are to be included. Also, it doesn't seem appropriate to designate those as \"Supervised Approaches\" as they only require parallel text, which is a subset of the resources required by the new model. “\n\n(Response 7) Thank you for pointing this out. 
We will certainly cite Weiting et al (2016b). We designated those models as “Supervised Approaches” since they were marked as such in Arora et al (2016).\n\n(8) “Section 8 describes how there is some task-specific tuning of which function to compute on the encoder to produce the sentence representation for the task. This means that part of the improvement over prior work (especially skip-thought and InferSent) is likely due to this additional tuning. So I suppose to use these sentence representations in other tasks, this same kind of tuning would have to be done on a validation set for each task? Doesn't that slightly weaken the point about having \"general purpose\" sentence representations?”\n\n(Response 8) We only tune the way in which we compute the fixed length sentence representations given all of the GRU’s hidden states (i.e., by max-pooling or picking the last hidden state). The GRU itself is still “general purpose”. Moreover, there appears to be an clear trend to infer which tasks benefit from max-pooling (sentiment related) and which benefit from using the last hidden state (others).\n\n(9) “Section 9 provides details about how the representations are created for different training settings. I am confused by the language here. For example, the first setting (\"+STN +Fr +De\") is described as \"A concatenation of the representations trained on these tasks with a unidirectional and bidirectional GRU with 1500 hidden units each.\" I'm not able to parse this. I think the authors mean \"The sentence representation h_x is the concatenation of the final hidden vectors from a forward GRU (with 1500-dimensional hidden vectors) and a bidirectional GRU (also with 1500-dimensional hidden vectors)\". Is this correct?”\n\n(Response 9) Yes, this is correct. We’ve clarified this in our revised manuscript according to your suggestion.\n\n(10) “Also in Sec 9: I found it surprising how each setting that adds a training task uses the concatenation of a representation with that task and one without that task. What is the motivation for doing this? This seems to me to be an important point that should be discussed in Section 3 or 4. And when doing this, are the concatenated representations always trained jointly from scratch with the special task only updating a subset of the parameters, or do you use the fixed pretrained sentence representation from the previous row and just concatenate it with the new one? To be more concrete, if I want to get the encoder for the second setting (\"+STN +Fr +De +NLI\"), do I have to train two times or can I just train once? That is, the train-once setting would correspond to only updating the NLI-specific representation parameters when training on NLI data; on other data, all parameters would be updated. The train-twice setting would first train a representation on \"+STN +Fr +De\", then set it aside, then train a separate representation on \"+STN +Fr +De +NLI\", then finally concatenate the two representations as my sentence representation. Do you use train-once or train-twice? “\n\n(Response 10) Our approach corresponds to your description of “train-twice”. We adopted the concatenation strategy described in this paper since it requires training only 1 model per set of training objectives (except for the initial set of objectives: +STN +Fr +De).\n\n(11) “Regarding the results in Table 3: What do bold and underline indicate? 
What are the embeddings corresponding to the row labeled \"Multilingual\"?”\n\n(Response 11) Bold and underline have the same semantics as in Table 2. The Multilingual embeddings correspond to Faruqui & Dyer’s (2014b) “Improving Vector Space Word Representations Using Multilingual Correlation” (we missed citing this paper, thanks for pointing this out).", "(4) “At the bottom of Table 2, in the section with approaches trained from scratch on these tasks, I'd suggest including the 89.7 SST result of Munkhdalai and Yu (2017) and the 96.1 TREC result of Zhou et al. (2016) (as well as potentially other results from Zhou et al, since they report results on others of these datasets). The reason this is important is because readers may observe that the paper's new method achieves higher accuracies on SST and TREC than all other reported results and mistakenly think that the new method is SOTA on those tasks. I'd also suggest adding the results from Radford et al. (2017) who report 86.9 on MR and 91.4 on CR. For other results on these datasets, including stronger results in non-fixed-dimensional-sentence-embedding transfer settings, see results and references in McCann et al. (2017). While the methods presented in this paper are better than prior work in learning general purpose, fixed-dimensional sentence embeddings, they still do not produce state-of-the-art results on that many of these tasks, if any. I think this is important to note.”\n\n(Response 4) These are great suggestions, we will certainly add more prior work in the “Approaches trained from scratch on these tasks” section of Table 2. Aside from clearing up any confusion about our model being state-of-the-art on these tasks, this will also give readers a sense of the gap between transfer learning approaches and purely supervised, task-specifc models. We’ve edited our manuscript to add results from Munkhdalai and Yu (2017), Zhou et al. (2016) and Radford et al. (2017). Regarding comparisons with McCann et al. (2017), it is unclear where it fits. It is a transfer approach but uses a heavily parameterized neural network as it’s task specific classifier in contrast to this and previous work that has used linear classifiers.\n\n(5) “For all tasks for which there is additional training, there's a confound due to the dimensionality of the sentence embeddings across papers. Using higher-dimensional sentence embeddings leads to more parameters in the linear model being trained on the task data. So it is unclear if the increase in hidden units in rows with \"+L\" is improving the results because of providing more weights for the linear model or whether it is learning a better sentence representation. “\n\n(Response 5) We observed improvements when increasing the number of hidden units even on tasks that are not evaluated using parametric models, such as in the STS* tasks (Appendix Table 7). We considered two models trained on the same subset of tasks but with different GRU hidden state sizes (+STN +Fr +De +NLI vs +STN +Fr +De +NLI +L). The following were our results on the STS12/13/14/15/16 benchmarks (Appendix Table 7) in the format |small vs large|. |60.8 vs 61.2|53.5 vs 53.4|64 vs 65.3|73.4 vs 74.6|64.9 vs 66.2|. These results have been added to the paper.\n\n(6) “The main sentence embedding results are in Table 2, and use the SentEval framework. However, not all tasks are included. The STS Benchmark results are included, which use an additional layer trained on the STS Benchmark training data just like the SICK tasks. 
But the other STS results, which use cosine similarity on the embedding space directly without any retraining, are only included in the appendix (in Table 7). The new approach does not do very well on those unsupervised tasks. On two years of data it is better than InferSent and on two years it is worse. Both are always worse than the charagram-phrase results of Wieting et al (2016a), which has 66.1 on 2012, 57.2 on 2013, 74.7 on 2014, and 76.1 on 2015. Charagram-phrase trains on automatically-generated paraphrase phrase pairs, but these are generated automatically from parallel text, the same type of resource used in the \"+Fr\" and \"+De\" models proposed in this submission, so I think it should be considered as a comparable model. “\n\n(Response 6) We believe that our model doesn’t do as well on the STS benchmarks since our sentence representations capture certain features about a sentence, like sentence length, word order, and others, which aren’t very informative for tasks like semantic similarity. Even with a simple parametric model such as logistic regression in Table 2, the model can learn to down-weight such features and up-weight the ones that are actually useful to determine semantic similarity (STSB). With the STS evaluations that use cosine similarities, sentence vectors are expected to be similar in “every aspect”, which may not be a realistic expectation.We’ve also added charagram to Table 7 in our revised manuscript.", "Hi,\n\nThank you for your thorough and in-depth review. We appreciate the feedback and constructive criticism. We’ve addressed some your concerns below.\n\n(1) “The results in Table 2 seem a little bit unstable, as it is unclear which setting to use for the classification tasks; maybe it depends on the kind of classification being performed. One model seems best for the sentiment tasks (\"+2L +STP\") while other models seem best for SUBJ and MPQA. Adding parsing as a training task hurts performance on the sentence classification tasks while helping performance on the semantic tasks, as the authors note. It is unclear which is the best general model. In particular, when others write papers comparing to the results in this paper, which setting should they compare to? It would be nice if the authors could discuss this.”\n\n(Response 1) Indeed, there is no free lunch when it comes to which of these models to pick. As we argue in the motivation of the paper, there may not always be a single (or multiple) training objective that results in improvements across all possible transfer learning benchmarks. To understand how the addition of tasks impacts performance quantitatively, below we report the average improvement across all 10 tasks from Table 2 over Infersent (AllNLI) for each row (for our models). The results are : 0.01|0.99|1.33|1.39|1.47|1.74 (*), where we have reversed the order of last two experiments to make this point clearer here and in the revised manuscript. The 1.74 score results from a model with more capacity. When capacity remains constant (as in the other experiments) we see from these results that adding more tasks helps on average, even for parsing (ex. 1.39 vs 1.47 for a model of same architecture). However, adding more capacity in the +2L experiment also helps. While measuring the average improvement over Infersent on all SentEval tasks makes sense in this work, it may not necessarily be a good metric for other approaches, such as the sentiment neuron work of Radford et al. 
that learns to encode some aspects of a sentence better than others.\n\n* For MRPC and STSB we consider only the F1 score and Spearman scores respectively and we also multiply the SICK-R scores by 100 to have all differences in the same scale. \n\n(2) “The results reported for the CNN-LSTM of Gan et al. do not exactly match those of any single row from Gan et al, either v1 or v2 on arxiv or the published EMNLP version. How were those specific numbers selected?” \n\n(Response 2) Thank you for pointing this out. The results reported for the CNN-LSTM model of Gan et al. correspond to the best of their combine+emb and combine models in their v1 arXiv submission. We made a small mistake when reporting their results for MPQA, which should be 89.1 instead of 89.0. We have updated this row with results from their latest arXiv revision.\n\n(3) “What does bold and underlining indicate in Table 2? I couldn't find this explained anywhere.”\n\n(Response 3) We used the same underline and bold semantics as in Conneau et al (2017). Bold numbers indicate the best performing transfer model on a given task. Underlines are used for each task to indicate both our best performing model as well as the best performing transfer model that isn't ours. We’ve clarified this in our revised manuscript.", "Just wanted to point out that the above comment was made by the authors but doesn't show up as such since we commented from an alternate openreview ID that wasn't associated with this paper.", "Just wanted to point out that the above comment was made by the authors but doesn't show up as such since we commented from an alternate openreview ID that wasn't associated with this paper.", "Hi,\n\nThank you for your questions,\n\nIn Tables 2, 4, Appendix 6 & Appendix 7 we use the concatenation of the representations produced by two separately trained multi-task GRUs with the strategy described in Appendix section 9. In Table 5, the sentence representations were produced by a single multi-task GRU instead of adopting the concatenation strategy from Appendix section 9, since we wanted to isolate the impact of adding new tasks on our sentence representations. Table 3 evaluates only word embeddings, and not sentence representations, using a single multi-task model. The size of the sentence embeddings of \"+STN +Fr +De +NLI\" is 3,000; more generally, our models that do not have a +L in Tables 2, 4, Appendix 6 & Appendix 7 are of size 3,000 and the ones that do have a +L or +2L are of size 4,096. We did not add the representation dimensions in Table 2 since we ran out of space (horizontally), but for comparison, the dimensions of Infersent and Skipthoughts are 4096, 4800 respectively.\n", "In the Appendix, it is said \"In tables 3 and 5 we do not concatenate the representations of multiple models.\", which is a bit confusing. Are the embeddings a concatenation of separate encoders for the results in Table 2, and from a shared encoder in table 3 and 5?\nIt would be nice to include the size of the embeddings in Table 2 for a clearer and fair comparison to other methods: in particular, what is the size of the sentence embeddings of \"+STN +Fr +De +NLI\" and variants?\n", "the models in this paper were trained for some arbitrary duration of 7 days. how was this duration selected, and how stable are the reported results w.r.t. different # of training days? 
if the training was terminated after 5 days (or 8 days or whatever that is not 7 days), would the results stay as they are reported in the submission?", "Hi,\n\nThank you for your questions!\n\n1. We did not consider starting our ablations with just SkipThought or NMT since they’ve been explored individually by Kiros et al 2015 and Hill et al 2016 respectively. We however, ran an experiment with just a large skip-thought next model (+STN + L) (4096 dimensions). Our results of this new ablation on the set of tasks presented in Table 2 (in the same column order) are - 78.9|85.8|93.7|87.2|80.4|84.2|72.4/81.6|0.840|82.1|72.9/72.4 , which indicates that there is some improvement with the addition of NMT even on a smaller model. We'll include these results in our first paper revision.\n\n2. We did use SentEval in all evaluations except on the Quora dataset and Table 5 since they aren’t a part of SentEval (hence ‘largely’). We’ll make this clearer.\n\n3. The following are the results for our model (+STN +Fr +De +NLI +L +STP) using vocabulary expansion with the glove 840B vectors versus using our learned <unk> token for every OOV on the set of tasks presented in Table 2. The results are in the same column order and are in format (vocab expansion/<unk> token). 82.2/81.83| 87.8/87.3| 93.8/93.7| 91.3/91.1| 84.5/84.0| 92.4/91.8| (78.0/83.8)/(78.4/84.1)| 0.885/0.884| 86.8/86.9| (79.2/78.8)/(79.0/78.6)|. These results indicate that there is a small benefit to performing vocabulary expansion.", "Cool results. I very much appreciated the analysis (and Adi et al. results)!\n\n– Do you have any results (formal or informal) on either SkipThought alone or MT alone using your implementation? It's a bit odd that the ablation starts with three tasks.\n– Do you actually use SentEval (Conneau's evaluation software toolkit), or your own implementation of the same set of evaluations? You claim that you borrowed your evaluation 'largely' from them.\n– Do you have any impression of how important it was to perform vocabulary expansion (relative to using an UNK token and/or raw w2v vectors)?" ]
[ 8, 8, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "BJOtf_JlG", "BJOtf_JlG", "BkSxEc5xz", "SJlxJ4ixG", "H1qLBusxz", "H1qLBusxz", "H1qLBusxz", "H1qLBusxz", "HkHMeSHez", "Bk1qKe3XG", "S1mE8yRxz", "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "Syi8hpvCZ", "iclr_2018_B18WgG-CZ" ]
iclr_2018_r1dHXnH6-
Natural Language Inference over Interaction Space
The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce the Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such an architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-like corpus. It is noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system.
accepted-poster-papers
This paper presents a marginally interesting idea -- that of an interaction tensor that compares two sentence representations word by word and feeds the interaction tensor into a higher-level feature extraction mechanism. It produces good results on the MultiNLI and SNLI datasets. There is some criticism about comparing with several baselines on MultiNLI that were restricted from using inter-sentence comparison networks, but the authors do compare with a similar approach without that restriction and show improvements. However, there is no solid error analysis that shows what types of examples this interaction tensor idea captures better than other strong baselines such as ESIM. Overall, the committee feels this paper will add value to the conference.
train
[ "B1td1m1Gf", "r1c8O_eSM", "rkFAbDamf", "SJeSa6_ez", "HJIhPadgf", "r1SU3CYeM", "H1YPRv6Xf", "S1r9UvTXz", "Hymo5L67G", "rySbtLamM" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for the detailed description of the architecture. I choose this paper for ICLR Reproducibility Challenge and I think I could reproduce the main parts of the architecture. Still, there are a few details that I wasn't able to understand from the paper:\n1. How are the weights of the network initialized?\n2. What is the length of character input? As far as I understood for every sentence in the given dataset we take a hard cutoff of words (32 words for SNLI, etc) and for character-embedding input we need to take some number X and pad words shorter than X with 0s or cut longer ones. So how is that number X determined?\n3. What is the size of the character embedding and kernel-size of the convolution applied on character embeddings?\n4. Have you tried to experiment with embeddings of Syntactical features instead of taking their one-hot representations?\n\nIt was easy to follow the paper as it was well organized with clearly stated details and explanations.\nHere is the repository where I've tried to reproduce the architecture and results stated in this paper: https://github.com/YerevaNN/DIIN-in-Keras\n\nHere is the link to our reproducibility report: https://arxiv.org/abs/1802.03198", "Over claim about novelty?\nAfter doing extensive research and consulting various experts, we are 100% certain that no previous work have taken similar approach on recognising textual entailment task before. The related work section only introduces the relevant literatures that adopts attention or CNN modules. If there are works which have shown the point we are arguing, we eager to see it.\n\nNecessity of formulate the model into a new framework?\nThe section 3.1 provides intuition and motivation how and why each module works together. By having the ablation study, we empirically assess the necessity of each module. By reducing 20% of the previous state-of-the-art performance on MultiNLI corpus, we show the power of this kind of new architecture. To encourage researchers to further working on this direction, we feel it is necessary to formulate it into a general framework.", "We thank reviewer R3 for insightful comments and feedback. We have updated our paper to clarify the noted issues and we list our responses as follows:\n\nNovelty of our approaches?\nThe novelty of our approaches stands on two sides: the performance side and the method side. \nOn performance side, our approach pushed the new state-of-the-art performance on various dataset which is a strong indicator of the novelty. The model reduces 20% of error rate of previous state-of-the-art performance on MultiNLI corpus.\nOn method side, our approach learned deep semantic feature from interaction space. To the best of our knowledge, it is the first attempt to solve NLI tasks with such approach.\n\nPerformance difference between proposed model and previous models?\nThe intention of this paper is to explore a new approach that extracting deep semantic features from a dense alignment tensor to solve NLI tasks. The experiment results show that our approach achieves and pushes state-of-the-art performance on multiple dataset thus justifying its potential. In error analysis section, we show the strength of our model on samples with paraphrase, antonyms and overlapping words. On the other hand, the model has limitation on samples with rules (CONDITIONAL tags). 
Therefore, our approaches and other approaches is in complementary with each other.\n\nWhy the model has lower accuracy on samples with CONDITIONAL tag (rules)?\nSince our approach is mainly based on attention alignment tensor, our approach has advantage over cases such as paraphrase where attention mechanism excels. How to solve samples with rules will be the focus of our future work.\n\nUnfair comparison with other model in MultiNLI?\nThough other models are subject to the limitation of RepEval 2017 workshop, ESIM baseline provided by Williams et al (2017) are not subject to such restrict. ESIM shows state-of-the-art performance on SNLI corpus. Our model outperforms their model on MultiNLI with near 6.5% of accuracy on single model setting. It is a fair comparison and our model has clear advantage over it. \n\nNo improvement over SNLI?\nAs stated by Williams et al (2017), SNLI mainly uses caption sentences which is relatively simple in syntax and semantics. However, the MultiNLI corpus, a follow-up corpus on SNLI, collects sentences from various kinds of literature and is far more challenging. Even though our model has similar performance with ESIM on SNLI, it outperforms ESIM with a large margin on MultiNLI corpus.\n\nMissing of untied parameter ablation study?\nThanks for pointing it out. We have included the study and result into the updated version of paper. As a reference, the tied parameter version of DIIN achieves 78.5 on matched-dev set and 78.3 on mismatched-dev set.\n\nReference:\nWilliams, A., Nangia, N., & Bowman, S. R. (2017). A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv.org.\n", "This paper proposes Densely Interactive Inference Network to solve recognizing textual entailment via extracting a semantic feature from interaction tensor end-to-end. Their results show that this model has better performance than others.\n\nEven though the results of this paper is interesting, I have the problem with paper writing and motivation for their architecture:\n\n- Paper pages are well beyond 8-page limits for ICLR. The paper should be 8-pages + References. This paper has 11 pages excluding the references.\n- The introduction text in the 2nd page doesn't have smooth flow and sometimes hard to follow.\n- In my view section, 3.1 is redundant and text in section 3.2 can be improved\n- Encoding layer in section 3.2 is really hard to follow in regards to equations and naming e.g p_{itr att} and why choose \\alpha(a,b,w)? \n- Encoding layer in section 3.2, there is no motivation why it needs to use fuse gate.\n- Feature Extraction Layer is very confusing again. What is FSDR or TSDR?\n- Why the paper uses Eq. 8? the intuition behind it?\n- One important thing which is missing in this paper, I didn't understand what is the motivation behind using each of these components? and how each of these components is selected?\n- How long does it take to train this network? Since it needs to works with other models (GLOV+ char features + POS tagging,..), it requires lots of effort to set up this network.\n\nEven though the paper outperforms others, it would be useful to the community by providing the motivation and intuition why each of these components was chosen. This is important especially for this paper because each layer of their architecture uses multiple components, i.e. embedding layer [Glov+ Character Features + Syntactical features]. 
In my view, having just good results are not enough and will not guarantee a publication in ICLR, the paper should be well-written and well-motivated in order to be useful for the future research and the other researchers.\nIn summary, I don't think the paper is ready yet and it needs significant revision.\n\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nComments after the rebuttal and revision :\nI'd like thanks the authors for the revision and their answers. \nHere are my comments after reading the revised version and considering the rebuttal:\n- It is fair to say that the paper presentation is much better now. That said I am still having issues with 11 pages.\n- The authors imply on page 2, end of paragraph 5, that this is the first work that shows attention weight contains rich semantic and previous works are used attention merely as a medium for alignment. Referring to the some of the related works (cited in this paper), I am not sure this is a correct statement.\n- The authors claim to introduce a new class of architectures for NLI and generability of for this problem. In my view, this is a very strong statement and unsupported in the paper, especially considering ablation studies (table 5). In order for the model to show the best performance, all these components should come together. I am not sure why this method can be considered a class of architecture and why not just a new model?\n\nsome other comments:\n- In page 4, the citation is missing for highway networks\n- Page 5, equation 1, the parenthesis should close after \\hat{P}_j.\n\nSince the new version has been improved, I have increased my review score. However, I'm still not convinced that this paper would be a good fit at ICLR given novelty and contribution.", "Thank you for this paper! It is very nice piece of work and the problem of coding the \"necessary semantic information required for understanding the text\" is really a very important one.\n\nYet, as many papers, it fails to be clear in describing what is its real novelty and the introduction does not help in focussing what is this innovation. \n\nThe key point of the paper seems to demonstrate that the \"interaction tensor contains the necessary semantic information required for understanding the text\". This is a clear issue as this demostration is given only using 1) ablation studies removing gates and non capabilities; 2) analyzing the behavior of the model in the annotated subpart of the MultiNLI corpus; 3) a visual representation of the alignment produced by the model. Hence, there is not a direct analysis of what's inside the interaction tensors. This is the major limitation of the study. According to this analysis, DIIN seems to be a very good paraphrase detector and word aligner. In fact, Table 6 reports the astonishing 100% in paraphrase detection for the Mismatch examples. It seems also that examples where rules are necessary are not correctly modeled by DIIN: this is shown by the poor result on Conditional and Active Passive. Hence, DIIN seems not to be able to capture rules. \n\nFor a better demostration, there should be a clearer analysis of these \"interaction tensors\". 
The issue of the interpretability of what is in these tensors is gaining attention and should be taken into consideration if the main claim of the paper is that: \"interaction tensor contains the necessary semantic information required for understanding the text\". Some interesting attempts have been made in \"Harnessing Deep Neural Networks with Logic Rules\", ACL 2016 and in \"Can we explain natural language inference decisions taken with neural networks? Inference rules in distributed representations\", IJCNN 2017.\n\n\nMinor issues\n======\nCapital letters are used in the middle of some sentences, e.g. \"On the other hand, A mul\", \"powerful capability, We hypothesize\"\n\n\n\n", "Pros: \nThe paper proposes a “Densely Interactive Inference Network (DIIN)” for NLI or NLI alike tasks. Although using tensors to capture high-order interaction and performing dimension reduction over that are both not novel, the paper explores them for NLI. The paper is written clearly and is very easy to follow. The ablation experiments in Table 5 give a good level of details to help observe different components' effectiveness.\nCons:\n1) The differences of performances between the proposed model and the previous models are not very clear. With regard to MultiNLI, since the previous results (e.g., those in Table 2) did not use cross-sentence attention and had to represent a premise or a hypothesis as a *fixed-length* vector, is it fair to compare DIIN with them? Note that the proposed DIIN model does represent a premise or a hypothesis by variable lengths (see interaction layer in Figure 1), and tensors provide some sorts of attention between them. Can this (Table 2) really shows the advantage of the proposed models? However, when a variable-length representation is allowed (see Table 3 on SNLI), the advantage of the model is also not observed, with no improvement as a single model (compared with ESIM) and being almost same as previous models (e.g., model 18 in Table 3) in ensembling.\n2) Method-wise, as discussed above, using tensors to capture high-order interaction and performing dimension reduction over that are both not novel.\n3) The paper mentions the use of untied parameters for premise and hypothesis, but it doesn’t compare it with tied version in the experiment section. \n4) In Table 6, for CONDITIONAL tag, why the baseline models (lower total accuracies) have a 100% accuracy, but DIIN only has about a 60% accuracy?\n", "Here we list all the revisions since submission:\n\nAs of Jan. 5 2018:\n1. We reworded the introduction to clarify our novelty and to make the sentences more readable.\n2. We updated section 3.2 with clearer motivation of each component.\n3. We cleaned the math notation in section 3.2. \n4. We included an extra ablation experiment to indicate how tied encoding weight helps the performance.\n5. We added the character feature implementation detail.\n6. We reserved a placeholder for code link in section 3.2\n", "Thank you Martin, I do appreciate your interest in our work. \nThough we have released the code, due to the anonymous policy I can't provide the link here. Therefore, I'll provide the link to you after the paper is accepted/rejected. \n\n1. Except the embedding, all weights are randomly initialised with Tensorflow's default setting.\n2. The length of character input is 16 for each token. During preprocessing we need to pad or cut to satisfy the constraint. We have updated the paper to make this implementation detail more clear. \n3. 
The size of character embedding is 100 and the kernel-size of the 1D convolution on character embedding is 5. We have updated the paper to make this implementation detail more clear. \n4. It is a good point to replace one-hot representation of syntactical feature with their embedding. However, we didn't observe any improvement over our baseline empirically. The embedding serves the purpose of dimension reduction. A linear layer over one-hot input will behave the same way as embedding does. Since the dimensionality of syntactical feature is very small (48D), we believe it is not necessary to replace it.\n\nYour code is neat in general. I hope my code will give you additional insight in near future.", "We thank reviewer R2 for insightful comments and feedback. We have updated our paper to clarify the noted defects and we list our responses as follows:\n\nBeyond 8-page limit?\n8-page length is a recommendation rather than a restriction in ICLR 2018. We have done extensive experiments to show the strength of our model and wish to be as comprehensive as possible. In order to keep the paper length short, we’ll move part of content to supplementary materials.\n\nIntroduction in the 2nd page hard to follow?\nThanks for pointing it out. We have reworded the introduction in second page to make it clear.\n\nIs section 3.1 redundant? \nThe section 3.1 serves the purpose of introducing a general framework on the NLI task. As indicated by paper “Supervised Learning of Universal Sentence Representations from Natural Language Inference Data”, there is a generic NLI training scheme. We intend to propose a new training scheme. The experiment 3 in the ablation study intends to compare between these two approaches. \n\nBad equation notations?\nThanks for pointing it out. We have updated the paper to clarify the equations.\n\nWhy choosing \\alpha(a,b) and \\beta(a,b)?\nWe choose this notation on purpose because both of them are the function of attention. Attention has various kinds of implementation and we want to keep the option open. We have attempted multiple version of attention and they perform similarly. Therefore, we only choose one version to be presented in this paper. Other implementations of attention mechanism are beyond the discuss of this paper.\n\nThe motivation of fuse gate?\nAs indicated in encoding layer description as well as in ablation study experiment 6&7, we are motivated to use fuse gate as a skip connection. Even though using addition as skip-connection is useful to combine the output of self-attention and highway network, we empirically demonstrate that using fuse gate, which weights the new information and old one to carefully fuse them, is better than using addition. Similar observation can be found in paper “Ruminating Reader: Reasoning with Gated Multi-Hop Attention”. \n\nWhat is FSDR or TSDR?\nThanks for pointing it out. FSDR stands for “first scale down ratio” and TSDR stands for “transition scale down ratio”. They are two scale down ratio hyperparameters in feature extraction layer. In the updated version of paper, we have renamed these two hyperparameters with greek characters \\eta and \\theta. \n\n\nWhy the paper uses Eq. 8? The intuition?\nEquation is an auxiliary function that helps L2 regularization. As the training goes by, the model starts to overfit the training set. Therefore, we wish to increase the L2 regularization strength along with the training process.\n\nHow long does it take to train this network?\nThe training time depends on the task. 
With a single TITAN X GPU, it takes near 48 hours to converge on MultiNLI, 24 hours on SNLI and Quora Question Pair corpora. Though our model has many components as input, they only need to be processed once. The setup time is ignorable after the dataset has been processed. We have open sourced our code and processed dataset. However, due to the anonymous policy, we intend to post the link here after the paper is accepted. \n\nThe motivation behind complex component e.g. embedding layer?\nThanks for pointing it out. We have updated the paper to make sure it is as comprehensive as possible. If a component is one of the common practices, then we have cited the related references. If a component is innovated, then we have explained the motivation in text and empirically evaluate it with ablation studies.\n", "We thank reviewer R1 for insightful comments and feedback. We have updated our paper to clarify the noted defects and we list our responses as follows:\n\nNovelty of our approaches?\nThe novelty of our approaches stands on two sides: performance side and method side. \nOn performance side, our approach pushed the new state-of-the-art performance on various dataset which is a strong indicator of the novelty.\nOn method side, our approach learned deep semantic feature from interaction space. To the best of our knowledge, it is the first attempt to solve NLI tasks with such approach.\n\n\nMain claim? \nWe have updated the main claim of the paper to “the attention weight can assist with machine to understanding the text”. As noted in the review, our system is a very good paraphrase detector and word aligner. It is because our system is an exploration of attention mechanism and the feature extractor can well extract the alignment feature from attention weight (the interaction tensor). Given the fact that attention is not designed for capturing rules, our system is not strong on the samples with rules involved. How to solve the ruled based samples will be the focus of our future work. \n\n\nModel interpretability?\nThe interaction tensor is essentially a kind of high-order attention weight to model the alignment of two sentences in a more flexible way. As compared between experiment 1 and 8 in ablation study, a high-order attention weight outperform the regular attention weight with a large margin. \nThe deep learning research is known to be challenging in interpreting the model. In NLP deep learning research, we often empirically evaluate the model capacity with their performance on challenging task. \nEven though it is challenging to interpret our models directly, we tried to implement ablation studies, error analysis and visual representation to provide an intuition for our approach. \nWe thank R1 for providing very interesting paper.\n\n\nGrammer issues?\nThanks for pointing it out. We have updated the paper to remove the noted defects.\n" ]
[ -1, -1, -1, 5, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1dHXnH6-", "Hymo5L67G", "r1SU3CYeM", "iclr_2018_r1dHXnH6-", "iclr_2018_r1dHXnH6-", "iclr_2018_r1dHXnH6-", "iclr_2018_r1dHXnH6-", "B1td1m1Gf", "SJeSa6_ez", "HJIhPadgf" ]
iclr_2018_SJ1nzBeA-
Multi-Task Learning for Document Ranking and Query Suggestion
We propose a multi-task learning framework to jointly learn document ranking and query suggestion for web search. It consists of two major components: a document ranker and a query recommender. The document ranker combines the current query and session information and compares the combined representation with document representations to rank the documents. The query recommender tracks users' query reformulation sequence, considering all previous in-session queries, using a sequence-to-sequence approach. As both tasks are driven by the users' underlying search intent, we perform joint learning of these two components through session recurrence, which encodes search context and intent. Extensive comparisons against state-of-the-art document ranking and query suggestion algorithms are performed on the public AOL search log, and the promising results endorse the effectiveness of the joint learning framework.
accepted-poster-papers
Overall, the committee finds this paper interesting and well written; it proposes an end-to-end model for a very relevant task. The comparisons are also interesting and well rounded. Reviewer 2 is critical of the paper, but the committee finds the answers to the criticisms satisfactory. The paper will bring value to the conference.
train
[ "SJe5w-Ylz", "HkHVBVYxz", "BkYFdiqez", "H1B4TT37z", "By3-RP-Qf", "B1S_ta3mM", "H1iKCw-Qz", "S1PDTwW7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Novelty: It looks quite straightforward to combine document ranking and query suggestion. For the model architecture, it is a standard multi-task learning framework. For the “session encoder”, it is also proposed (at least, used) in (Sordoni et al., CIKM 2015). Therefore, I think the technical novelty of the work is limited. \n\nClarify: The paper is in general well written. One minor suggestion is to replace Figure 1 with Figure 3, which is more intuitive. \n\nExperiments: \n1.\tWhy don’t you try deep LSTM models and attention mechanisms (although you mentioned them as future work)? There are many open-source tools for deep LSTM/GRU and attention models, and I see no obstacle to implement your algorithms on their top. \n2.\tIn Table 2, M-NSRF with regularization significantly outperforms the version without regularization. This indicates that it might be the regularization that works rather than multi-task learning. For fair comparison, the regularization trick should also be applied to the baselines. \n3.\tFor the evaluation metric of query suggestion, why not using BLEU score? At least, you should compare with the metrics used in (Sordoni et al., 2015) for fairness. \n4.\tThe experiments are not very comprehensive – currently, there is only one experiment in the paper, from which one can hardly draw convincing conclusions.\n5.\tHow many words are there in your documents? What is the average length of each document? You only mention that “our goal is to rank candidate documents titles……” in Page 6, 2nd paragraph. It might be quite different for long document retrieval vs. short document retrievel. \n6.\tHow did you split the dataset into training, validation and test sets? It seems that you used a different splitting rule from (Sordoni et al., 2015), why? ", "This paper presents a joint learning framework for document ranking and query suggestion. It introduces the session embeddings to capture the connections between queries in a session, and potential impact of previous queries in a session to the document ranking of the current query. I like the idea in general. \n\nHowever, I have a few comments as follows:\n\n- Multi-task Match Tensor model, which is important in the experiments (best results), is only briefly introduced in Section 3.4. It is not very clear how to extend from match tensor model to a multi-task match tensor model. This makes me feel like this paper is not self-contained. The setting for this model is not introduced either in Section 4.2. \n\n- Section 3 is written mostly about what has been done but not why doing this. More intuition should be added to better explain the idea. \n\n- I like the analysis about testing the impact of the different model components in Section 4.4, especially analyzing the impact of the session. It would be nice to have some real examples to see the impact of session embeddings on document ranking. One more related question is how the clicked documents of a previous query in the same session influence the document ranking of this current query? Would that be feasible to consider in this proposed framework?\n\n- Session seems to play an important role in this multi task learning framework. This paper used the fixed 30 minute window of idle time to define a session. It would be nice to know how sensitive this model is to the definition / segmentation of sessions. \n", "The work is interesting and novel. 
The novelty lies not in the methods used (existing methods are used), but in the way these methods are combined to solve two problems (that so far have been treated separately in IR) simultaneously. The fitness of the proposed architecture and methodological choices to the task at hand is sufficiently argued. \n\nThe experimental evaluation is not the strongest, in terms of datasets and evaluation measures. While I understand why the AOL dataset was used, the document ranking experiments should also include runs on any of the conventional TREC datasets of documents, queries and actual (not simulated) relevance assessments. Simulating document relevance from clicks is a good enough approximation, but why not also use datasets with real human relevance assessments, especially since so many of them exist and are so easy to access?\n\nWhen evaluating ranking, MAP and NDCG are indeed two popular measures. But the choice of NDCG@1,3,10 seems a bit adhoc. Why not NDCG@5? Furthermore, as the aim seems to be to assess early precision, why not also report MRR? \n\nThe paper reports that the M-NSRF query suggestion method outperforms all baselines. This is not true. Table 2 shoes that M-NSRF is best for BLEU-1/2, but not for BLEU-3/4. \n\nThree final points:\n\n- Out of the contributions enumerated at the end of Section 1, only the novel model and the code & data release are contributions. The rigorous comparison to soa and its detailed analysis are the necessary evaluation parts of any empirical paper. \n- The conclusion states that this work provides useful intuitions about the advantages of multi-task learning involving deep neural networks for IR tasks. What are these? Where were they discussed? They should be outlined here, or referred to somehow.\n- Although the writing is coherent, there are a couple of recurrent English language mistakes (e.g. missing articles). The paper should be proofread and corrected.", "== Q3 == \n\nBLEU score is widely used in sequence-to-sequence generation tasks, such as machine translation and document summarization. It measures the overlap between the generated sequence and the gold sequence. As we consider query suggestion as a sequence generation task, BLEU score is a natural choice for evaluation.\n\nBased on the reviewer’s insightful suggestion and avoid any potential bias in the evaluation metric, we also revised our experiment setting by following the evaluation procedure and metrics used in Sordoni et al., 2015 for the query suggestion task. However, we have to note that in Sordoni et al., 2015 they used their neural model’s output as a feature for a learning-to-rank model, rather than directly evaluating its ranking quality. In order to understand the effectiveness of these neural models in query suggestion task, we directly used the models’ output to rank the candidate queries to compare their ranking quality. The results are included in the revised paper Table 3. \n\nWe observe that our model outperformed all baselines in this evaluation, except the Seq2seq with attention baseline. By examining the detailed suggestions resulted from these two models, we found that the attention mechanism tends to repeat the same words from the input query, and therefore is lack of variety and diversity. But our model tends to suggest highly related but totally new queries. 
To quantify this, we zoomed into the queries which have no overlap with their immediately previous queries, and found significant performance drop in Seq2Seq with attention baseline but significant performance improvement in our model (and our model outperformed seq2seq model by ~5%, seq2seq with attention model by ~10% and HRED-qs by 1% in this particular test set, all with p-value smaller than 0.05 in paired t-test). More details are provided in the paper. This further suggests the advantage of our model: it captures the users’ whole-session search intent and therefore suggests queries directly related their underlying intent, rather than just those immediately previous queries.\n\n== Q6 == \n\nDue to the nature of the seq2seq, HRED-qs and M-NSRF models, their model parameters are learned from a bag of sessions. And therefore, the temporal dynamics across sessions are not considered in such models. Based on this fact, we initially randomly split the AOL search log to create our training, validation and testing sets. But we also took the reviewer’s suggestion and presented an additional experiment by following Sordoni et al., 2015 to generate background, training, development and testing dataset for conducting re-ranking based evaluation procedure to compare different models’ ranking quality in this task (see the answer to Q3). The results are reported in the revised version of our paper. As we expected, the relative performance among different models stayed the same as that in our original evaluation setup. This also suggests the validity of our original evaluation setting.\n", "We thank the reviewer for providing suggestions to improve our experiments. We present the answers to the major concerns mentioned in the review.\n\n1. The main focus of this work is to develop the multi-task learning framework for joint learning of document ranking and query suggestion via explicit modeling of user search intents in search sessions. Exploring different network architectures for document ranking and/or query suggestion is orthogonal to this work. However, as we described in Sec. 3.4, our framework is flexible and can be incorporated into other document ranker and query suggestion maker. \n\nAs we demonstrated in the experiments, the proposed system is better than or competitive with the state-of-the-art approaches. Further improving its performance requires careful studies and adding model complexity may not help. For example, we have tried to apply 2-layer LSTM as the encoder in the sequence to sequence model, but we found slight drop (~0.02) in MAP. One possible reason may be that the 2-layer LSTM has more parameters and require more data to train. However, this type of empirical study is out of the scope of this paper and we leave it for future work. \n\n2. We test the performance of the baselines with the regularization technique and we found good improvement for the HRED-qs, 27.6/15.1/9.2/6.7. But the performance improvement for the seq2seq and seq2seq with global attention mechanism is rather marginal (~0.2%). One observation we found is that the predictive word distribution in seq2seq with attention baseline is much less skewed than ours, and since the effect of the regularizer is to reduce the skewness of the predictive word distribution, it generates less effect on the baseline than on our model.\n\n3. As our proposed multi-task learning framework focuses on document ranking and query suggestion, our experiments mainly focused on evaluating the proposed model in these two perspectives. 
We compared the document ranking with a large pool of prior works and the query suggestion performance with the closest prior work, including both quantitative and qualitative comparisons, along with an ablation study. We believe the experiments presented in the paper are as comprehensive as, if not more comprehensive than, those of a published ICLR paper.\n\n4. In our experimental dataset, the average lengths of the queries and documents (only the title field) are 3.15 and 6.77 respectively. We applied restrictions on the maximum allowable length for queries and documents. We set the maximum query length to 10 and the maximum document length to 20. We have to admit that short document retrieval could be different from full document retrieval. As our first step towards jointly modeling the document ranking and query suggestion tasks, we follow the previous works (Huang et al., 2013, Shen et al., 2014) and limit our work to document titles only. In our future work, we will investigate the utility of our model in a full document retrieval setting.", "An interesting new finding we obtained recently by taking a closer look at the query suggestion results from our method (i.e., M-NSRF) and the best baseline method (i.e., Seq2seq with attention) is that the Seq2seq with attention model tends to repeat words from the input query, which gives little variety/diversity in the suggested queries. Instead, our model is able to generate highly related but totally new queries. To verify this, we restricted the evaluation to the testing queries which do not overlap with their corresponding input queries, and we found that M-NSRF significantly outperformed Seq2seq with attention in both BLEU scores and the ranking-based metric MRR (with p-value smaller than 0.05) on such queries. In the appendix of our paper, we provided some sample query suggestion output from our model to highlight its advantage in such cases.", "We thank the reviewer for the affirmative comments. We also appreciate the suggestions for improving the experiment comparisons and the writing quality of our paper. We have revised our paper based on the suggestions provided. We address the concerns related to the experiments as follows.\n\n1. Our model is trained and evaluated on search sessions, which are not widely available in public datasets (as collecting them requires continuous monitoring of a user’s search behaviors). This limited our choice of evaluation datasets. For example, most TREC tracks only consider single queries. TREC has recently created related tracks, but with only a handful of annotated sessions (e.g., the TREC Tasks Track only has 150 tasks and the Session Track only has around 5 thousand sessions in total so far). Exploring more public annotated search datasets for evaluating our developed framework is definitely a very important future work of ours.\n\n2. To help audiences easily compare our reported results with the existing literature, we followed the previous works (Huang et al., 2013, Shen et al., 2014, Mitra et al., 2017) and used MAP, NDCG@1, NDCG@3, and NDCG@10 as the evaluation metrics. In this updated version, we included NDCG@5 and MRR to better compare different models’ early precision, and the new results align with our previous experimental results that the proposed multi-task learning framework improves ranking performance.\n\n3. We acknowledge that the performance of the sequence-to-sequence model with attention mechanism was better than our proposed approach in the reported average BLEU-3/4.
And we have revised our statement to make it more precise.", "We thank the reviewer for recognizing our contribution to multi-task learning and giving suggestions to improve the writing. We have revised our paper accordingly and respond to the major concerns here.\n\n1. We have added a figure of the Multi-task Match-Tensor model in the appendix (see figure 4 in the revised version of the paper). We adapt multi-task learning to the Match-Tensor model by adding the session encoder and the query decoder and keeping the other parts of the model as they are. More details of this procedure are explained in the revised paper Section 3.4.\n\n2. We have revised Section 3 of our paper by adding more details about the motivation and design of every component in our proposed framework.\n\n3. We did the experiment to verify the impact of session embeddings on document ranking, and the result is reported in Table 4 of our revised paper. The row labeled with “M-NRF” corresponds to the model that does not consider the session embeddings in document ranking, where we observed a 2.8% drop in MAP. As the session encoder is designed to capture the search context carried till the current query, it provides an important signal for ranking documents under the current/reformulated queries. It would be great if we could summarize what kind of queries/sessions benefit from this session recursion (i.e., where does that 2.8% drop in MAP come from). \n\nIn our current model, we do not model the click sequence, and multiple clicks under the same query are assumed to be governed by the same query and session representation. As a result, the session recursion is not directly/immediately influenced by the click sequence. However, we do appreciate the great suggestion, and we believe adding another layer of session recursion at the click sequence level would enable us to better capture this influence. We will list this as a top priority in our future work. \n\n4. We appreciate the suggestion. We followed the most commonly used threshold to define sessions in the IR literature. In our future work, we will study the sensitivity of the segmentation of sessions." ]
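The review and responses above debate ranking metrics (NDCG at various cutoffs, MRR). For concreteness, the sketch below shows standard implementations of NDCG@k and MRR over graded relevance lists; the toy relevance values are made up, and this is not the paper's evaluation code.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain for the top-k ranked items."""
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.size == 0:
        return 0.0
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum((2 ** rel - 1) / discounts))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the given ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def mrr(list_of_relevances):
    """Mean reciprocal rank: average of 1/rank of the first relevant item per query."""
    rr = []
    for relevances in list_of_relevances:
        rank = next((i + 1 for i, r in enumerate(relevances) if r > 0), None)
        rr.append(1.0 / rank if rank else 0.0)
    return float(np.mean(rr))

# Toy example: graded relevance of documents in ranked order for two queries.
ranked_relevances = [[1, 0, 1, 0, 0], [0, 0, 1, 1, 0]]
print(ndcg_at_k(ranked_relevances[0], 5), mrr(ranked_relevances))
```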
[ 4, 6, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJ1nzBeA-", "iclr_2018_SJ1nzBeA-", "iclr_2018_SJ1nzBeA-", "By3-RP-Qf", "SJe5w-Ylz", "H1iKCw-Qz", "BkYFdiqez", "HkHVBVYxz" ]
iclr_2018_HkgNdt26Z
Distributed Fine-tuning of Language Models on Private Data
One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm. In language modeling, users’ language (e.g. in private messaging) could change in a year and be completely different from what we observe in publicly available data. At the same time, public data can be used for obtaining general knowledge (i.e. general model of English). We study approaches to distributed fine-tuning of a general model on user private data with the additional requirements of maintaining the quality on the general data and minimization of communication costs. We propose a novel technique that significantly improves prediction quality on users’ language compared to a general model and outperforms gradient compression methods in terms of communication efficiency. The proposed procedure is fast and leads to an almost 70% perplexity reduction and 8.7 percentage point improvement in keystroke saving rate on informal English texts. Finally, we propose an experimental framework for evaluating differential privacy of distributed training of language models and show that our approach has good privacy guarantees.
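The abstract describes on-device fine-tuning on private user data combined with rehearsal on general data and server-side averaging of the resulting models. The sketch below is a minimal illustration of what one such training round could look like, not the paper's implementation: a linear model stands in for the LSTM language model, and the interpolation weight, learning rate, and data shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_finetune(weights, user_batches, rehearsal_batches, lr=0.1, lam=0.5):
    """One on-device pass: SGD on a loss that linearly interpolates the user-data
    objective with a rehearsal objective on general-domain samples
    (random rehearsal, to limit forgetting of general English)."""
    w = weights.copy()
    for (xu, yu), (xg, yg) in zip(user_batches, rehearsal_batches):
        grad_user = 2 * xu.T @ (xu @ w - yu) / len(yu)       # gradient of the user-data loss
        grad_general = 2 * xg.T @ (xg @ w - yg) / len(yg)    # gradient of the rehearsal loss
        w -= lr * (lam * grad_user + (1 - lam) * grad_general)
    return w

def server_round(global_weights, users):
    """Server step: each user fine-tunes a copy of the global model locally,
    and only the resulting weights are sent back and averaged."""
    updated = [local_finetune(global_weights, ub, rb) for ub, rb in users]
    return np.mean(updated, axis=0)

# Tiny synthetic example (dimensions and number of users are arbitrary).
dim = 4
def make_batches(n, batch_size=8):
    return [(rng.normal(size=(batch_size, dim)), rng.normal(size=batch_size))
            for _ in range(n)]

users = [(make_batches(3), make_batches(3)) for _ in range(5)]
new_global = server_round(np.zeros(dim), users)
print(new_global)
```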
accepted-poster-papers
The committee feels that this paper presents a simple, yet effective way to adapt language models from various users in a sufficiently privacy preserving way. Empirical results are quite strong. Reviewer 3 says that the novelty of the paper is not great, but does not provide any references to prior work that are similar to this paper. The meta-reviewer finds the responses to Reviewer 3 sufficient to address the concerns. Similarly, Reviewer 2 says that the paper may not be relevant to ICLR, but the committee feels its content does belong to the conference since the topic is extremely relevant to modern language processing techniques. In fact, the authors provide several references that show that this paper is similar in content to those submissions. Reviewer 1's concerns are also not sufficiently strong to warrant rejection. The responses to each criticism suffices and the meta-reviewer thinks that this paper will add value to the conference.
train
[ "ByBJfdKxM", "H1ywUV9gf", "ryEbgBTxG", "HkRPphPff", "Bk-IT3vMz", "ryrx6hvGf", "SyebtQVWG", "S10QJAz-z", "HJHV20GZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public" ]
[ "This paper deals with improving language models on mobile equipments\nbased on small portion of text that the user has ever input. For this\npurpose, authors employed a linearly interpolated objectives between user\nspecific text and general English, and investigated which method (learning\nwithout forgetting and random reheasal) and which interepolation works better.\nMoreover, authors also look into privacy analysis to guarantee some level of\ndifferential privacy is preserved.\n\nBasically the motivation and method is good, the drawback of this paper is\nits narrow scope and lack of necessary explanations. Reading the paper,\nmany questions arise in mind:\n\n- The paper implicitly assumes that the statistics from all the users must\n be collected to improve \"general English\". Why is this necessary? Why not\n just using better enough basic English and the text of the target user?\n\n- To achieve the goal above, huge data (not the \"portion of the general English\") should be communicated over the network. Is this really worth doing? If only\n \"the portion of\" general English must be communicated, why is it validated?\n\n- For measuring performance, authors employ keystroke saving rate. For the\n purpose of mobile input, this is ok: but the use of language models will\n cover much different situation where keystrokes are not necessarily \n available, such as speech recognition or machine translation. Since this \n paper is concerned with a general methodology of language modeling, \n perplexity improvement (or other criteria generally applicable) is also\n important.\n\n- There are huge number of previous work on context dependent language models,\n let alone a mixture of general English and specific models. Are there any\n comparison with these previous efforts?\n\nFinally, this research only relates to ICLR in that the language model employed\nis LSTM: in other aspects, it easily and better fit to ordinary NLP conferences, such as EMNLP, NAACL or so. I would like to advise the authors to submit\nthis work to such conferences where it will be reviewed by more NLP experts.\n\nMinor:\n- t of $G_t$ in page 2 is not defined so far.\n- What is \"gr\" in Section 2.2?\n", "my main concern is the relevance of this paper to ICLR.\nThis paper is much related not to representation learning but to user-interface.\nThe paper is NOT well organized and so the technical novelty of the method is unclear.\nFor example, the existing method and proposed method seems to be mixed in Section 2.\nYou should clearly divide the existing study and your work. \nThe experimental setting is also unclear.\nKSS seems to need the user study.\nBut I do not catch the details of the user study, e.g., the number of users.\n", "This paper discusses the application of word prediction for software keyboards. The goal is to customize the predictions for each user to account for member specific information while adhering to the strict compute constraints and privacy requirements. \n\nThe authors propose a simple method of mixing the global model with user specific data. Collecting the user specific models and averaging them to form the next global model. \n\nThe proposal is practical. However, I am not convinced that this is novel enough for publication at ICLR. \n\nOne major question. The authors assume that the global model will depict general english. 
However, it is not necessary that the population of users will adhere to general English and hence the averaged model at the next time step t+1 might be significantly different from general English. It is not clear to me as how this mechanism guarantees that it will not over-fit or that there will be no catastrophic forgetting.", "Thank you for your review!\nI would like to make some clarifications and remarks.\n\nYou write:\n\"- The paper implicitly assumes that the statistics from all the users must be collected to improve \"general English\". Why is this necessary? Why not just using better enough basic English and the text of the target user?\"\n\nThere is strong evidence that the language of SMS and/or private messaging is sufficiently different from what we can collect in publicly available resources. Since language changes constantly we need to update the models and we cannot just make a single fine-tuning of basic LM on device. On the other hand the data from a single user is not sufficient for model update so we need data from many different users. The problem is that we cannot (or at least don't want to) collect user data. We've proposed a method of continuous update of language models without need to collect private data. \n\n\"- To achieve the goal above, huge data (not the \"portion of the general English\") should be communicated over the network. Is this really worth doing? If only the portion of\" general English must be communicated, why is it validated?\"\n\nAs we mention in the paper the volume of the user generated data is small. Actually users generate approx. 600 bytes/day. In our experiments we proceeded from the assumption that fine-tuning starts as soon as 10 Kb of text data is accumulated on device. Our experiments showed that random rehearsal with volume of the rehearsal data should be to the volume of the fine-tuning data. So it is 10 Kb. This number is very small compared to the volume of model weights which are communicated in Federated Learning algorithms. We also discussed the communication efficiency in the answers to other reviewers (see above).\n\n\"- For measuring performance, authors employ keystroke saving rate. For the purpose of mobile input, this is ok: but the use of language models will cover much different situation where keystrokes are not necessarily available, such as speech recognition or machine translation. Since this paper is concerned with a general methodology of language modeling, perplexity improvement (or other criteria generally applicable) is also important.\" \n\nBasically, we agree. And perplexity is reported in all our experiments. We just wanted to emphasize that target metrics should also be evaluated in language modeling like it is done in speech recognition (ASR) or machine translation (BLEU). Also, in (McMahan et al. 2017, https://openreview.net/pdf?id=B1EPYJ-C-) word prediction accuracy is evaluated which is relative to KSS.\n\n\"- There are huge number of previous work on context dependent language models, let alone a mixture of general English and specific models. Are there any comparison with these previous efforts?\"\n\n\nThe term context is a bit vague. E.g. in (Mikolov, Zweig 2014, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.258.5120&;rep=rep1&type=pdf) term \"context\" refers to the longer left context which is unavailable to standard RNN. Anyway longer left contexts are reasonably well catched by LSTM. If \"context\" refers to the running environment (e.g. 
application context) it is not the exact scope of our work. The standard approach to model adaptaion for the user is either model fine-tuning or interpolation with simpler language model (e.g. Knesser-Ney smoothed n-gram). We tried approaches similar to the proposed in (Ma et al. 2017, https://static.googleusercontent.com/media/research.google.com/ru//pubs/archive/46439.pdf) but they performed significantly worse.\n\nI also would like to draw your attention to the privacy analysys part of the paper which we included in the list of our contributions. We consider our contribution significant at least for the following reason. To our knowledge deep neural networks have never been checked for differential privacy coming from the randomness of the training algorithm (combination of SGD, dropout regularization and model averaging in our case). Existing approaches (e.g. Papernot et al. 2017, https://arxiv.org/pdf/1610.05755.pdf) suggest adding random noise at different stages of training leading to the tradeoff between accuracy and privacy. At the same time our experiments show that the differential privacy can be guaranteed even without special treatment of the neural networks at least in some situations.", "Thank you for your review!\nBelow I will try to answer to your remarks.\n\n1) You write\n\"my main concern is the relevance of this paper to ICLR\"\n\"This paper is much related not to representation learning but to user-interface\"\nWe think that our paper is relevant for ICLR because the papers on the same or adjacent topics were or will be presented: \n1. https://openreview.net/forum?id=SkhQHMW0W (ICLR 2018, federated learning)\n2. https://arxiv.org/pdf/1610.05755.pdf (ICLR 2017, differential privacy)\n3. https://openreview.net/forum?id=BJ0hF1Z0b&;noteId=Bkg5_kcxG (ICLR 2018, differentially private RNN)\nWe admit that our field of study is not as broad because we work only with language models, but the approach proposed in the paper may be used to different types of data and ML tasks. Our method gives good privacy guarantees and provides low communication cost (compared to previous results).\n\nLet me cite my answer to the previous reviewer.\n\n\"In the works on Federated Learning issued so far each node is considered only as a client in distributed learning system for gradient calculation. In our approach we guarantee that the model sent to the aggregation server is at the same time the actual model used for typing and gives the best performance for the end user. It is guaranteed by the forgetting prevention mechanism. It has at least following advantages: 1) No need for synchronization after every iteration as in standard Federated learning scheme. Standard Federated learning uses no more than 20 iterations on each device for reduction of the communication cost (McMahan et al. 2016, https://arxiv.org/abs/1602.05629) while we send our models only once in an epoch thus significantly reducing the communication cost. ; 2) Simpler synchronization scheme on the server; 3) Faster convergence; 4) Only 1 model is stored on the disk. We think that these results may be interesting to many ML practitioners.\"\n\n2) \"KSS seems to need the user study\". - KSS is measured according to the formula (5) given in the paper. The process of testing automatization seems obvious. No user testing is needed. (comp. WER calculation for ASR). 
In any case perplexity results are also given for all experiments.\n\n3) As far as you didn't discuss the privacy analysis part which was included into the contributions list I'll cite my previous comment again:\n\n\"We consider our contribution significant at least for the following reason. To our knowledge deep neural networks have never been checked for differential privacy coming from the randomness of the training algorithm (combination of SGD, dropout regularization and model averaging in our case). Existing approaches (e.g. Papernot et al. 2017, https://arxiv.org/pdf/1610.05755.pdf) suggest adding random noise at different stages of training leading to the tradeoff between accuracy and privacy. At the same time our experiments show that the differential privacy can be guaranteed even without special treatment of the neural networks at least in some situations.\"", "Thank you for your review!\nI would like to make some clarifications and remarks.\n\n1) In the review you write \n\"One major question. The authors assume that the global model will depict general english. However, it is not necessary that the population of users will adhere to general English and hence the averaged model at the next time step t+1 might be significantly different from general English. It is not clear to me as how this mechanism guarantees that it will not over-fit or that there will be no catastrophic forgetting.\"\n\nIn our problem statement we consider general English to be the common language i.e. the commonly used language with statistically insignificant portion of user-specific expressions. It is NOT necessarily the language induced by our in-house corpus. As far as we have a model averaged on many users on the server we can treat this model as general language model. So at each stage T the server-side model represents general language. When the model is sent to the device it is updated according to the user data so there is a risk of catastrophic forgetting. We prevent it by using (eventually) random rehearsal on device (only!). \n\n2) I also would like to draw your attention to the original and practically relevant problem statement. In the works on Federated Learning issued so far each node is considered only as a client in distributed learning system for gradient calculation. In our approach we guarantee that the model sent to the aggregation server is at the same time the actual model used for typing and gives the best performance for the end user. It is guaranteed by the forgetting prevention mechanism. It has at least following advantages: 1) No need for synchronization after every iteration as in standard Federated learning scheme. Standard Federated learning uses no more than 20 iterations on each device for reduction of the communication cost (McMahan et al. 2016, https://arxiv.org/abs/1602.05629) while we send our models only once in an epoch thus significantly reducing the communication cost. ; 2) Simpler synchronization scheme on the server; 3) Faster convergence; 4) Only 1 model is stored on the disk. We think that these results may be interesting to many ML practitioners. \n\n3) You didn't discuss the privacy analysis part of the paper which we included in the list of our contributions. We consider our contribution significant at least for the following reason. To our knowledge deep neural networks have never been checked for differential privacy coming from the randomness of the training algorithm (combination of SGD, dropout regularization and model averaging in our case). 
Existing approaches (e.g. Papernot et al. 2017, https://arxiv.org/pdf/1610.05755.pdf) suggest adding random noise at different stages of training leading to the tradeoff between accuracy and privacy. At the same time our experiments show that the differential privacy can be guaranteed even without special treatment of the neural networks at least in some situations.", "Thank you for your review!\nI would like to make some clarifications and remarks.\n\nYou write:\n\"- The paper implicitly assumes that the statistics from all the users must be collected to improve \"general English\". Why is this necessary? Why not just using better enough basic English and the text of the target user?\"\n\nThere is strong evidence that the language of SMS and/or private messaging is sufficiently different from what we can collect in publicly available resources. Since language changes constantly we need to update the models and we cannot just make a single fine-tuning of basic LM on device. On the other hand the data from a single user is not sufficient for model update so we need data from many different users. The problem is that we cannot (or at least don't want to) collect user data. We've proposed a method of continuous update of language models without need to collect private data. \n\n\"- To achieve the goal above, huge data (not the \"portion of the general English\") should be communicated over the network. Is this really worth doing? If only the portion of\" general English must be communicated, why is it validated?\"\n\nAs we mention in the paper the volume of the user generated data is small. Actually users generate approx. 600 bytes/day. In our experiments we proceeded from the assumption that fine-tuning starts as soon as 10 Kb of text data is accumulated on device. Our experiments showed that random rehearsal with volume of the rehearsal data should be to the volume of the fine-tuning data. So it is 10 Kb. This number is very small compared to the volume of model weights which are communicated in Federated Learning algorithms. We also discussed the communication efficiency in the answers to other reviewers (see above).\n\n\"- For measuring performance, authors employ keystroke saving rate. For the purpose of mobile input, this is ok: but the use of language models will cover much different situation where keystrokes are not necessarily available, such as speech recognition or machine translation. Since this paper is concerned with a general methodology of language modeling, perplexity improvement (or other criteria generally applicable) is also important.\" \n\nBasically, we agree. And perplexity is reported in all our experiments. We just wanted to emphasize that target metrics should also be evaluated in language modeling like it is done in speech recognition (ASR) or machine translation (BLEU). Also, in (McMahan et al. 2017, https://openreview.net/pdf?id=B1EPYJ-C-) word prediction accuracy is evaluated which is relative to KSS.\n\n\"- There are huge number of previous work on context dependent language models, let alone a mixture of general English and specific models. Are there any comparison with these previous efforts?\"\n\n\nThe term context is a bit vague. E.g. in (Mikolov, Zweig 2014, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.258.5120&rep=rep1&type=pdf) term \"context\" refers to the longer left context which is unavailable to standard RNN. Anyway longer left contexts are reasonably well catched by LSTM. If \"context\" refers to the running environment (e.g. 
application context) it is not the exact scope of our work. The standard approach to model adaptaion for the user is either model fine-tuning or interpolation with simpler language model (e.g. Knesser-Ney smoothed n-gram). We tried approaches similar to the proposed in (Ma et al. 2017, https://static.googleusercontent.com/media/research.google.com/ru//pubs/archive/46439.pdf) but they performed significantly worse.\n\nI also would like to draw your attention to the privacy analysys part of the paper which we included in the list of our contributions. We consider our contribution significant at least for the following reason. To our knowledge deep neural networks have never been checked for differential privacy coming from the randomness of the training algorithm (combination of SGD, dropout regularization and model averaging in our case). Existing approaches (e.g. Papernot et al. 2017, https://arxiv.org/pdf/1610.05755.pdf) suggest adding random noise at different stages of training leading to the tradeoff between accuracy and privacy. At the same time our experiments show that the differential privacy can be guaranteed even without special treatment of the neural networks at least in some situations.", "Thank you for your review!\nI would like to make some clarifications and remarks.\n\n1) In the review you write \n\"One major question. The authors assume that the global model will depict general english. However, it is not necessary that the population of users will adhere to general English and hence the averaged model at the next time step t+1 might be significantly different from general English. It is not clear to me as how this mechanism guarantees that it will not over-fit or that there will be no catastrophic forgetting.\"\n\nIn our problem statement we consider general English to be the common language i.e. the commonly used language with statistically insignificant portion of user-specific expressions. It is NOT necessarily the language induced by our in-house corpus. As far as we have a model averaged on many users on the server we can treat this model as general language model. So at each stage T the server-side model represents general language. When the model is sent to the device it is updated according to the user data so there is a risk of catastrophic forgetting. We prevent it by using (eventually) random rehearsal on device (only!). \n\n2) I also would like to draw your attention to the original and practically relevant problem statement. In the works on Federated Learning issued so far each node is considered only as a client in distributed learning system for gradient calculation. In our approach we guarantee that the model sent to the aggregation server is at the same time the actual model used for typing and gives the best performance for the end user. It is guaranteed by the forgetting prevention mechanism. It has at least following advantages: 1) No need for synchronization after every iteration as in standard Federated learning scheme. Standard Federated learning uses no more than 20 iterations on each device for reduction of the communication cost (McMahan et al. 2016, https://arxiv.org/abs/1602.05629) while we send our models only once in an epoch thus significantly reducing the communication cost. ; 2) Simpler synchronization scheme on the server; 3) Faster convergence; 4) Only 1 model is stored on the disk. We think that these results may be interesting to many ML practitioners. 
\n\n3) You didn't discuss the privacy analysis part of the paper which we included in the list of our contributions. We consider our contribution significant at least for the following reason. To our knowledge deep neural networks have never been checked for differential privacy coming from the randomness of the training algorithm (combination of SGD, dropout regularization and model averaging in our case). Existing approaches (e.g. Papernot et al. 2017, https://arxiv.org/pdf/1610.05755.pdf) suggest adding random noise at different stages of training leading to the tradeoff between accuracy and privacy. At the same time our experiments show that the differential privacy can be guaranteed even without special treatment of the neural networks at least in some situations.", "Thank you for your review!\nBelow I will try to answer to your remarks.\n\n1) You write\n\"my main concern is the relevance of this paper to ICLR\"\n\"This paper is much related not to representation learning but to user-interface\"\nWe think that our paper is relevant for ICLR because the papers on the same or adjacent topics were or will be presented: \n1. https://openreview.net/forum?id=SkhQHMW0W (ICLR 2018, federated learning)\n2. https://arxiv.org/pdf/1610.05755.pdf (ICLR 2017, differential privacy)\n3. https://openreview.net/forum?id=BJ0hF1Z0b&noteId=Bkg5_kcxG (ICLR 2018, differentially private RNN)\nWe admit that our field of study is not as broad because we work only with language models, but the approach proposed in the paper may be used to different types of data and ML tasks. Our method gives good privacy guarantees and provides low communication cost (compared to previous results).\n\nLet me cite my answer to the previous reviewer.\n\n\"In the works on Federated Learning issued so far each node is considered only as a client in distributed learning system for gradient calculation. In our approach we guarantee that the model sent to the aggregation server is at the same time the actual model used for typing and gives the best performance for the end user. It is guaranteed by the forgetting prevention mechanism. It has at least following advantages: 1) No need for synchronization after every iteration as in standard Federated learning scheme. Standard Federated learning uses no more than 20 iterations on each device for reduction of the communication cost (McMahan et al. 2016, https://arxiv.org/abs/1602.05629) while we send our models only once in an epoch thus significantly reducing the communication cost. ; 2) Simpler synchronization scheme on the server; 3) Faster convergence; 4) Only 1 model is stored on the disk. We think that these results may be interesting to many ML practitioners.\"\n\n2) \"KSS seems to need the user study\". - KSS is measured according to the formula (5) given in the paper. The process of testing automatization seems obvious. No user testing is needed. (comp. WER calculation for ASR). In any case perplexity results are also given for all experiments.\n\n3) As far as you didn't discuss the privacy analysis part which was included into the contributions list I'll cite my previous comment again:\n\n\"We consider our contribution significant at least for the following reason. To our knowledge deep neural networks have never been checked for differential privacy coming from the randomness of the training algorithm (combination of SGD, dropout regularization and model averaging in our case). Existing approaches (e.g. Papernot et al. 
2017, https://arxiv.org/pdf/1610.05755.pdf) suggest adding random noise at different stages of training leading to the tradeoff between accuracy and privacy. At the same time our experiments show that the differential privacy can be guaranteed even without special treatment of the neural networks at least in some situations.\"" ]
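The exchange above refers to formula (5) for the keystroke saving rate (KSS), which is not reproduced in this excerpt. The sketch below therefore uses a common definition of keystroke savings — characters the user is spared from typing because a suggested completion matches the target word — simulated greedily per word; the completion policy and the toy predictor are assumptions, not the paper's evaluation code.

```python
def keystroke_saving_rate(words, predict_completions):
    """KSS = saved keystrokes / total keystrokes, simulated per word.

    For each word, the user types one character at a time; if the target word
    appears among the predicted completions of the current prefix, the rest of
    the word is accepted at no further keystroke cost.
    """
    total, typed = 0, 0
    for word in words:
        total += len(word)
        for i in range(len(word) + 1):
            if word in predict_completions(word[:i]):
                typed += i
                break
        else:
            typed += len(word)  # no completion ever offered the target word
    return (total - typed) / total if total else 0.0

# Toy predictor: a static frequency list standing in for the LSTM language model.
VOCAB = ["hello", "help", "world", "work"]
def toy_predictions(prefix, k=3):
    return [w for w in VOCAB if w.startswith(prefix)][:k]

print(keystroke_saving_rate(["hello", "work", "cat"], toy_predictions))
```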
[ 5, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkgNdt26Z", "iclr_2018_HkgNdt26Z", "iclr_2018_HkgNdt26Z", "ByBJfdKxM", "H1ywUV9gf", "ryEbgBTxG", "ByBJfdKxM", "ryEbgBTxG", "H1ywUV9gf" ]
iclr_2018_SkT5Yg-RZ
Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play
We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: (nearly) reversible environments and environments that can be reset. Alice will "propose" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent. When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward.
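The abstract summarises the Alice/Bob loop but leaves the reward structure abstract. Below is a minimal sketch of one "reverse" self-play episode in a toy reversible environment; the stopping rule, the reward scale, and the stand-in policies are illustrative assumptions rather than the paper's exact formulation (roughly: Alice is rewarded when Bob needs more steps to undo her actions than she took, and Bob is penalised per step).

```python
import random

def self_play_episode(env_reset, env_step, alice_policy, bob_policy,
                      stop_prob=0.2, t_max=20, scale=0.1):
    """One reverse self-play episode: Alice acts, then Bob must undo her actions."""
    s0 = env_reset()
    s, t_a = s0, 0
    while t_a < t_max and random.random() > stop_prob:   # Alice decides when to stop
        s = env_step(s, alice_policy(s))
        t_a += 1
    sb, t_b = s, 0                                        # Bob must return to the start state
    while sb != s0 and t_a + t_b < t_max:
        sb = env_step(sb, bob_policy(sb, s0))
        t_b += 1
    reward_bob = -scale * t_b                             # Bob: fewer steps is better
    reward_alice = scale * max(0, t_b - t_a)              # Alice: harder-for-Bob is better
    return reward_alice, reward_bob

# Toy reversible environment: a position on the integer line, actions {-1, +1}.
env_reset = lambda: 0
env_step = lambda s, a: s + a
alice_policy = lambda s: random.choice([-1, 1])
bob_policy = lambda s, target: 1 if s < target else -1
print(self_play_episode(env_reset, env_step, alice_policy, bob_policy))
```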
accepted-poster-papers
I fully agree with strong positive statements in the reviews. All reviewers agree that the paper introduces a novel and elegant twist on standard RL, wherein one agent proposes a sequence of diverse tasks to a second agent so as to accelerate the second agent's learning models of the environment. I also concur that the empirical testing of this method is quite good. There are strong and/or promising results in five different domains (Hallway, LightKey, MountainCar, Swimmer Gather and TrainingMarines in StartCraft). This paper would make for a strong poster at ICLR.
train
[ "S1Fy0bqlG", "H14gZYsgG", "SkaVNc2gz", "rkKgampmz", "ByzcQscQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "In this paper, the authors describe a new formulation for exploring the environment in an unsupervised way to aid a specific task later. Using two “minds”, Alice and Bob, where the former proposes increasingly difficult tasks and the latter tries to accomplish them as fast as possible, the learning agent Bob can later perform a given task faster having effectively learned the environment dynamics from playing the game with Alice. \n\nThe idea of unsupervised exploration has been visited before. However, the paper presents a novel way to frame the problem, and shows promising results on several tasks. The ideas are well-presented and further expounded in a systematic way. Furthermore, the crux of the proposal and simple and elegant yet leading to some very interesting results. My only complaint is that some of the finer implementation details seems to have been omitted. For example, the parameter update equation is section 4 is somewhat opaque and requires more discussion than the motivation presented in the preceding paragraph.\n\nTypos and grammatical errors: let assume (section 2.2), it is possible show (section 5).\n\nOverall, I think the paper presents a novel and unique idea that would be interesting to the wider research community. ", "This paper proposes an interesting model of self-play where one agent learns to propose tasks that are easy for her but difficult for an opponent. This creates a moving target of self-play objectives and learning curriculum.\n\nThe idea is certainly elegant and clearly described. \nI don't really feel qualified to comment on the novelty, since this paper is somewhat out of my area of expertise, but I did notice that the authors' own description of Baranes and Oudeyer (2013) seems very close to the proposal in this paper. Given the existence of similar forms of self-play the key issue with paper I see is that there is no strong self-play baseline in the experimental evaluation. It is hard to tell whether this neat idea is really an improvement.\n\nIs progress guaranteed? Is it not possible for Alice to imemdiately find an easy task for her where Bob times out, gets no reward signal, and therefore is unable to learn anything? Then repeating that task will loop forever without progress. This suggests that the adversarial setting is quite brittle.\n\nI also find that the paper is a little light on the technical side.", "The paper presents a method for learning a curriculum for reinforcement learning tasks.The approach revolves around splitting the personality of the agent into two parts. The first personality learns to generate goals for other personality for which the second agent is just barely capable--much in the same way a teacher always pushes just past the frontier of a student’s ability. The second personality attempts to achieve the objectives set by the first as well as achieve the original RL task. \n\nThe novelty of the proposed method is introduction of a teacher that learns to generate a curriculum for the agent.The formulation is simple and elegant as the teacher is incentivised to widen the gap between bob but pays a price for the time it takes which balances the adversarial behavior. \n\nPrior and concurrent work on learning curriculum and intrinsic motivation in RL rely on GANs (e.g., automatic goal generation by Held et al.), adversarial agents (e.g., RARL by Pinto et al.), or algorithmic/heuristic methods (e.g., reverse curriculum by Florensa et al. and HER Andrychowicz et al.). 
In the context of this work, the contribution is the insight that an agent can be learned to explore the immediate reachable space but that is just within the capabilities of the agent. HER and goal generation share the core insight on training to reach goals. However, HER does generate goals beyond the reachable it instead relies on training on existing reached states or explicitly consider the capabilities of the agent on reaching a goal. Goal generation while learning to sample from the achievable frontier does not ensure the goal is reachable and may not be as stable to train. \n\nAs noted by the authors the above mentioned prior work is closely related to the proposed approach. However, the paper only briefly mentions this corpus of work. A more thorough comparison with these techniques should be provided even if somewhat concurrent with the proposed method. The authors should consider additional experiments on the same domains of this prior work to contrast performance.\n\nQuestions:\nDo the plots track the combined iterations that both Alice and Bob are in control of the environment or just for Bob? \n", "“Do the plots track the combined iterations that both Alice and Bob are in control of the environment or just for Bob?”\n- The plots track the iterations/steps of Bob during target task episodes where a supervision from the environment given as a reward signal. The paradigm we consider is the RL equivalent of semi-supervised learning, with self-play being the unsupervised learning component. In this context, what matters is the number of labeled examples (analogously: target task episodes) used, rather than the number of unlabeled points (i.e. self-play episodes). This RL paradigm was introduced in Finn et al. 2016 https://arxiv.org/abs/1612.00429 and we note that they also use this convention. We will clarify this in the final version. ", "We thank the reviewer for the constructive review. However, we would like to address several points raised:\n\n“Baranes and Oudeyer (2013) seems very close to the proposal in this paper. Given the existence of similar forms of self-play the key issue with paper I see is that there is no strong self-play baseline in the experimental evaluation”.\n- In B & O, one needs to construct a set of all possible tasks in the environment, and parameterize this set in such a way that it can be reasonably partitioned and sampled. It is not obvious how to do this in our problems without using extra domain knowledge. In our approach, however, tasks are discovered by an agent acting in the environment, thus eliminating the need of domain knowledge about the environment. For this reason, we cannot directly compare to the approach of B & O and no other forms of self-play exist, as far as we are aware. \nAlso note that it is not clear how to obtain the same sorts of guarantees we get in the tabular setting (and the related intuitions about what the learning protocol is achieving) with their method.\n\n“Is it not possible for Alice to immediately find an easy task for her where Bob times out, gets no reward signal, and therefore is unable to learn anything? Then repeating that task will loop forever without progress. This suggests that the adversarial setting is quite brittle”.\n- An easy task means it only requires few actions for Alice to succeed. In repeat self-play, this means that Bob would only need a few actions to succeed also. 
So it is unlikely that Bob will keep failing on such easy tasks, since even taking random actions would sometimes yield success on such easy tasks. The same is true for the reverse self-play because of the reversibility assumption (Bob just needs to perform the opposite of Alice's actions in reverse order).\nIn general however, our adversarial setting does assume that Alice and Bob are trained in sync. Similar to generative adversarial networks, if one of them gets too far ahead of the other, then it can impede training. However, our experiments demonstrate that the two of them can be trained successfully together on non-trivial problems.\n\n“I also find that the paper is a little light on the technical side”.\n- We will add further technical details in the final revision." ]
[ 8, 5, 8, -1, -1 ]
[ 4, 3, 4, -1, -1 ]
[ "iclr_2018_SkT5Yg-RZ", "iclr_2018_SkT5Yg-RZ", "iclr_2018_SkT5Yg-RZ", "SkaVNc2gz", "H14gZYsgG" ]
iclr_2018_SyoDInJ0-
Reinforcement Learning Algorithm Selection
This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning (RL). The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return. The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection. Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality considering the structural sampling budget limitations. ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations. ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS. SSBAS is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin.
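The abstract describes ESBAS as running a freshly restarted stochastic bandit over the frozen learners within each epoch, and the reviews below mention UCB1 and epochs of doubling length. The sketch below illustrates that control flow; the exploration constant, epoch schedule, return normalisation, and the toy "algorithms" are assumptions, not the paper's pseudo-code.

```python
import math
import random

def ucb1_select(counts, means, t, xi=1.0):
    """UCB1 index over the candidate RL algorithms (returns an arm index)."""
    for k, n in enumerate(counts):
        if n == 0:
            return k  # play every arm once before using the confidence bound
    return max(range(len(counts)),
               key=lambda k: means[k] + math.sqrt(xi * math.log(t) / counts[k]))

def esbas(algorithms, run_episode, num_epochs=6):
    """Sketch of the ESBAS loop: within each epoch (of doubling length) the
    learners' policies are frozen and a fresh UCB1 bandit picks which learner
    controls each episode; returns update the bandit statistics only."""
    K = len(algorithms)
    for epoch in range(num_epochs):
        counts, means = [0] * K, [0.0] * K       # the bandit is rebooted each epoch
        for t in range(1, 2 ** epoch + 1):       # doubling epoch length
            k = ucb1_select(counts, means, t)
            ret = run_episode(algorithms[k])     # frozen policy generates a return
            counts[k] += 1
            means[k] += (ret - means[k]) / counts[k]
        for alg in algorithms:
            alg["update"](alg)                   # off-policy updates between epochs
    return means

# Toy stand-ins: two "algorithms" whose frozen policies yield noisy returns.
algs = [{"quality": 0.4, "update": lambda a: a.__setitem__("quality", a["quality"] + 0.05)},
        {"quality": 0.6, "update": lambda a: a.__setitem__("quality", a["quality"] + 0.01)}]
run_episode = lambda a: a["quality"] + random.gauss(0, 0.05)
print(esbas(algs, run_episode))
```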
accepted-poster-papers
The reviewers are unanimous in accepting the paper. They generally view it as introducing an original approach to online RL using bandit-style selection from a fixed portfolio of off-policy algorithms. Furthermore, rigorous theoretical analysis shows that the algorithm achieves near-optimal performance. The only real knock on the paper is that they use a weak notion of regret i.e. short-sighted pseudo regret. This is considered inevitable, given the setting.
train
[ "rkTUeUKef", "SJX3im5ez", "ryC1UZsgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "SUMMARY\nThe paper considers a meta-algorithm, in the form of a UCB algorithms, that selects base-learners in a pool of reinforcement learning agents.\n\nHIGH LEVEL COMMENTS\nIn this paper, T refers to the total number of meta-decisions. This is very different from the total number of interactions with the system that corresponds to D_T=sum_{\\tau=1}^T |\\epsilon_tau|. Wouldn't it make more sense to optimize regret accumulated on this global time?\nThe proposed strategy thus seems a bit naive since different algorithms from the set \\cal P may generate trajectories of different length. \nFor instance, one algorithm may obtain relatively high rewards very fast with short trajectories and another one may get slightly higher cumulative rewards but on much longer trajectories.\nIn this case, the meta-algorithm will promote the second algorithm, while repeatedly selecting the first one would yield higher cumulative reward in total over all decision (and not meta-decision) time steps.\nThis also means that playing T1 meta-decision steps, where T1>>T, may corresponds to a total number of decision steps sum_{\\tau=1}^{T_1} |\\epsilon'_tau| still not larger than D_T (where \\epsilon'_\\tau are other trajectories).\nNow the performance of a specific learner with T1 trials may be much higher than with T trials, and thus even though the regret of the meta-learner is higher, the overall performance of the recommended policy learned at that point may be better than the one output with T meta-decisions.\n\nThus, it seems to me that a discussion about the total number of decision steps (versus meta-deciion steps) is missing in order to better motivate the choise of performance measure, and generates possibly complicated situations, with a non trivial trade-off that needs to be adressed. This also suggests the proposed algorithm may be quite sub-optimal in terms of total number of decision steps.\nMy feeling is that the reason you do not observe a too bad behavior in practice may be due to the discount factor.\n\nOTHER COMMENTS:\nPage 4: What value of \\xi do you use in ESBAS ? I guess it should depend on R/(1-\\gamma)?\n\nPage 5: \"one should notice that the first two bounds are obtained by summming up the gaps\": which bounds? which gaps? Can you be more precise?\nNext sentence also needs to be clarified. What is the budget issue involved here?\n\nCan you comment on the main reason why you indeed get O(sqrt{T}) and not O(\\sqrt{T poly-log(T)}) for instance?\n\nTheorem 3: \"with high probability delta_T in O(1/T)\": do you mean with probability higher than 1-delta_T, with delta_T = O(1/T) ?\n\nParagraph on Page 15 : Do you have a proof for the claim that such algorithms indeed satisfy these assumptions ?\nEspecially proving that assumption 3 holds may not be obvious since one may consider an algorithm may better learn using data collected from its played polocy rather than from other policies.\n\n(14b-c, 15d): should u be u^\\alpha ? I may simply be confused with the notations.\n\nDECISION\nEven though there seems to be an important missing discussion regarding optimization of performance with respect to the total number of decision steps rather than the total number of meta-decision steps,\nI would tend to accept the paper. Indeed, if we left apart the choice for this performance measure, the paper is relatively well written and provides both theoretical and practical results that are of interest. 
But this has to be clarified.\n", "The authors consider the problem of dynamically choosing between several reinforcement learning algorithms for solving a reinforcement learning problem with discounted rewards and episodic tasks. The authors propose the following solution to the problem:\n- During epochs of exponentially increasing size (this technique is well known in the bandit literature and is called a \"doubling trick\"), the various reinforcement learning algorithms are \"frozen\" (i.e. they do not adapt their policy) and the K available algorithms are sampled using the UCB1 algorithm in order to discover the one which yields the highest mean reward.\n\nOverall the paper is well written, and presents some interesting novel ideas on aggregating reinforcement learning algorithms. Below are some remarks:\n\n- An alternative and perhaps simpler formalization of the problem would be learning with expert advice (using algorithms such as \"follow the perturbed leader\"), where each of the available reinforcement learning algorithms acts as an expert. What is more, these algorithms usually yield O(sqrt(T)log(T)), which is the regret obtained by the authors in the worst case (where all the learning algorithms do converge to the optimal policy at the optimal speed O(1/sqrt(T))). It would have been good to see how those approaches perform against the proposed algorithms. \n- The authors use UCB1, but they did not try KL-UCB, which is strictly better (in fact it is optimal for bounded rewards). In particular, the numerical performance of the latter is usually vastly better than that of the former, especially when rewards have a small variance.\n- The performance measure used by the authors is rather misleading (\"short sighted regret\"): they compare what they obtain to what the policy discovered by the best reinforcement learning algorithm \\underline{based on the trajectories they have seen} achieves, and the trajectories themselves are generated by the choices made by the algorithms at previous times. I.e., in general, there might be cases in which one does not explore enough with this approach (i.e. one does not try all state-action pairs enough), so that while this performance measure is low, the actual regret is very high and the algorithm does not learn the optimal policy at all (while this could be done by simply exploring at random log(T) times ...).\n", "The paper considers the problem of online selection of RL algorithms. An algorithm selection (AS) strategy called ESBAS is proposed. It works in a sequence of epochs of doubling length, in the following way: the algorithm selection is based on a UCB strategy, and the parameters of the algorithms are not updated within each epoch (in order that the returns obtained by following an algorithm be iid). This selection allows ESBAS to select a sequence of algorithms within an epoch to generate a return almost as high as the return of the best algorithm, if no learning were made. This weak notion of regret is captured by the short-sighted pseudo regret. \n\nNow a bound on the global regret is much harder to obtain because there is no way of comparing, without additional assumptions, the performance of a sequence of algorithms to that of the best algorithm had it been used to generate all trajectories from the beginning. Here it is assumed that all algorithms learn off-policy. However, this is not sufficient, since learning off-policy does not mean that an algorithm is indifferent to the behavior policy that has generated the data. 
Indeed, even for the most basic off-policy algorithms, such as Q-learning, the way data have been collected is extremely important, and collecting transitions using that algorithm (such as epsilon-greedy) is certainly better than following an arbitrary policy (such as a uniformly random one, or an even poorer policy which would not explore at all). However, the authors seem to state an equivalence between learning off-policy and fairness of learning (defined in Assumption 3). For example, in their conclusion they mention “Fairness of algorithm evaluation is granted by the fact that the RL algorithms learn off-policy”. This is not correct. I believe the main assumption made in this paper is Assumption 3 (and not that the algorithms learn off-policy), and this should be dissociated from the off-policy learning. \n\nThis fairness assumption is a very strong assumption that does not seem to be satisfied by any algorithm that I can think of. Indeed, consider D being the data generated by following algorithm alpha, and let D’ be the data generated by following algorithm alpha’. Then it makes sense that the performance of algorithm alpha is better when trained on D rather than D’, and alpha’ is better when trained on D’ than on D. This contradicts the fairness assumption. \n\nThis being said, I believe the merit of this paper is to make explicit the actual assumptions required to be able to derive a bound on the global regret. So although the fairness assumption is unrealistic, it has the benefit of existing…\n\nSo in the end I liked the paper because the authors tried to address this difficult problem of algorithmic selection for RL the best way they could. Maybe future work will do better but at least this is a first step in an interesting direction.\n\nNow, I would have liked a comparison with other algorithms for algorithm selection, like:\n- explore-then-exploit, where a fraction of T is used to try each algorithm uniformly, then the best one is selected and played for the rest of the rounds.\n- Algorithms that have been designed for curriculum learning, such as some described in [Graves et al., Automated Curriculum Learning for Neural Networks, 2017], where a proxy for learning progress is used to estimate how much an algorithm can learn from data.\n\nOther comments:\n- Table 1 is really incomprehensible. Even after reading Appendix B, I had a hard time understanding these results.\n- I would suggest adding the fairness assumption in the main text and discussing it, as I believe this is a crucial component for the understanding of how the global regret can be controlled.\n- You may want to include references on restless bandits in Section 2.4, as this is very related to AS of RL algorithms (arms follow Markov processes).\n- The reference [Best arm identification in multi-armed bandits] is missing an author.\n\n" ]
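The second review above contrasts UCB1 with KL-UCB. For reference, the sketch below computes the standard Bernoulli KL-UCB index by bisection (the upper confidence bound that replaces UCB1's mean-plus-bonus term); the exploration function log(t) and the numbers in the example are illustrative, and this is not code from the paper.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, count, t, iters=30):
    """Largest q >= mean with count * KL(mean, q) <= log(t), found by bisection."""
    if count == 0:
        return 1.0
    target = math.log(max(t, 1)) / count
    lo, hi = mean, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

# Example: an arm with empirical mean 0.6 after 20 pulls, evaluated at round t = 100.
print(kl_ucb_index(0.6, 20, 100))
```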
[ 6, 6, 7 ]
[ 5, 3, 4 ]
[ "iclr_2018_SyoDInJ0-", "iclr_2018_SyoDInJ0-", "iclr_2018_SyoDInJ0-" ]
iclr_2018_S1vuO-bCW
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a considerable amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires considerable human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and backward policy, with the backward policy resetting the environment for a subsequent attempt. By learning a value function for the backward policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the backward policy can greatly reduce the number of manual resets required to learn a task and can reduce the number of unsafe actions that lead to non-reversible states.
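The abstract mentions using the backward (reset) policy's value function for uncertainty-aware safety aborts, and the reviews below discuss the ensemble of Q functions this relies on. The following sketch shows one plausible form of that check — aborting when a pessimistic aggregate of the ensemble's reset values drops below a threshold; the aggregation rule, the threshold, and the toy ensemble are assumptions, not the paper's implementation.

```python
import numpy as np

def should_abort(state, action, reset_q_ensemble, q_min=0.3, aggregate=min):
    """Uncertainty-aware safety check sketched from the abstract: if the reset
    policy's estimated value of the proposed (state, action) falls below a
    threshold, abort the forward policy and hand control to the reset policy.
    Aggregating the ensemble with `min` is a conservative (pessimistic) choice."""
    values = [q(state, action) for q in reset_q_ensemble]
    return aggregate(values) < q_min

# Toy ensemble: each member scores how recoverable the resulting 1-D position would be.
rng = np.random.default_rng(1)
ensemble = [lambda s, a, w=w: float(np.exp(-abs(s + a) * w))
            for w in rng.uniform(0.5, 1.5, size=5)]

state = 0.8
for action in (-0.5, +0.5, +2.0):
    print(action, should_abort(state, action, ensemble))
```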
accepted-poster-papers
This paper is an easy accept -- three reviewers have above threshold scores, while one reviewer is slightly below threshold, but based on the submitted manuscript. It appears that the paper has substantially improved based on reviewer comments. Pros: All reviews had positive sentiment: "very elegant and general idea" (Reviewer4); "idea is interesting and potentially very useful" (Reviewer2); "method is novel, the explanation is clear, and has good experimental results" (Reviewer3); "a good way to learn a policy for resetting while learning a policy for solving the problem. Seems like a fairly small but well considered and executed piece of work." (Reviewer1) Cons: One reviewer found that testing in only three artificial tasks was a limitation. The initial reviews noted several issues where clarification of the text and/or figures was needed. There were also a bunch of statements where the reviewers questioned the technical correctness / accuracy of the discussion. Most of these points appear to have been adequately addressed in the revised manuscript.
train
[ "BJ0qmr9xf", "ByMUO4qxG", "Hko8GaGff", "SyqiSbomf", "BkOs5gCmz", "Hy80b8TXz", "By-F6QpmM", "B1Hs_K37G", "HyhCJN3Xf", "HkNYoQhQf", "H1ufXR3MM", "ByCgNR6Zz", "SJ067R6Zf", "By2lMCaWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper solves the problem of how to do autonomous resets, which is an important problem in real world RL. The method is novel, the explanation is clear, and has good experimental results.\n \nPros:\n1. The approach is simple, solves a task of practical importance, and performs well in the experiments. \n2. The experimental section performs good ablation studies wrt fewer reset thresholds, reset attempts, use of ensembles.\n\nCons:\n1. The method is evaluated only for 3 tasks, which are all in simulation, and on no real world tasks. Additional tasks could be useful, especially for qualitative analysis of the learned reset policies.\n2. It seems that while the method does reduce hard resets, it would be more convincing if it can solve tasks which a model without a reset policy couldnt. Right now, the methods without the reset policy perform about equally well on final reward.\n3. The method wont be applicable to RL environments where we will need to take multiple non-invertible actions to achieve the goal (an analogy would be multiple levels in a game). In such situations, one might want to use the reset policy to go back to intermediate “start” states from where we can continue again, rather than the original start state always.\n\nConclusion/Significance: The approach is a step in the right direction, and further refinements can make it a significant contribution to robotics work.\n\nRevision: Thanks to the authors for addressing the issues I raised, I revise my review to 7", "If one is committed to doing value-function or policy-based RL for an episodic task on a real physical system, then one has to come up with a way of resetting the domain for new trials. This paper proposes a good way of doing this: learn a policy for resetting at the same time as learning a policy for solving the problem. As a side effect, the Q values associated with the reset policy can be used to predict when the system is about to enter an unrecoverable state and \"forbid\" the action.\n\nIt is, of course, necessary that the domain be, in fact, reversible (or, at least, that it be possible to reach a starting state from at least one goal state--and it's better if that goal state is not significantly harder to reach than other goal states.\n\nThere were a couple of places in the paper that seemed to be to be not strictly technically correct.\n\nIt says that the reset policy is designed to achieve a distribution of final states that is equivalent to a starting distribution on the problem. This is technically fairly difficult, as a problem, and I don't think it can be achieved through standard RL methods. Later, it is clearer that there is a set of possible start states and they are all treated as goal states from the perspective of the reset policy. That is a start set, not a distribution. And, there's no particular reason to think that the reset policy will not, for example, always end up returning to a particular state.\n\nAnother point is that training a set of Q functions from different starting states generates some kind of an ensemble, but I don't think you can guarantee much about what sort of a distribution on values it will really represent. Q learning + function approximation can go wrong in a variety of ways, and so some of these values might be really gross over or under estimates of what can be achieved even by the policies associated with those values. 
\n\nA final, higher-level, methodological concern is that, it seems to me, as the domains become more complex, rather than trying to learn two (or more) policies, it might be more effective to take a model-based approach, learn one model, and do reasoning to decide how to return home (and even to select from a distribution of start states) and/or to decide if a step is likely to remove the robot from the \"resettable\" space.\n\nAll this aside, this seems like a fairly small but well considered and executed piece of work. I'm rating it as marginally above threshold, but I indeed find it very close to the threshold.", "(This delayed review is based on the deadline version of the paper.)\n\nThis paper proposes to learn by RL a reset policy at the same time that we learn the forward policy, and use the learned reset Q-function to predict and avoid actions that would prevent reset — an indication that they are \"unsafe\" in some sense.\n\nThis idea (both parts) is interesting and potentially very useful, particularly in physical domains where reset is expensive and exploration is risky. While I'm sure the community can benefit from ideas of this kind, it really needs clearer presentations of such ideas. I can appreciate the very intuitive and colloquial style of the paper, however the discussion of the core idea would benefit from some rigor and formal definitions.\n\nExamples of intuitive language that could be hiding the necessary complexities of a more formal treatment:\n\n1. In the penultimate paragraph of Section 1, actions are described as \"reversible\", while a stochastic environment may be lacking such a notion altogether (i.e. there's no clear inverse if state transitions are not deterministic functions).\n\n2. It's not clear whether the authors suggest that the ability to reset is a good notion of safety, or just a proxy to such a notion. This should be made more explicit, making it clearer what this proxy misses: states where the learned reset policy fails (whether due to limited controllability or errors in the policy), that are nonetheless safe.\n\n3. In the last paragraph of Section 3, a reset policy is defined as reaching p_0 from *any* state. This is a very strong requirement, which isn't even satisfiable in most domains, and indeed the reset policies learned in the rest of the paper don't satisfy it.\n\n4. What are p_0 and r_r in the experiments? What is the relation between S_{reset} and p_0? Is there a discount factor?\n\n5. In the first paragraph of Section 4.1, states are described as \"irreversible\" or \"irrecoverable\". Again, in a stochastic environment a more nuanced notion is needed, as there may be policies that take a long time to reset from some states, but do so eventually.\n\n6. A definition of a \"hard\" reset would make the paper clearer.\n\n7. After (1), states are described as \"allowed\". Again, preventing actions that are likely to hinder reset cannot completely prevent any given state in a stochastic environment. It also seems that (2) describes states where some allowed action can be taken, rather than states reachable by some allowed action. For both reasons, Algorithm 1 does not prevent reaching states outside S*, so what is the point of that definition?\n\n8. The paper is not explicit about the learning dynamics of the reset policy. 
It should include a figure showing the learning curve of this policy (or some other visualization), and explain how the reset policy can ever gain experience and learn to reset from states that it initially avoids as unsafe.\n\n9. Algorithm 1 is unclear on how a failed reset is identified, and what happens in such a case — do we run another forward episode? Another reset episode?", "This paper proposes the idea of having an agent learn a policy that resets the agent's state to one of the states drawn from the distribution of starting states. The agent learns such a policy while also learning how to solve the actual task. This approach generates more autonomous agents that require fewer human interventions in the learning process. This is a very elegant and general idea, where the value function learned in the reset task also encodes some measure of safety in the environment.\n\nAll that being said, I gave this paper a score of 6 because two aspects that seem fundamental to me are not clear in the paper. If clarified, I'd happily increase my score.\n\n1) *Defining state visitation/equality in the function approximation setting:* The main idea behind the proposed algorithm is to ensure that \"when the reset policy is executed from any state, the distribution over final states matches the initial state distribution p_0\". This is formally described, for example, in line 13 of Algorithm 1.\nThe authors \"define a set of safe states S_{reset} \\subseteq S, and say that we are in an irreversible state if the set of states visited by the reset policy over the past N episodes is disjoint from S_{reset}.\" However, it is not clear to me how one can uniquely identify a state in the function approximation case. Obviously, it is straightforward to apply such a definition in the tabular case, where counting state visitation is easy. However, how do we count state visitation in continuous domains? Did the authors manually define the range of each joint/torque/angle that characterizes the start state? In a control task from pixels, for example, would the exact configuration of pixels seen at the beginning be the start state? Defining state visitation in the function approximation setting is not trivial and it seems to me the authors just glossed over it, despite it being essential to their work.\n\n2) *Experimental design for Figure 5*: This setup is not clear to me at all and in fact, my first reaction is to say it is wrong. An episodic task is generally defined as: the agent starts in a state drawn from the distribution of starting states and at the moment it reaches the goal state, the task is reset and the agent starts again. It doesn't seem to be what the authors did, is that right? The sentence: \"our method learns to solve this task by automatically resetting the environment after each episode, so the forward policy can practice catching the ball when initialized below the cup\" is confusing. When is the task reset to the \"status quo\" approach? Also, let's say an agent takes 50 time steps to reach the goal and then it decides to do a soft-reset. Are the time steps it is spending on its soft-reset being taken into account when generating the reported results?\n\n\nSome other minor points are:\n\n- The authors should standardize their use of citations in the paper. Sometimes there are way too many parentheses in a reference. For example: \"manual resets are necessary when the robot or environment breaks (e.g. Gandhi et al. 
(2017))\", or \"Our methods can also be used directly with any other Q-learning methods ((Watkins & Dayan, 1992; Mnih et al., 2013; Gu et al., 2017; Amos et al., 2016; Metz et al., 2017))\"\n\n- There is a whole line of work in safe RL that is not acknowledged in the related work section. Representative papers are:\n [1] Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh: High-Confidence Off-Policy Evaluation. AAAI 2015: 3000-3006\n [2] Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh: High Confidence Policy Improvement. ICML 2015: 2380-2388\n\n- In the Preliminaries Section the next state is said to be drawn from s_{t+1} ~ P(s'| s, a). However, this hides the fact the next state is dependent on the environment dynamics and on the policy being followed. I think it would be clearer if written: s_{t+1} ~ P(s'| s, \\pi(a|s)).\n\n- It seems to me that, in Algorithm 1, the name 'Act' is misleading. Shouldn't it be 'ChooseAction' or 'EpsilonGreedy'? If I understand correctly, the function 'Act' just returns the action to be executed, while the function 'Step' is the one that actually executes the action.\n\n- It is absolutely essential to depict the confidence intervals in the plots in Figure 3. Ideally we should have confidence intervals in all the plots in the paper.", "Thanks for these clarifications. The new section in the Appendix clarifies things a little bit more (although I'd recommend the authors to get rid of sentences such as 'certain dimensions' and to be more specific about it).\n\nI'll increase my score (from 6 to 7) after our discussion. You have ensured me that the agent is using the exact same number of transitions in both settings in Figure 5. I still think the paragraph is poorly written, it is confusing. Sentences such as: Once the forward-only approach catches the ball, it gets maximum reward by keeping the ball in the cup\" still throw me off. The episode should be reset and that's it (hard reset). But the authors have ensured me this comparison is fair.\n\nI'll not drastically increase my score because I think the writing could be improved substantially to make the paper clearer. Also, the need to define a distance metric and a start state is not as general as I'd like it to be. For example, in vision-based tasks, would the difference be measured as distance between all pixels? It seems wrong. Naturally, I don't expect the authors to address all these questions, but since the method is not so general, I'm not thrilled about it. All that being said, I think the paper has an interesting idea so it should be accepted.", "1. We directly use Euclidean distance between observations. The start state was chosen by sampling from the initial state distribution once. We use the same start state for all experiments with a given environment. We have added these details to Appendix F.2\n\n\n2. The agent trained with the reset policy does *not* get twice as much data -- it gets the same number of transitions, and has to spend these transitions on both the reset and forward policy.\n\nAll methods compared observe the same amount of data (Section 6 paragraph 1). For example, in Figure 5, all approaches observe 1 million steps. While the “status quo” baseline and the “forward-only” baseline spend all 1 million steps trying to catch the ball, our approach spends a total of 0.5 million steps trying the catch the ball, and 0.5 million steps trying to reset the ball. 
Our experiments show that given equal amounts of data and using identical evaluation procedures (Section 6 paragraph 1), our method performs significantly fewer manual resets, while still learning to complete the task.", "1. So, when computing the Euclidean distance between two states, which coordinates did you use? The absolute coordinates (x, y, z, and speeds, for example), between the initial state and the current state? Or did you do so through the representation learned by the network? That is not clear, which makes these results irreproducible. Can you add a paragraph after each section explicitly describing what you used as start state/representation/distance metric? Otherwise I can't judge it because I wouldn't be able to reproduce the obtained results.\n\n2. But following this procedure, how can you disentangle the benefit of simply training the agent for more time steps from the fact that you are actually resetting the policy? Let's say that the agent can reliably solve the task in 50 episodes. One of the agents (the one with the reset policy) would get twice as much data when learning how to solve the task. It seems an unfair comparison that generates misleading results. As far as I understand, it could have nothing to do with the task itself, but with the amount of data being used. This suggests a much deeper discussion in fact, one that is related to how to evaluate agents. You are comparing the performance of an agent evaluated at training time with an agent that has a test time. That's unfair, isn't it?", "1. You correctly note that a reset reward function r_r(s) must be specified for each task. In practice, we found that a very simple design worked well for our experiments. We used the negative distance to some start state, plus any reward shaping included in the forward reward. We clarified Section 4 paragraph 1 to note this.\n\n\n2. To ensure fair evaluation of all approaches, we used a different procedure for evaluation than for training. For all the figures showing reward in the paper (including Fig 5), we evaluate the performance of a policy at a checkpoint by creating a copy of the policy in a separate thread, running only the forward policy (not the reset policy) for a fixed number of steps T and computing the average per-step reward (cumulative reward divided by T). We added these details to Section 6 paragraph 1.\n\nFor example, if the “forward-only” approach catches the ball at t=40 and keeps the ball in the cup through t=100, then the average per-step reward is (100 - 40) / 100 = 0.6. For our approach, we only run the forward policy during evaluation. If it catches the ball at t=40 and keeps the ball in the cup through t=100, its average per-step reward is also 0.6.", "1. (State visitation) Thanks for clarifying that. However, it seems to me then that each domain needs to have an $r_r$ reward function hand-engineered to describe the proximity to the start state, right? In the authors' example, \"in locomotion experiments, the reset reward is large when the agent is standing upright\", what does \"standing upright\" mean? Did the authors have to specify a specific joint configuration? Did they use the initial state as joint configuration? Even if they didn't, I assume a distance/divergence metric had to be defined when generating $r_r$, right? Ideally I'd like to see that described in the main paper, so we can judge how hard it is to obtain $r_r$.\n\n2. (Figure 5) I'm sorry, the clarification didn't help me. 
When the authors say: \"Once the forward-only approach catches the ball, it gets maximum reward by keeping the ball in the cup\", does it mean that the agent receives $R_max$ reward at each time step after the ball is in the cup? Let's say episodes are 100 time steps long, and that achieving the goal leads to a reward of +1. If the \"forward-only\" approach reaches the goal at time step 40, what is going to be the return of that episode? Will it be +1? What if the reset approach does the same thing, will it get a +1 at time step 40, reset, and get another +1 at time step, let's say, 80? I hope I was able to clarify my question. It seems to me that the number of steps each agent is using is being counted in an unfair way.\n\nMinor points: Thanks for addressing those; I think the paper is clearer after these changes.", "We thank Reviewer 4 for the comments and for finding our paper a “very elegant and general idea.” The main comments had to do with clarity - we have addressed these in the revised version. We would appreciate it if the reviewer would reevaluate the paper given the new clarifications.\n\n1. (State visitation) We implement our algorithm in a way that avoids having to test whether two states are equal (indeed, a challenging problem). In Equation 4, we define the set of safe states S_{reset} implicitly using the reset reward function r_r(s). In particular, we say a state is safe if the reset reward is greater than some threshold (0.7 in our experiments). For example, in the pusher task, the reset reward is the distance from a certain (x, y, z) point, so S_{reset} is the set of points within some distance of this point. We added a comment to line 13 of the algorithm clarifying how we test if a state is in S_{reset}.\n\n2. (Figure 5) We clarified our description of the environments in Section 6 paragraph 1 to note that the episode is not terminated when the agent reaches a goal state. For the experiment in Figure 5, the ‘forward-only’ baseline and our method are non-episodic - the environment is never reset and no hard resets are used (Section 6.1). The ‘status quo’ baseline is episodic, doing a hard reset every T time steps (for ball in cup, T = 200 steps). We reworded the confusing sentence about our method as follows: “In contrast, our method learns to solve this task by automatically resetting the environment after each attempt, so the forward policy can practice catching the ball without hard resets.” All results in the paper include time steps for both the forward task and the reset task. We clarified this in Section 6.1 paragraph 1. This highlights a strength of our method: even though the agent spends a considerable amount of time learning the reset task, it still learns to do the forward task in roughly the same number of total steps (steps for forward task + steps for reset task).\n\n\nMinor points:\n\n1. Citations. We’ve removed the extra parentheses for the Q-learning references. Generally, we use “e.g.” in citations when the citation is an example of the described behavior.\n\n2. Thanks for the additional references! We’ve included them in Section 2 paragraph 1.\n\n3. We chose to separate out the policy from the transition dynamics. Action a_{t} is sampled from \\pi(a_{t} | s_{t}) and depends on the policy; next state s_{t+1} is sampled from P(s_{t+1} | s_{t}, a_{t}) and depends on the transition dynamics.\n\n4. Good idea. We’ve changed “Act()” to “ChooseAction()” in Algorithm 1.\n\n5. For Figure 3, we agree confidence intervals would be helpful. 
We can’t regenerate the plot in the next 24 hours before the rebuttal deadline, but will include confidence intervals in the camera-ready version.", "Thank you for the comments! It seems that all the concerns have to do with the writing in the paper and are straightforward to fix. We have addressed all the concerns raised about the paper in this review. Given that all issues have been addressed, we would appreciate it if the reviewer could take another look at the paper.\n\n1. We have clarified our definition of reversible action in Section 1 paragraph 4. For deterministic MDPs, we say an action is reversible if it leads to a state from which there exists a reset policy that can return to a state with high density under the initial state distribution. For stochastic MDPs, we say an action is reversible if the probability that an oracle reset policy can reset from the next state is greater than some safety threshold. Note that the definition for deterministic MDPs is a special case of the definition for stochastic MDPs.\n\n2. The ability of an oracle reset policy to reset is a good notion of safety. In our algorithm, we approximate this notion of safety, assuming that whether our learned reset policy can reset in N episodes is a good proxy for whether an oracle reset policy can reset. We have clarified Section 1 paragraph 4 to make this distinction clear. We also added Appendix B to discuss handling errors in Q value estimation. In this section, we describe how Leave No Trace copes with overestimates and underestimates of Q values.\n\n3. We have corrected this technical error in Section 3 paragraph 2 by redefining the reset policy as being able to reach p_0 from any state reached by the forward policy. That our learned reset policy only learns to reset from states reached by the forward policy is indeed a limitation of our method. However, note that early aborts help the forward policy avoid visiting states from which the reset policy is unable to reach p_0.\n\n4. For the continuous control environments, the initial state distribution p_0 is a uniform distribution centered at a “start pose.” We use a discount factor \\gamma = 0.99. Both details have been noted in Appendix F.3 paragraph 2. The reset reward r_r is a hand-crafted approximation to p_0. For example, in the Ball in Cup environment, r_r is proportional to the negative L2 distance from the ball to the origin (below the cup). For cliff cheetah, r_r includes one term that is proportional to the distance of the cheetah to the origin, and another term indicating whether the cheetah is standing. S_{reset} is the set of states where r_r(s) is greater than 0.7 (Appendix C.3 paragraph 2).\n\n5. We have clarified Section 4.1 paragraph 1 to explain how our proposed algorithm handles both cases: states from which it is impossible to reset and states from which resetting would take prohibitively many steps. In both cases, the cumulative discounted reward (and hence the value function) will be low. By performing an early abort when the value function is low, we avoid both cases.\n\n6. We added a definition of “hard reset” to Section 4.2 paragraph 1: A hard reset is an action that resamples the state from the initial state distribution. Hard resets are available to an external agent (e.g. a human) but not the learned agent.\n\n7. We acknowledge that the proposed algorithm does not guarantee that we never visit unsafe states. 
In Appendix A, we have added a proof that Leave No Trace would only visit states that are safe in expectation if it had access to the true Q values. Appendix A.3 discusses the approximations we make in practice that can cause Leave No Trace to visit unsafe states. Finally, Appendix B discusses how Leave No Trace handles errors incurred by over/under-estimates of Q values.\n\n8. Newly added Appendix D visualizes the training dynamics by plotting the number of time steps in each episode before an early abort. Initially, early aborts occur near the initial state distribution, so the forward episode lengths are quite short. As the reset policy improves, early aborts occur further from the initial state, as indicated by longer forward episode lengths. Newly added Appendix B discusses how Leave No Trace handles errors incurred by over/under-estimates of Q values. It describes how Leave No Trace learns that an “unsafe” state is actually safe.\n\n9. We detect failed resets in line 12 of Algorithm 1. We have added a comment to help clarify this. When a failed reset is detected, a hard reset occurs (line 13).", "We thank AnonReviewer1 for noting the main goal of our paper and recognizing how we incorporate safety using the learned reset policy, and further thank the reviewer for finding our paper a “well considered and executed piece of work.” We’ve addressed the concerns raised by the reviewer with clarifications in the main text, which we detail below.\n\nAssumption that environment is reversible - This assumption is indeed a limitation of our approach, which we note in the paper (Section 4 paragraph 2). We have expanded this section to clarify this detail. We show experimentally that our method can be applied to a number of realistic tasks, such as locomotion and manipulation. Extending our work to tasks with irreversible goal states is a great idea, and would make for interesting future work.\n\nInitial state distribution - You correctly note that the reset policy might always reset to the same state, thus failing to sample from the full initial state distribution. We have corrected this technical error in the revised version of the paper (Section 4, paragraph 2) by adding the additional assumption that the initial state distribution be unimodal and have narrow support. We also expanded the discussion of “safe sets” in Section 4.2 paragraph 1 to clarify the difference between the initial state distribution, the reset policy’s reward, and the safe set. We also describe a method to detect if there is a mismatch between the initial state distribution and the reset policy’s final state distribution.\n\nQ functions - We learn an ensemble of Q functions, each of which is sampled from the posterior distribution over Q functions given the observed data. We expanded Section 4.4 paragraph 1 to note how this technique has been established in previous work (“Deep Exploration via Bootstrapped DQN” [Osband 2016] and “UCB Exploration via Q-Ensembles” [Chen 2017]). In general, we are not guaranteed that samples from a distribution are close to its mean. However, in our experiments on ensemble aggregation, the choice of aggregation (taking the min, mean or max over the Q functions) had little effect on policy reward. If “gross under/over-estimation” had occurred, taking the min/max over the ensemble would have resulted in markedly lower reward. We expanded Appendix A paragraph 2 to explain this finding.\n\nModel-based alternative - We appreciate the comment regarding a potential model-based alternative to our method. 
However, we are not aware of any past model-based methods for solving this task. We would be happy to attempt a comparison or add a discussion if the reviewer has a particular prior method in mind. The early aborts in our method provide one way of identifying irreversible states. A model-based alternative could also serve this function. We believe that our early aborts, which only require learning a single reset policy, are simpler than learning a model of the environment dynamics, and hypothesize that our approach will scale better to complex environments.", "We thank AnonReviewer3 for recognizing the importance of the problem we aim to solve, and for noting that our simple method is supported with “good ablation studies.” We have addressed the issues raised by the reviewer, as discussed below:\n\n1. We have run experiments on two additional environments (ball in cup and peg insertion), so the revised version of the paper shows experiments on 5 simulated environments (Section 6, paragraph 1). Videos visualizing the learned forward + reset policies are available on the project website: https://sites.google.com/site/mlleavenotrace/\nExperimental evaluation on five distinct domains compares favorably to most RL papers that have appeared in ICLR in the past. While we agree that real-world evaluation of our method would be excellent, this is going substantially beyond the typical evaluation for ICLR RL work.\n\n2. We ran additional environments that show that, in certain difficult situations, our method can solve tasks which a model without a reset policy cannot. Newly-added Sections 6.1 and 6.6 demonstrate this result in two settings. We summarize these results below in a separate post entitled “Additional Experiments.”\n\n3. We expanded Section 4 paragraph 2 to clarify our assumption that there exists a reversible goal state. This is indeed a limitation of our approach, which we note in the paper (Section 4 paragraph 2). We show experimentally that our method can be applied to a number of realistic tasks, such as locomotion and manipulation. Extending our work to tasks with irreversible goal states by resetting to intermediate goals is a great idea, and would make for interesting future work.", "We ran two additional experiments for the revised version of the paper. These experiments motivate learning a reset controller in two settings: when hard resets are available and when hard resets are not available.\n\nFirst, we consider the setting where hard resets are not available (Section 6.1). We use an environment (“ball in cup”) where the agent controls a cup attached to a ball with a string, and attempts to swing the ball up into the cup. The agent receives a reward of 1 if the ball is in the cup, and 0 otherwise. We don’t terminate the episode when the ball is caught, so the agent receives a larger reward the longer it keeps the ball in the cup. Using Leave No Trace, the agent repeatedly catches the ball and then pops the ball out of the cup to reset. In contrast, we compare to a baseline (“forward only”) that simply maximizes its environment reward. Once this baseline catches the ball during random exploration, it trivially maximizes its reward by doing nothing so the ball stays in the cup. However, this baseline has failed to learn to catch the ball when the ball is initialized below the cup. 
The video below illustrates the training dynamics of the baseline (left) and Leave No Trace (right): https://www.youtube.com/watch?v=yDcFMR59clI\n\nSecond, we consider the setting where hard resets are allowed (but ideally avoided) (Section 6.6). We use the peg insertion task, where the agent controls an arm with a peg at the end. The agent receives a reward of 1 when the peg is in the hole and 0 otherwise (plus a control penalty). Learning this task is very challenging because the agent receives no reward during exploration to guide the peg towards the hole. The baseline (“status quo”) starts each episode with the peg far from the hole, and is hard reset to this pose after each episode. Our approach uses Leave No Trace to solve this task in reverse, so the peg insertion task corresponds to the reset policy. (We compare to reverse curriculum learning [Florensa 2017] in Section 2 paragraph 3.) We start each episode with the peg inside the hole. The forward policy learns to remove the peg until an early abort occurs, at which point the reset policy inserts the peg back in the hole. As learning progresses, early aborts occur further and further from the hole. During evaluation, we initialize both the baseline and Leave No Trace with the peg far from the hole. Our method successfully inserts the peg while the baseline fails. Importantly, this result indicates that learning a reset policy enables our method to learn tasks it would otherwise be unable to solve, even when avoiding hard resets is not the primary goal." ]
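The reviews and responses above describe the two mechanisms at the core of Leave No Trace: early aborts, triggered when the reset policy's Q-values for the proposed forward action are low, and failed-reset detection, triggered when the reset policy cannot bring the reset reward above the threshold (0.7 in the responses) that defines S_reset. The sketch below only illustrates that logic under stated assumptions: the environment interface, the policies, the Q-ensemble, the pessimistic min aggregation, and the abort threshold Q_MIN are hypothetical placeholders, not the paper's implementation.

```python
# Hedged sketch of the early-abort / failed-reset logic discussed above.
# Assumptions: env.step(action) -> (state, reset_reward, done); forward_policy and
# reset_policy map states to actions; q_reset_ensemble is a list of Q(s, a) callables.

Q_MIN = 10.0       # early-abort threshold on reset Q-values (assumed, not from the paper)
R_RESET_OK = 0.7   # reset-reward threshold defining S_reset (value quoted in the responses)

def forward_then_reset(env, state, forward_policy, reset_policy,
                       q_reset_ensemble, max_steps):
    """One forward attempt with early aborts, followed by a reset attempt."""
    for _ in range(max_steps):
        action = forward_policy(state)
        # Pessimistic estimate of how resettable the state after `action` would be.
        q_reset = min(q(state, action) for q in q_reset_ensemble)
        if q_reset < Q_MIN:
            break  # early abort: stop the forward policy before a likely irreversible step
        state, _, done = env.step(action)
        if done:
            break

    # Reset attempt: failing to re-enter S_reset requests an expensive manual hard reset.
    reached_s_reset = False
    for _ in range(max_steps):
        state, reset_reward, _ = env.step(reset_policy(state))
        if reset_reward > R_RESET_OK:
            reached_s_reset = True
            break
    if not reached_s_reset:
        state = env.hard_reset()  # the manual reset the method tries to avoid
    return state
```

The discussion above also mentions allowing several reset attempts before declaring failure and aggregating the ensemble by mean or max instead of min; those variants change only a few lines of this sketch.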
[ 7, 6, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1vuO-bCW", "iclr_2018_S1vuO-bCW", "iclr_2018_S1vuO-bCW", "iclr_2018_S1vuO-bCW", "Hy80b8TXz", "By-F6QpmM", "B1Hs_K37G", "HyhCJN3Xf", "HkNYoQhQf", "SyqiSbomf", "Hko8GaGff", "ByMUO4qxG", "BJ0qmr9xf", "iclr_2018_S1vuO-bCW" ]
iclr_2018_BkabRiQpb
Consequentialist conditional cooperation in social dilemmas with imperfect information
Social dilemmas, where mutual cooperation can lead to high payoffs but participants face incentives to cheat, are ubiquitous in multi-agent interaction. We wish to construct agents that cooperate with pure cooperators, avoid exploitation by pure defectors, and incentivize cooperation from the rest. However, often the actions taken by a partner are (partially) unobserved or the consequences of individual actions are hard to predict. We show that in a large class of games good strategies can be constructed by conditioning one's behavior solely on outcomes (ie. one's past rewards). We call this consequentialist conditional cooperation. We show how to construct such strategies using deep reinforcement learning techniques and demonstrate, both analytically and experimentally, that they are effective in social dilemmas beyond simple matrix games. We also show the limitations of relying purely on consequences and discuss the need for understanding both the consequences of and the intentions behind an action.
accepted-poster-papers
The reviewer reactions to the initial manuscript were generally positive. They considered the paper to be well written and clear, providing an original contribution to learning to cooperate in multi-agent deep RL in imperfect domains. The reviewers raised a number of specific issues to address, including improved definitions and descriptions, and proper citations of related work. The authors have substantially revised the manuscript to address most or all of these issues. At this point, the only knock on this paper is that the findings seemed unsurprising from a game-theoretic or deep learning point of view. Pros: algorithmic contribution, technical quality, clarity Cons: no real surprises
train
[ "r1CyEt8Nf", "r1uWD5txM", "HkCdXWqlM", "B1GljO9xM", "Sy0tFIhMf", "rJYL9Uhzf", "Byb0FUnfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for clarifying some of the mentioned issues. With the introduced revision, you cover your ground very well. I believe that this paper offers a great basis for interesting further studies in this direction.", "This paper proposes a novel adaptive learning mechanism to improve results in ergodic cooperation games. The algorithm, tagged 'Consequentialist Conditional Cooperation', uses outcome-based accumulative rewards of different strategies established during prior training. Its core benefit is its adaptiveness towards diverse opposing player strategies (e.g. selfish, prosocial, CCC) while maintaining maximum reward.\n\nWhile the contribution is explored in all its technical complexity, fundamentally this algorithm exploits policies for selfish and prosocial strategies to determine expected rewards in a training phase. During operation it then switches its strategy depending on a dynamically-calculated threshold reward value (considering variation in agent-specific policies, initial game states and stochasticity of rewards) relative to the total reward of the played game instance. The work is contrasted to tit-for-tat approaches that require complete observability and operate based on expected future rewards. In addition to the observability, approximate Markov TFT (amTFT) methods are more processing-intense, since they fall back on a game's Q-function, as opposed to learned policies, making CCC a lightweight alternative. \n\nComments:\n\nThe findings suggest the effectiveness of that approach. In all experiments CCC-based agents fare better than agents operating based on a specific strategy. While performing worse than the amTFT approach and only working well for larger number of iterations, the outcome-based evaluation shows benefits. Specifically in the PPD game, the use of CCC produces interesting results; when paired with cooperate agents in the PPD game, CCC-based players produce higher overall reward than pairing cooperative players (see Figure 2, (d) & (e)). This should be explained. To improve the understanding of the CCC-based operation, it would further be worthwhile to provide an additional graph that shows the action choices of CCC agents over time to clarify behavioural characteristics and convergence performance.\n\nHowever, when paired with non-cooperative players in the risky PPD game, CCC players lead to an improvement of pay-offs by around 50 percent (see Figure 2, (e)), compared to payoff received between non-cooperative players (-28.4 vs. -18, relative to -5 for defection). This leads to the question: How much CCC perform compared to random policy selection? Given its reduction of processing-intensive and need for larger number of iterations, how much worse is the random choice (no processing, independent of iterations)? This is would be worthwhile to appreciate the benefit of the proposed approach.\n\nAnother point relates to the fishing game. The game is parameterized with the rewards of +1 and +3. What is the bases for these parameter choices? What would happen if the higher reward was +2, or more interestingly, if the game was extended to allow agents to fish medium-sized fish (+2), in addition to small and large fish. Here it would be interesting to see how CCC fares (in all combinations with cooperators and defectors).\n\nOverall, the paper is well-written and explores the technical details of the presented approach. 
The authors position the approach well within contemporary literature, both conceptually and using experimental evaluation, and are explicit about its strengths and limitations.\n\nPresentation aspects:\n- Minor typo: Page 2, last paragraph of Introduction: `... will act act identically.'\n- Figure 2 should be shifted to the next page, since it is not self-explanatory and requires more context.\n", "This paper studies learning to play two-player general-sum games with state (Markov games) with imperfect information. The idea is to learn to cooperate (think prisoner's dilemma) but in more complex domains. Generally, in repeated prisoner's dilemma, one can punish one's opponent for noncooperation. In this paper, they design an apporach to learn to cooperate in a more complex game, like a hybrid pong meets prisoner's dilemma game. This is fun but I did not find it particularly surprising from a game-theoretic or from a deep learning point of view. \n\nFrom a game-theoretic point of view, the paper begins with a game-theoretic analysis of a cooperative strategy for these markov games with imperfect information. It is basically a straightforward generalization of the idea of punishing, which is common in \"folk theorems\" from game theory, to give a particular equilibrium for cooperating in Markov games. Many Markov games do not have a cooperative equilibrium, so this paper restricts attention to those that do. Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so. When the game is symmetric, this might be \"the natural\" solution but in general it is far from clear why all players would want to maximize the total payoff. \n\nThe paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling. It is perhaps interesting that one can make deep learning learn to cooperate with imperfect information, but one could have illustrated the game theory equally well with other techniques.\n\nIn contrast, the paper \"Coco-Q: Learning in Stochastic Games with Side Payments\" by Sodomka et. al. is an example where they took a well-motivated game theoretic cooperative solution concept and explored how to implement that with reinforcement learning. I would think that generalizing such solution concepts to stochastic games and/or deep learning might be more interesting.\n\nIt should also be noted that I was asked to review another ICLR submission entitled \"MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING\" which amazingly introduced the same \"Pong Player’s Dilemma\" game as in this paper. \n\nNotice the following suspiciously similar paragraphs from the two papers:\n\nFrom \"MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING\":\nWe also look at an environment where strategies must be learned from raw pixels. We use the method\nof Tampuu et al. (2017) to alter the reward structure of Atari Pong so that whenever an agent scores a\npoint they receive a reward of 1 and the other player receives −2. We refer to this game as the Pong\nPlayer’s Dilemma (PPD). In the PPD the only (jointly) winning move is not to play. 
However, a fully\ncooperative agent can be exploited by a defector.\n\nFrom \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\":\nTo demonstrate this we follow the method of Tampuu et al. (2017) to construct a version of Atari Pong \nwhich makes the game into a social dilemma. In what we call the Pong Player’s Dilemma (PPD) when an agent \nscores they gain a reward of 1 but the partner receives a reward of −2. Thus, in the PPD the only (jointly) winning\nmove is not to play, but selfish agents are again tempted to defect and try to score points even though\nthis decreases total social reward. We see that CCC is a successful, robust, and simple strategy in this\ngame.", "The main result specifies a (trigger) strategy (CCC) and corresponding algorithm that leads to an efficient outcome in social dilemmas, the theoretical basis of which is provided by Theorem 1. This underscores an algorithm that uses a prosocial adjustment of the agents' rewards to encourage efficient behaviour. The paper makes a useful contribution in demonstrating convergence to efficient outcomes in social dilemmas without the need for agents to observe each other's actions. The paper is also clearly written and the theoretical result is accompanied by some supporting experiments. The numerical experiments show that using the CCC strategy leads to an increase in the proportion of efficient equilibrium outcomes. However, in order to solidify the experimental validation, the authors could consider a broader range of experimental evaluations. There are also a number of items that could be added that I believe would strengthen the contribution and novelty, in particular:\n\nSome highly relevant references on (prosocial) reward shaping in social dilemmas are missing, such as Babes, Munoz de Cote and Littman, 2008, and, for the (iterated) prisoner's dilemma, Vassiliades and Christodoulou, 2010, all of which provide important background material on the subject. In addition, it would be useful to see how the method put forward in the paper compares with other (reward-shaping) techniques within MARL (especially in the perfect information case in the pong players' dilemma (PPD) experiment) such as those already mentioned. The authors could, therefore, provide more detail in relating the contribution to these papers and other relevant past work and existing algorithms. \n\nThe paper also omits any formal discussion on the equilibrium concepts being used in the Markov game setting (e.g. Markov Perfect Equilibrium or Markov-Nash equilibrium), which leaves a notable gap in the theoretical analysis. \n\nThere are also some questions that, to me, remain unaddressed, namely:\n\ni. the model of the experiments, particularly a description of the structure of the pong players' dilemma in terms of the elements of the partially observed Markov game described in definition 1. In particular, what are the state space and transitions?\n\nii. the equilibrium concepts being considered, i.e. does the paper consider Markov perfect equilibria? Some analysis of the conditions under which the continuation equilibria (e.g. cooperation in the social dilemma) are expected to arise would also be beneficial.\n\niii. Although the formal discussion is concerned with Markov games (i.e. 
repeated games with stochastic transitions with multiple states) the experiments (particularly the PPD) appear to apply to repeated games (this could very much be cleared up with a formal description of the games in the experimental sections and the equilibrium concept being used). \n\niv. In part 1 of the proof of the main theorem, it seems unclear why the sign of the main inequality has changed after application of Cauchy convergence in probability (equation at the top of the page). As this is an important component of the proof of the main result, the paper would benefit from an explanation of this step?\n", "We thank the reviewer for their comments. They have pointed out weaknesses in our presentation of our results. We have edited the text substantially and hope that our contributions and claims are much clearer.\n\n**>>> R3 asks about the relationship of our work to prior work on reward shaping in MARL. **\n \nWe are happy to add these references to the main text and discuss them. One important thing to note is that the prior work mentioned by the reviewer has dealt with perfectly observed games rather than partially observed ones. \n \nWe have added a longer discussion of how our work is related to existing work on MARL, equilibrium finding, and reward shaping. \n\nWe specifically discuss one of the examples the reviewer gives: Babes et al. 2008 use reward shaping in the repeated Prisoner's Dilemma to construct an agent that does well against fixed opponents as well as can lead learner “followers” to cooperate with it. However, in order to do this they first need to compute a value function for a known “good” strategy (they use Pavlov, a variant of tit-for-tat) and use this for shaping. This is possible for the basic one (or multi-memory) PD but doesn't scale well to general Markov games (in particular partially observed ones). \n\nBy contrast, the CCC agent creates similar incentives by switching between two pre-computed strategies in a predictable way. The computation of these two strategies does not require anything other than standard self-play.\n\nCombining these ideas is an interesting direction for future research but beyond the scope of our paper.\n \n** **\n**>> R3 asks for formalization for the state spaces/transition functions/etc… in our games.**\n \nWe are happy to add more details of the games to the paper (as well as release the code upon publication). Our games do not permit a compact enumeration of the states and transitions (which is precisely why we are interested in moving beyond tabular methods to eg. deep RL). For example, in the PPD the full set of states is the set of RAM states in Atari Pong.\n\n**>> R3 asks about equilibrium concepts in our Markov game setting **\n\nWhile much existing work is framed in terms of finding good equilibria, our work is more related to questions raised by Axelrod (1984) who asks: if one is to enter a social dilemma with an (unknown) partner, how should one behave? 
\n\nThe work on tit-for-tat (TFT, and related strategies such as Win-Stay-Lose-Shift/Pavlov) comes up with the answer that one should play a strategy that is:\n\n * simple\n * nice (begins by cooperating)\n * not exploitable\n * forgiving (provides a way to return to cooperation after a defection)\n * incentivizes cooperation from its partner (that is, a partner who commits to cooperation will get a higher payoff than a partner who commits to defection)\n\nOur main contribution is to find a way to construct a strategy which satisfies the Axelrod desiderata in *partially observed* Markov games which require deep RL for function approximation.\n\nNote that these desiderata are different from equilibrium desiderata. For example, tit-for-tat, one of the most heavily studied strategies in the Prisoner's Dilemma, is actually not a symmetric equilibrium strategy (because the best response to TFT is always cooperate). Rather, these desiderata are about agent design or about good strategies to commit to.\n\nWe do not claim that CCC forms an equilibrium with itself as there may be local deviations to improve one's payoff, however, our theorem shows that the partner of a CCC agent maximizes its asymptotic payoff by playing a policy that cooperates with the CCC agent (thus we preserve TFT-like incentive properties).\n\nWe have edited the text to make our problem statement / results clearer.\n\nWe only focus on equilibrium for computational reasons in the case of the D policy for which we have made an assumption that (D,D) forms an equilibrium in policies which only condition on agent's observations (this is related to the notion of a belief free equilibrium in repeated games Ely et al. 2005). \n\n\n**>> R3 asks “it seems unclear why the sign of the main inequality has changed after application of Cauchy convergence in probability (equation at the top of the page)”**\n** **\nWe apologize for any confusion. The equation at the top of the page uses the convention \n\nP(X) < epsilon \n \nwhile the next equations use the notation \n \nP(~X) > (1-epsilon)\n \nWe have changed these to both use P(~X)>(1-epsilon) so that it is more clear.\n", "We have made several additions to the paper that were suggested by the reviewer (the paper updated paper can be viewed via the PDF link above). We think these suggestions make the contribution more clear. We thank the reviewer for these comments.\n\n>>> Specifically in the PPD game, the use of CCC produces interesting results; when paired with cooperate agents in the PPD game, CCC-based players produce higher overall reward than pairing cooperative players (see Figure 2, (d) & (e)). This should be explained. **\n \nThis is a good catch! Actually this is due to variance in the payoffs.\n \nThe 2 cooperator payoff is -.74 with a standard error (calculated assuming that each matchup is independent so using the empirical standard deviation / sqrt(n-1)) of +/- .56 while the 2 CCC payoff is -.22 with a CI of +/- .36. Thus, these payoffs are statistically indistinguishable. \n \nOn the other hand 2 defectors get a payoff of -5.8 with a standard error of 2.2, so (D,D) is quite statistically distinguishable from (C,C) or (CCC,CCC). \n\nThis stochasticity only occurs in the rPPD because of the random nature of the defect-payoff. A defection in the standard PPD means the partner loses 2 points deterministically whereas in the rPPD the partner only realizes a large loss of -20 with a relatively small probability of .1. 
\n\nIn other words, the standard errors of the mean payoffs in the other games (eg. Fishery, standard PPD) are tiny and can be ignored but do need to be acknowledged in the rPPD.\n \nIn the current version we have relegated the full tournament results to the appendix and edited what we show in the text to better reflect our problem definition.\n \n**>>> worthwhile to provide an additional graph that shows the action choices of CCC agents over time to clarify behavioural characteristics and convergence performance.**\n\nThis is a good suggestion. We have added a figure that shows trajectories of behavior of a CCC agent with a C or D partner for the Fishery game.\n\n** >>> How does CCC perform compared to random policy selection? **\n \nWe note that while random policy selection will yield approximately a payoff of .5*(D,D) + .5*(C,D) when paired with a defect partner, this random policy will no longer have the incentive properties of CCC, which are such that if our agent commits to CCC then their partner does better by cooperating than by defecting.\n\nBy comparison, if our agent commits to a random policy this incentive property no longer holds and so the best response for a partner is to always defect.\n \n**The game is parameterized with the rewards of +1 and +3. What is the basis for these parameter choices? What would happen if the higher reward was +2, or more interestingly, if the game was extended to allow agents to fish medium-sized fish (+2), in addition to small and large fish? Here it would be interesting to see how CCC fares (in all combinations with cooperators and defectors). **\n** **\nThe choice of +1/+3 was made rather arbitrarily (in most behavioral studies of cooperation the key question is one of the benefit/cost ratio; typically a ratio of 2:1 or 3:1 is used, and we simply copied that here). \n \nWe agree that more games are important for testing the robustness of CCC and other strategies for solving social dilemmas but we leave this for future work.\n\n**>> Typos/figure 2 clarity**\n** **\nWe thank the reviewer for these catches; we have fixed both of them.\n", "We thank the reviewer for their thorough review. We have made several changes in exposition and presentation. We hope these address the reviewer's concerns.\n\n>> The paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling. It is perhaps interesting that one can make deep learning learn to cooperate with imperfect information, but one could have illustrated the game theory equally well with other techniques.\n\nFrom reading the reviewers' comments, we realize that we should have been much clearer up front with our problem definition. We have edited the text substantially to make this clearer.\n\nOur goal is to consider a question of agent design: how should we build agents that can enter into social dilemmas and achieve high payoffs with partners that are themselves trying to achieve high payoffs?\n\nIn particular, we seek to answer this question for social dilemmas where actions of a partner are not observed.\n\nThis question is quite different from just making agents that cooperate all the time (since those agents are easily exploited by defectors). It is related to, but also different from, the question of computing cooperative equilibria. 
\n\nSee the reply to R3 above for a more in-depth discussion of the desiderata from past literature that we seek to satisfy in order to construct a “good” strategy for imperfect information social dilemmas.\n\n>> The reviewer asks whether maximizing the sum of the payoffs is the “right” solution\nWe agree with this criticism. In symmetric games (eg. bargaining) behavioral experiments show that people often view the symmetric sum of payoffs as a natural focal point, while in asymmetric games they do not (see eg. the chapter on bargaining in Kagel & Roth's Handbook of Experimental Economics or more recent work on inequality in social dilemmas, eg. Hauser, Kraft-Todd, Rand, Nowak & Norton 2016).\n \nThe question of how to choose the “correct” focal points for playing with humans comes down to asking what the “right” utility function should be for training the cooperative strategies (see Charness & Rabin (2002) for a generic utility function that can express many social goals). Note that CCC can then be applied using these new C strategies just as in the current work.\n\nHowever, figuring out the correct utility function to use here is far beyond the scope of this paper and is likely quite context dependent. This is an important direction for future research and we have made this point clear in the paper.\n\n\n>> Similar paragraphs\nWe are also the authors of the other paper; it is earlier/related work to this paper (in the sense that it asks about designing agents that can solve social dilemmas), though it covers substantially different ground (perfectly observed games).\n\nWe apologize if there is something unclear from the current text. We do not mean to imply that this paper (CCC) introduces the PPD. Rather, it is the earlier paper (amTFT) that introduces the PPD as an environment, and the CCC paper uses it for robustness checks.\n\nThe point of the PPD in this paper is to ask whether the other work is superseded by CCC. Indeed, the techniques proposed in the amTFT paper can ONLY work in perfectly observed games. \n\nBy contrast, CCC is a good strategy for imperfectly observed games. Since any perfectly observed game is trivially also an imperfectly observed one, one may think that CCC is just a strictly better strategy than amTFT (and thus the other paper is subsumed by this one).\n \nThe point of the PPD experiments in this paper is to show that there are classes of perfectly observed games where CCC performs similarly to amTFT (normal PPD) but there are also those where CCC fails but amTFT succeeds (risky PPD). \n\nWe have changed the text to make these points clearer and to attribute credit transparently.\n" ]
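The review and responses above describe CCC as switching, at play time, between a pre-trained cooperative policy and a pre-trained selfish policy depending on whether the agent's realized total reward is at least a threshold derived from the payoff it would expect under mutual cooperation. The sketch below only illustrates that switching rule; the policy objects, the per-step expected cooperative reward, and the variance tolerance are hypothetical placeholders, and the paper's actual threshold also accounts for the initial game state and policy stochasticity.

```python
# Hedged sketch of the consequentialist conditional cooperation (CCC) switching rule
# summarized in the review above. All names are illustrative placeholders.

def ccc_action(obs, t, my_total_reward, cooperative_policy, selfish_policy,
               expected_coop_reward_per_step, tolerance):
    """Cooperate while realized outcomes look like mutual cooperation;
    otherwise fall back to the pre-trained selfish policy."""
    threshold = expected_coop_reward_per_step * t - tolerance
    if my_total_reward >= threshold:
        return cooperative_policy(obs)   # outcomes consistent with a cooperating partner
    return selfish_policy(obs)           # outcomes suggest the partner is defecting
```

Because the rule conditions only on the agent's own accumulated reward, it needs no observation of the partner's actions, which is the property the review contrasts with amTFT.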
[ -1, 7, 5, 6, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1 ]
[ "rJYL9Uhzf", "iclr_2018_BkabRiQpb", "iclr_2018_BkabRiQpb", "iclr_2018_BkabRiQpb", "B1GljO9xM", "r1uWD5txM", "HkCdXWqlM" ]
iclr_2018_SkZxCk-0Z
Can Neural Networks Understand Logical Entailment?
We introduce a new dataset of logical entailments for the purpose of measuring models' ability to capture and exploit the structure of logical expressions against an entailment prediction task. We use this task to compare a series of architectures which are ubiquitous in the sequence-processing literature, in addition to a new model class---PossibleWorldNets---which computes entailment as a ``convolution over possible worlds''. Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, tree-structured neural networks outperform LSTM RNNs due to their enhanced ability to exploit the syntax of logic, and PossibleWorldNets outperform all benchmarks.
accepted-poster-papers
This paper studies the problem of modeling logical structure in a neural model. It introduces a data set for probing various existing models and proposes a new model that addresses shortcomings in existing ones. The reviewers point out that there is a bit of a tautology in introducing a new task and a new model that solves it. The revised version addresses some of those concerns. Overall, it is a thought-provoking and well-written study that will be interesting to discuss at ICLR.
train
[ "HyKlaaFxf", "ByyL5O_ef", "Synyi3YxM", "SJaK9o5fz", "B19KA55GM", "HJitaq9zz", "rk5QwLh1f", "BJiNJMsyf", "Skm7Jfokz", "ryDJkMoyz", "rJWTC-s1G", "BkaIlVcCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "Overall, the paper is well-written and the proposed model is quite intuitive. Specifically, the idea is to represent entailment as a product of continuous functions over possible worlds. Specifically, the idea is to generate possible worlds, and compute the functions that encode entailment in those worlds. The functions themselves are designed as tree neural networks to take advantage of logical structure. Several different encoding benchmarks of the entailment task are designed to compare against the performance of the proposed model, using a newly created dataset. The results seem very impressive with > 99% accuracy on tests sets.\n\nOne weakness with the paper was that it was only tested on 1 dataset. Also, should some form of cross-validation be applied to smooth out variance in the evaluation results. I am not sure if there are standard \"shared\" datasets for this task, which would make the results much stronger.\nAlso how about the tradeoff, i.e., does training time significantly increase when we \"imagine\" more worlds. Also, in general, a discussion on the efficiency of training the proposed model as compared to TreeNN would be helpful.\nThe size of the world vectors, I would believe is quite important, so maybe a more detailed analysis on how this was chosen is important to replicate the results.\nThis problem, I think, is quite related to model counting. There has been a lot of work on model counting. a discussion on how this relates to those lines of work would be interesting.\n\n\nAfter revision\n\nI think the authors have improved the experiments substantially.", "SUMMARY \n\nThe paper is fairly broad in what it is trying to achieve, but the approach is well thought out. The purpose of the paper is to investigate the effectiveness of prior machine learning methods with predicting logical entailment and then provide a new model designed for the task. Explicitly, the paper asks the following questions: \"Can neural networks understand logical formula well enough to detect entailment?\", and \"Which architectures are best at inferring, encoding, and relating features in a purely structural sequence-based problem?\". The goals of the paper is to understand the learning bias of current architectures when they are tasked with learning logical entailment. The proposed network architecture, PossibleWorldNet, is then viewed as an improvement on an earlier architecture TreeNet.\n\nPOSITIVES \n\nThe structure of this paper was very well done. The paper attempts to do a lot, and succeeds on most fronts. The generated dataset used for testing logical entailment is given a constructive description which allows for future replication. The baseline benchmark networks are covered in depth and the reader is provided with a deep understanding on the limitations of some networks with regard to exploiting structure in data. The PossibleWorldNets is also given good coverage, and the equations provided show the means by which it operates.\n• A clear methodological approach to the research. The paper covers how they created a dataset which can be used for logical entailment learning, and then explains clearly all the previous network models which will be used in testing as well as their proposed model.\n• The background information regarding each model was exceptionally thorough. 
The paper went into great depth describing the pros and cons of earlier network models and why they may struggle with recognizing logical entailment.\n• The section describing the creation of a dataset captures the basis for the research, learning logical entailment. They describe the creation of the data, as well as the means by which they increase the difficulty for learning.\n• The paper provides an in depth description of their PossibleWorldNet model, and during experimentation we see clear evidence of the models capabilities.\n\nNEGATIVES\n\nOne issue I had with the paper is regarding the creation of the logical entailment dataset. Not so much for how they explained the process of creating the dataset, that was very thorough, but the fact that this dataset was the only means to test the previous network models and their new proposed network model. I wonder if it would be better to find non-generated datasets which may contain data that have entailment relationships. It is questionable if their hand crafted network model is learned best on their hand crafted dataset.\n\nThe use of a singular dataset for learning logical entailment. The dataset was also created by the researchers for the express purpose of testing neural network capacity to learn logical entailment. I am hesitant to say their proposed network is an incredible achievement since PossibleWorldNet effectively beat out other methods on a dataset that they created expressly for it.\n\nRELATED WORK \n\nThe paper has an extensive section dedicated to covering related work. I would say the research involved was very thorough and the researchers understood how their method was different as well as how it was improving on earlier approaches.\n\nCONCLUSION \n\nGiven the thorough investigation into previous networks’ capabilities in logical entailment learning, I would accept this paper as a valid scientific contribution. The paper performs a thorough analysis on the limitations that previous networks face with regard to exploiting structure from data. The paper also covers results of the experiments by not only pointing out their proposed network’s success, but by analyzing why certain earlier network models were able to achieve competitive learning results. The structure of the PossibleWorldNet was also explained well, and during ex- perimentation demonstrated its ability to learn structure from data. The paper would have been improved through testing of multiple datasets, and not just on there self generated dataset, but the contribution of their research on their network and older networks is still justification enough for this paper.", "This is a wonderful and a self-contained paper. In fact, it introduces a very important problem and it solves it. \n\nThe major point of the paper is demonstrating that it is possible to model logical entailment in neural networks. Hence, a corpus and a NN model are introduced. The corpus is used to demonstrate that the model, named PossibleWorld, is nearly perfect for the task. A comparative analysis is done with respect to state of the art recurrent NN. So far, so good.\n\nYet, what is the take home message? In my opinion, the message is that generic NN should not be used for specific formal tasks whereas specific neural networks that model the task are desirable. This seems to be a trivial claim, but, since the PossibleWorld nearly completely solves the task, it is worth to be investigated. 
\n\nThe point that the paper leaves unexplained is: what is in the PossibleWorld Network that captures what we need? The description of the network is in fact very criptic. No examples are given and a major effort is required to the reader. Can you provide examples and insights on why this is THE needed model?\n\nFinally, the paper does not discuss a large body of research that has been done in the past by Plate. Plate has investigated how symbolic predicates can be described in distributed representations. This is strictly related to the problem this paper investigates. As discussed in \"Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey\", 2017, the link between symbolic and distributed representations has to be better investigated in order to propose innovative NN models. Your paper can be one of the first NN model that takes advantage of this strict link.", "Thank you for your kind words at the beginning of the review, and for your excellent questions and comments. We find the topics addressed in your questions, and your critical points are–we think–fair ones. We confess we were a little surprised by the low score given, considering the generally positive tone of the first half of the review, but this (along with the comments of other reviewers) has prompted us to rethink the evaluation of the models in order to address your specific points and hopefully assuage your concerns. We have made revisions to the manuscript to include further tests of our already-trained models, and we respond to parts of your review below.\n\n“The point that the paper leaves unexplained is: what is in the PossibleWorld Network that captures what we need? The description of the network is in fact very criptic. No examples are given and a major effort is required to the reader. “\n\nLimitations of space prevented us providing examples in the body of the text. Here is a simple example from propositional logic. To check whether p ∨ q entails p, we consider all possible truth-value assignments to the variables p and q. We get four assignments: \np → ⊥, q → ⊥\np → ⊥, q → ⊤\np → ⊤, q → ⊥\np → ⊤, q → ⊤\n\nNow p ∨ q entails p if, for each of these assignments, if the assignment satisfies p ∨ q, then it also satisfies p. In this example, the entailment is false, since the second assignment (p → ⊥, q → ⊤) satisfies p ∨ q but does not satisfy p.\n\nWe will endeavour to add such examples to the appendix if the paper is accepted.\n\n“Can you provide examples and insights on why this is THE needed model?”\n\nConsider the standard model-theoretic definition of entailment: A entails B if, for every possible world w, if sat(w, A) then sat(w, B):\n\nA ⊧ B iff for every world w ∈ W, sat(w, A) implies sat(w, B)\n\nWe replace possible worlds with random vectors, transform the universal quantification into a product, and provide a neural network that implements sat(w, A). The reason we believe this is *the* needed model is that it is a continuous relaxation of the standard model-theoretic definition of entailment. \n\n\"In my opinion, the message is that generic NN should not be used for specific formal tasks whereas specific neural networks that model the task are desirable. This seems to be a trivial claim.\"\n\nAt this high level of generality, the claim is, indeed, trivial. But our claim is more specific. We provide implementation details of a particular model that outperforms other models on this task. 
This model is also applicable outside the particular domain of logic.\n\nThe PossibleWorldNet was inspired by the model-theoretic definition of entailment, in terms of truth in all possible worlds. But it is not a specific model that is only useful for this particular task. It is a general model based on the following simple idea: first, evaluate the same model multiple times using different vectors of random noise as inputs; second, combine the results from these multiple runs using a product. This general model is applicable outside the domain of logical entailment; it could be useful for building robust image classifiers, for example. \n\nSince the initial submission, we have run a number of experiments that are significantly more ambitious . See the updated Table 1 on page 3. In the “Big” and “Massive” test sets, the expected number of truth-table rows needed to exhaustively verify entailment is 3000 and 800,000. Our PossibleWorldNet continues to out-perform the other models on these harder test-sets, but it does not completely solve the task. In particular, in the massive test set, it achieves 73%. This score is significantly better than the other models, but it is not a complete solution.\n\nWe hope that these explanations, which have been integrated into the revised manuscript, alongside the inclusion of further tests, without need to change the training protocol or model definitions, on significantly more complex logic problems and \"real-world\" exam data, will help convince you that this work merits publication.\n\nIf you persist in your assessment, we will understand, but would be grateful if you could highlight what is lacking given this further empirical evidence provided in this revision, so that we may continue to improve the paper.", "Thank you for your supportive review and your kind comments. Based on questions you have raised with other reviewers, we have run further tests which we hope confirms your positive sentiment about the paper, and addressed any concerns you had about the testing regime used in the paper. We address here your specific, and very fair, criticism of our paper.\n\n“One issue I had with the paper is regarding the creation of the logical entailment dataset. Not so much for how they explained the process of creating the dataset, that was very thorough, but the fact that this dataset was the only means to test the previous network models and their new proposed network model. I wonder if it would be better to find non-generated datasets which may contain data that have entailment relationships. It is questionable if their hand crafted network model is learned best on their hand crafted dataset.”\n\nThis is a good point. Since the initial submission, we have run a number of further experiments. In particular, we mined standard logic textbooks (e.g., Holbach’s “The Logic Manual”, Mendelson’s “Introduction to Mathematical Logic”) to find a set of entailment questions that were not produced from our synthetic generative process. We then held-out these entailments (and all entailments that were equivalent up to variable-renaming) from the training sets. The new test-set is called “Test (Exam)” in the revised Table 1 on page 3. We were gratified that our model achieved 96% on this “real-world” test-set. 
See the updated Table 2, reproduced below, including a variety of new larger test sets also described in this revision.\n\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| | | test | test | test | test | test |\n| model | valid|easy | hard | big | mass. | exam|\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| Linear BoW | 52.6 | 51.4 | 50.0 | 49.7 | 50.0 | 52.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| MLP BoW | 57.8 | 57.1 | 51.0 | 55.8 | 49.9 | 56.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| ConvNet Enc. | 59.3 | 59.7 | 52.6 |54.9 | 50.4 | 54.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| LSTM Enc. | 68.3 | 68.3 | 58.1 | 61.1 | 52.7 | 70.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| BiLSTM Enc. | 66.6 | 65.8 | 58.2 | 61.5 | 51.6 | 78.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| TreeNet Enc. | 72.7 | 72.2 | 69.7 | 67.9 | 56.6 | 85.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| TreeLSTM Enc| 79.1 | 77.8 | 74.2 | 74.2 | 59.3 | 75.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| LSTM Trav. | 62.5 | 61.8 | 56.2 | 57.3 | 50.6 | 61.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| TreeLSTM Tr. | 63.3 | 64.0 | 55.0 | 57.9 | 50.5 | 66.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| PWN | 98.7 | 98.6 | 96.7 | 93.9 | 73.4 | 96.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n", "Thank you for your comments and fair criticisms. We have run substantial further evaluation of the previously trained models, which we hope will strengthen the case for this paper. We reply to some of the points you made in your review below, and hope you will find that the empirical evidence satisfactorily addressed the concerns you have raised.\n\n“One weakness with the paper was that it was only tested on 1 dataset. Also, should some form of cross-validation be applied to smooth out variance in the evaluation results. I am not sure if there are standard \"shared\" datasets for this task, which would make the results much stronger.\n\nThis is a good point. To address this question, we have generated two other test sets. The first one, Test (big) has 1-20 variables and 10-30 operators per formula. The second, Test (massive) has 20-26 variables with 20-30 operators per formula. Finally, we collected a \"real world\" test set, Test (exam) from formulas found in textbook and exam questions, pruning sequents from the training set that were alpha-equivalent to sequents found in exam data. See the updated Table 1 for the new test-sets, and Table 2 for the updated results. In particular, there is still a gap between what is achieved by our best models and what is theoretically possible (> 25% accuracy gap) for the massive dataset, showing that further research on this topic is needed, and is hopefully enabled by this dataset.\n\n“Also how about the tradeoff, i.e., does training time significantly increase when we \"imagine\" more worlds. “\n\nYes, the model takes longer to run (in terms of time) as we increase the number of worlds, since we need to evaluate the formulas in every world. But in terms of the number of training epochs, it does not take longer to run. 
One of the interesting things about the PossibleWorldNet is that the number of parameters (trainable variables) does not increase as we increase the number of worlds, nor does the model see more data. It just does more parallel computation per data point.\n\n“This problem, I think, is quite related to model counting. There has been a lot of work on model counting. a discussion on how this relates to those lines of work would be interesting.”\n\nThanks, this is a good suggestion. We will certainly look into this for the final version.\n", "There is a typo in our response:\n\"For the test (huge) test set, there are nearly 8500,000 truth table rows\"\nThis should read\n\"For the test (huge) test set, there are nearly 850,000 truth table rows\"", "4. In section 3.3, you mention that $w_i \\in \\mathbb{R}^{k}$. Is $k$\nequal to the number of variables in each formula? If so, does this\nmean that $w_i$ is a binary vector that indicates the possible values\nof each of the variables in the formula?\n\n$w_i$ is not a binary vector of truth-value assignments. It is a vector of reals, where each real value is uniformly sampled. Each world $w_i$ is just a vector of random noise. \n\nIn an early experiment, we tried setting the $w_i$s to be vectors of Booleans, corresponding directly to truth-value assignments. We moved away from Boolean vectors because we wanted a neural model that was not tied specifically to propositional logic, that should be applicable to other logics (e.g. modal logics, first-order logic).\n\nThe size of the world vectors, $k$, is a hyper-parameter. It is currently arbitrarily set to 26, but it doesn’t have to be. In future experimental runs, we would be curious to vary this hyper-parameter and plot performance as a function of its size, but have not had the time to do this yet.\n\n5. When creating Figure 1, which formulas were considered in training?\nIf my understanding is correct in question 4, then you need formulas\nthat have at least 8 variables to be able to generate 256 different\nworlds for each formula. This covers all possible configurations of\nthe input formula for 8 variables. Is this a correct understanding?\n\nThe number of worlds is not determined by the number of variables in any particular formula. Rather, the number of worlds is a hyperparameter.\n\n6. As you have mentioned in the paper, TreeLSTMs reveal the best\nresults among your benchmarks. But your possibleWorldNets use\ntreeNets. Was there a particular reason you chose TreeNets? Did you\nalso try out TreeLSTMs with your PossibleWorldNets?\n\nWe chose the TreeNet because it was the best performing benchmark we had. The central idea behind the PossibleWorldNet, implementing a continuous relaxation of model-checking, is applicable to any architecture. In future work, we plan to combine the PossibleWorldNet with other architectures, and compare the results.\n\n7. In section 5 you mention that the reason why BiDirLSTM is doing\nworse than LSTM Encoders is the fact that the BiDirLSTM is\noverfitting. Was this the case for even less number of parameters for\nthe BiDirLSTM? Is it also possible to conclude that the reason for\nBiDirLSTMs not being as effective, is the fact that it might not be\nuseful to parse the formulas in reverse order for logical entailment?\nIf I am not mistaken the reason why BiDirLSTMs do well compared to\nLSTMs, on e.g. NLP tasks is that words that appear later in the\nsentence might have a connection to words that appear earlier in the\nsentence. 
Is it correct to conclude that this is not the case for\nlogical entailment?\n\nWe are uncertain how to interpret this particular result, as architecturally a bidirectional LSTM subsumes a unidirectional LSTM. We are not committed to this being an overfitting issue in particular, but clearly this architectural variant is more difficult to optimise in this setting. The same intuition as to why they work well as encoders in NLP tasks should apply here, and we hope further work will help to elucidate how to properly optimise bidirectional models on this task.", "3. In section 5, the last paragraph, you compute the total number of\npossible truth-value assignments using 26 variables. If there are on\naverage 4.5 variables in each formula, shouldn't the number of\npossible truth-values on average be $2^{4.5}$?\n\nWe acknowledge that the last paragraph in section 5 is misleading. This paragraph will be rewritten based on the new results below.\n\nWe report, in Table 1, the mean number of variables for each section of dataset, as well as the mean number of rows needed to compute the truth tables for expressions from that section (2^{# Vars}). We note that the average number of rows is the average of 2^{# Vars}, rather than two to the power of the average number of variables. (The former is larger than the latter). At the end of the paragraph, we place an upper bound on the number of value assignments that the PossibleWorldNet (PWN) would need to iterate over (in the discrete case) to properly check each case.\n\nYour question serves to highlight that the average number of rows a truth table-based method would need to compute is, for our test sets, lower than the number of possible worlds the PWN views when making predictions. To this end, we have run-all models on larger test sets described in our answer to question 2, alongside an extra test set of human-answerable questions. We tabulate here the new results, which differ in places from the results presented in the paper as the training set has had simple examples removed from it.\n\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| | | test | test | test | test | test |\n| model | valid|easy | hard | big | mass. | exam|\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| Linear BoW | 52.6 | 51.4 | 50.0 | 49.7 | 50.0 | 52.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| MLP BoW | 57.8 | 57.1 | 51.0 | 55.8 | 49.9 | 56.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| ConvNet Enc. | 59.3 | 59.7 | 52.6 |54.9 | 50.4 | 54.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| LSTM Enc. | 68.3 | 68.3 | 58.1 | 61.1 | 52.7 | 70.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| BiLSTM Enc. | 66.6 | 65.8 | 58.2 | 61.5 | 51.6 | 78.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| TreeNet Enc. | 72.7 | 72.2 | 69.7 | 67.9 | 56.6 | 85.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| TreeLSTM Enc| 79.1 | 77.8 | 74.2 | 74.2 | 59.3 | 75.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| LSTM Trav. | 62.5 | 61.8 | 56.2 | 57.3 | 50.6 | 61.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| TreeLSTM Tr. 
| 63.3 | 64.0 | 55.0 | 57.9 | 50.5 | 66.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n| PWN | 98.7 | 98.6 | 96.7 | 93.9 | 73.4 | 96.0 |\n+--------------------+-------+--------+--------+-------+-----------+--------+\n\nWe observe that for the larger datasets, the tree-structured networks stay clearly ahead of the other benchmarks, and the PossibleWorldNet takes the top spot for all test sets, including the exams. For the test (huge) test set, there are nearly 8500,000 truth table rows to check on average, and yet the possible world net performs competitively with only 256 \"worlds\" per forward pass, showing that it is not replicating the brute-force verification method for detecting entailment within its activations. Even in the test (big) test set, with 3310 rows to check per sequent on average, a 256 world PWN nearly solves the problem.", "2. I noticed that your test sets do not have beyond 10 variables. Is\nthere a particular reason why you don't test on the more complex\nformulas including more variables?\n\nThe only reason for choosing 1-10 variables in the paper was to limit the time it takes to generate the training/test data. Recall, that to generate a hard/unbiased dataset, we wanted to find four-tuples satisfying the four entailment conditions which ensures that the task cannot be solved with simply finding structural biases. As the number of variables increases, the chance of finding a four-tuple (A₁, B₁, A₂, B₂) satisfying the four conditions drops. Having said that, the 4-tuple constraint need not be imposed on test data, as it is only a useful constraint during training to prevent degenerate solutions. \n\nTo address your question, we have generated two other test sets. The first one, Test (big) has 1-20 variables and 10-30 operators per formula. The second, Test (massive) has 20-26 variables with 20-30 operators per formula. Finally, we collected a \"real world\" test set, Test (exam) from formulas found in textbook and exam questions, pruning sequents from the training set that were alpha-equivalent to sequents found in exam data. This has the effect of removing some of the simplest examples from the training data, requiring us to re-run training and evaluation for all models. The statistics of these new datasets are found in the table below (which will hopefully stay somewhat formatted in the comment), which will replace Table 1 in future versions of the paper. We present and discuss model evaluation against the tests sets in our answer to question 3, below. 
As can be seen from the results, the proposed PWN model continues to be far superior to all other baselines on the more challenging datasets.\n\n+----------------+--------+--------+-------+--------+------------+\n| | |Mean|Mean|Mean|Mean |\n| | Size |#Vars|#Ops|Len |2^#Vars|\n+----------------+--------+--------+-------+--------+------------+\n| Train |99,876| 4.5 | 5.3 | 11.3 | 52.2 |\n+----------------+--------+--------+-------+--------+------------+\n| Validate | 5,000 | 5.1 | 6.8 | 13.0 | 75.7 |\n+----------------+--------+--------+-------+--------+------------+\n| Test (easy) | 5,000| 5.2 | 6.9 | 13.1 | 81.0 |\n+----------------+--------+--------+-------+--------+------------+\n| Test (hard)| 5,000 | 5.8 | 17.4 | 31.5 | 184.4 |\n+----------------+--------+--------+-------+--------+------------+\n| Test (big) | 5,000 | 8.0 | 20.9 | 38.7 | 3310.8 |\n+----------------+--------+--------+-------+--------+------------+\n| Test (mass.)| 2,230| 18.4 | 49.4 | 88.8 |848,670.0|\n+----------------+--------+--------+-------+--------+------------+\n| Test (exam)| 100 | 2.4 | 3.9 | 8.6 | 5.8 |\n+----------------+--------+--------+-------+--------+------------+", "Thank you for these thoughtful questions, and for your patience in awaiting our response. This gives us the opportunity to clarify some of the things that should have been clearer in the paper. We copy your questions followed by our answers, for readability. These will be split across four comments due to OpenReview's comment character limits.\n\n1. Variables in the formulas: in section 2.1 you mention that you have\n26 variables in total. Does this mean that each formula has 26\nvariables? I suspect that this is not the case since you state in\nTable 1 that the average number of variables is 4.5 in the train set.\nIf this assumption is correct, then can you point out how many of your\nformulas out of the 100,000 have 26 variables in them?\n\nYes, no formula has all 26 variables in it. We have a pool of 26 variables: {a, …, z} in total. A formula is generated by (i) first sampling between 1 and 10 propositional variables from {a, …, z}; call this temporary set of variables V and then (ii) generating a formula of the desired operator-complexity; each time the sampling procedure needs a variable, it chooses one of the variables in V. It is possible (in fact frequently the case) when generating a formula that the sampling procedure will not use all the variables in V.\n\nFor example, we generate a set of 10 variables V = [b, c, d, j, m, p, q, s, v, x], and then generate a formula with 10 operators in it, using V. We get: ¬(((p ∨ m) ∧ (p ∨ d)) → ¬((j → c) ∧ ¬¬p))\nThis only uses five of the ten variables in it. ", "Dear authors,\nThank you for your nice work, I enjoyed reading your paper. I have several questions about your paper and I appreciate if you can answer them to make things more clear.\n\n1. Variables in the formulas: in section 2.1 you mention that you have 26 variables in total. Does this mean that each formula has 26 variables? I suspect that this is not the case since you state in Table 1 that the average number of variables is 4.5 in the train set. If this assumption is correct, then can you point out how many of your formulas out of the 100,000 have 26 variables in them? \n\n2. I noticed that your test sets do not have beyond 10 variables. Is there a particular reason why you don't test on the more complex formulas including more variables?\n\n3. 
In section 5, the last paragraph, you compute the total number of possible truth-value assignments using 26 variables. If there are on average 4.5 variables in each formula, shouldn't the number of possible truth-values on average be $2^{4.5}$?\n\n4. In section 3.3, you mention that $w_i \\in \\mathbb{R}^{k}$. Is $k$ equal to the number of variables in each formula? If so, does this mean that $w_i$ is a binary vector that indicates the possible values of each of the variables in the formula?\n\n5. When creating Figure 1, which formulas were considered in training? If my understanding is correct in question 4, then you need formulas that have at least 8 variables to be able to generate 256 different worlds for each formula. This covers all possible configurations of the input formula for 8 variables. Is this a correct understanding?\n\n6. As you have mentioned in the paper, TreeLSTMs reveal the best results among your benchmarks. But your possibleWorldNets use treeNets. Was there a particular reason you chose TreeNets? Did you also try out TreeLSTMs with your possibleWorldNets?\n\n7. In section 5 you mention that the reason why BiDirLSTM is doing worse than LSTM Encoders is the fact that the BiDirLSTM is overfitting. Was this the case for even less number of parameters for the BiDirLSTM? Is it also possible to conclude that the reason for BiDirLSTMs not being as effective, is the fact that it might not be useful to parse the formulas in reverse order for logical entailment? If I am not mistaken the reason why BiDirLSTMs do well compared to LSTMs, on e.g. NLP tasks is that words that appear later in the sentence might have a connection to words that appear earlier in the sentence. Is it correct to conclude that this is not the case for logical entailment?" ]
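To make the contrast discussed in the exchange above concrete — exact model checking over all 2^n truth-value assignments versus the PossibleWorldNet-style relaxation over a fixed number of sampled worlds — here is a rough Python sketch. It is only an illustration: the nested-tuple encoding of formulas, the sigmoid stand-in for the learned satisfaction network, and the soft implication used to combine per-world scores are assumptions made for this sketch, not the dataset's encoding or the paper's exact architecture.

import numpy as np
from itertools import product

def evaluate(formula, world):
    """Evaluate a propositional formula, e.g. ('or', 'p', 'q'), under a truth assignment (dict var -> bool)."""
    if isinstance(formula, str):
        return world[formula]
    op, *args = formula
    if op == 'not':
        return not evaluate(args[0], world)
    if op == 'and':
        return evaluate(args[0], world) and evaluate(args[1], world)
    if op == 'or':
        return evaluate(args[0], world) or evaluate(args[1], world)
    if op == 'implies':
        return (not evaluate(args[0], world)) or evaluate(args[1], world)
    raise ValueError(op)

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(arg) for arg in formula[1:]))

def entails(a, b):
    """Exact check: A |= B iff every assignment satisfying A also satisfies B (2^n rows for n variables)."""
    vs = sorted(variables(a) | variables(b))
    for values in product([False, True], repeat=len(vs)):
        world = dict(zip(vs, values))
        if evaluate(a, world) and not evaluate(b, world):
            return False
    return True

def possible_world_score(enc_a, enc_b, sat_net, num_worlds=256, world_dim=26, seed=0):
    """Continuous relaxation in the spirit of the PossibleWorldNet: score both formulas in randomly
    sampled real-valued 'worlds' and combine per-world soft implications with a product."""
    rng = np.random.default_rng(seed)
    score = 1.0
    for _ in range(num_worlds):
        w = rng.uniform(-1.0, 1.0, size=world_dim)
        s_a, s_b = sat_net(w, enc_a), sat_net(w, enc_b)
        score *= 1.0 - s_a * (1.0 - s_b)   # soft version of "if w satisfies A then w satisfies B"
    return score

# The worked example from the responses above: p OR q does NOT entail p.
print(entails(('or', 'p', 'q'), 'p'))   # False: the row p=False, q=True is a counterexample

# Toy stand-in for a learned satisfaction network (a real one would be a TreeNet over the parse).
toy_sat = lambda w, enc: 1.0 / (1.0 + np.exp(-float(np.dot(w, enc))))
print(possible_world_score(np.ones(26) * 0.1, np.ones(26) * -0.1, toy_sat))

The exact checker is what the 2^#Vars statistics quoted above count; the relaxation instead pays a fixed cost of num_worlds evaluations per sequent, regardless of how many variables the formulas contain.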
[ 7, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkZxCk-0Z", "iclr_2018_SkZxCk-0Z", "iclr_2018_SkZxCk-0Z", "Synyi3YxM", "ByyL5O_ef", "HyKlaaFxf", "Skm7Jfokz", "BkaIlVcCZ", "BkaIlVcCZ", "BkaIlVcCZ", "BkaIlVcCZ", "iclr_2018_SkZxCk-0Z" ]
iclr_2018_HyRVBzap-
Cascade Adversarial Machine Learning Regularized with a Unified Embedding
Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspired by this observation, we propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained. We also propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhances the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.
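For readers unfamiliar with the attack terminology used throughout this record, the sketch below illustrates the two kinds of adversarial examples the abstract distinguishes: one-step examples and iteratively generated ones. It is a hedged PyTorch-style sketch rather than the paper's implementation; the step size, iteration count, the [0, 1] pixel range, and the assumption that the model maps images to logits are illustrative choices. In cascade adversarial training, as described above, the one-step examples would come from the network being trained, while the iterative examples are crafted against an already defended source network.

import torch
import torch.nn.functional as F

def one_step_attack(model, x, y, eps):
    """One-step (FGSM-style) example: move each pixel by eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # model is assumed to return class logits
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def iterative_attack(model, x, y, eps, step=1.0 / 255, n_steps=10):
    """Iterative example: repeated small one-step perturbations, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv = one_step_attack(model, x_adv, y, step)
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # stay within L_inf distance eps of the original
    return x_adv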
accepted-poster-papers
This paper forms a good contribution to the active area of adversarial training. The main issue with the original submission was presentation quality and excessive length. The revised version is much improved. However, it still needs some work on the writing, in large part in the transferability section but also to clean up a large number of non-native formulations like missing/extra determiners and some awkward phrasing. It should be carefully proofread by a native English speaker if possible. Also, the citation formatting is incorrect (frequently using \citet instead of \citep).
train
[ "Hy7Gjh9eM", "HyPfhQ2ez", "HkdIMPhez", "HkBR-RcXz", "Hko4LECMG", "S1nCrEAMM", "SJ0PSV0GM", "SywnZV0MG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors proposed to supplement adversarial training with an additional regularization that forces the embeddings of clean and adversarial inputs to be similar. The authors demonstrate on MNIST and CIFAR that the added regularization leads to more robustness to various kinds of attacks. The authors further propose to enhance the network with cascaded adversarial training, that is, learning against iteratively generated adversarial inputs, and showed improved performance against harder attacks. \n\nThe idea proposed is fairly straight-forward. Despite being a simple approach, the experimental results are quite promising. The analysis on the gradient correlation coefficient and label leaking phenomenon provide some interesting insights. \n\nAs pointed out in section 4.2, increasing the regularization coefficient leads to degenerated embeddings. Have the authors consider distance metrics that are less sensitive to the magnitude of the embeddings, for example, normalizing the inputs before sending it to the bidirectional or pivot loss, or use cosine distance etc.?\n\nTable 4 and 5 seem to suggest that cascaded adversarial learning have more negative impact on test set with one-step attacks than clean test set, which is a bit counter-intuitive. Do the authors have any insight on this? \n\nComments:\n1. The writing of the paper could be improved. For example, \"Transferability analysis\" in section 1 is barely understandable;\n2. Arrow in Figure 3 are not quite readable;\n3. The paper is over 11 pages. The authors might want to consider shrink it down the recommended length. ", "This paper improves adversarial training by adding to its traditional objective a regularization term forcing a clean example and its adversarial version to be close in the embedding space. This is an interesting idea which, from a robustness point of view (Xu et al, 2013) makes sense. Note that a similar strategy has been used in the recent past under the name of stability training. The proposed method works well on CIFAR and MNIST datasets. My main concerns are:\n\n\t- The adversarial objective and the stability objective are potentially conflicting. Indeed when the network misclassifies an example, its adversarial version is forced to be close to it in embedding space while the adversarial term promotes a different prediction from the clean version (that of the ground truth label). Have the authors considered this issue? Can they elaborate more on how they with this?\n\n\t- It may be significantly more difficult to make this work in such setting due to the dimensionality of the data. Did the authors try such experiment? It would be interesting to see these results. \n\nLastly, The insights regarding label leaking are not compelling. Label leaking is not a mysterious phenomenon. An adversarially trained model learns on two different distributions. Given the fixed size of the hypothesis space explored (i.e., same architecture used for vanilla and adversarial training), It is natural that the statistics of the simpler distribution are captured better by the model. Overall, the paper contains valuable information and a method that can contribute to the quest of more robust models. I lean on accept side. 
\n\n\n", "The paper presents a novel adversarial training setup, based on distance based loss of the feature embedding.\n\n+ novel loss\n+ good experimental evaluation\n+ better performance\n- way too long\n- structure could be improved\n- pivot loss seems hacky\n\nThe distance based loss is novel, and significantly different from prior work. It seems to perform well in practice as shown in the experimental section.\nThe experimental section is extensive, and offers new insights into both the presented algorithm and baselines. Judging the content of the paper alone, it should be accepted.\n\nHowever, the exposition needs significant improvements to warrant acceptance. First, the paper is way too long and unfocused. The recommended length is 8 pages + 1 page for citations. This paper is 12+1 pages long, plus a 5 page supplement. I'd highly recommend the authors to cut a third of their text, it would help focus the paper on the actual message: pushing their new algorithm. Try to remove any sentence or word that doesn't serve a purpose (help sell the algorithm).\nThe structure of the paper could also be improved. For example the cascade adversarial training is buried deep inside the experimental section. Considering that it is part of the title, I would have expected a proper exposition of the idea in the technical section (before any results are presented). While condensing the paper, consider presenting all technical material before evaluation.\nFinally, the pivot \"loss\" seems a bit hacky. First, the pivot objective and bidirectional loss are exactly the same thing. While the bidirectional loss is a proper loss and optimized as such (by optimizing both E^adv and E), the pivot objective is no loss function, as it does not correspond to any function any optimization algorithm could minimize. I'd recommend the just remove the pivot objective, or at least not call it a loss.\n\nIn summary, the results and presented method are good, and eventually deserve publication. However the exposition needs to significantly improve for the paper to be ready for ICLR.", "We would like to mention that the submission similar to our work in OpenReview.\n\n- Ensemble Adversarial Training: Attacks and Defenses (avg. score of 6.0)\nhttps://openreview.net/forum?id=rkZvSe-RZ\nThis work is essentially the work we've used as a reference in our paper.\nWe showed that combining low-level similarity loss and ensemble adversarial training results in superior accuracy against adversarial attacks under both white/black box attacks compared to the vanilla ensemble adversarial training approach.\n\nIn table 3 (Accuracy under the white box attack)\nEnsemble adversarial training 24% @ CW e=2\nEnsemble adversarial training with pivot loss (Ours) 38% @ CW e=2\n\nIn table 4 (Worst case accuracy under the black box attack)\nEnsemble adversarial training 54.7%\nEnsemble adversarial training with pivot loss (Ours) 67.7%\n\n", "We have updated manuscript to make it concise. The contents are essentially the same with the previous version. We only changed structure/length of the paper. As the reviewers suggested, we have included exposition of the idea in the technical section before the experimental results. Only the transferability analysis has been included before the experimental results since we found it is necessary to show ‘higher transferability of the iterative adversarial examples between defended networks’ to introduce our proposed cascade adversarial training. The table of contents are changed as follows. \n\n1. 
introduction\n2. Background on Adversarial Attacks\n 2.1 attack methods\n 2.2 defense methods\n3. Proposed Approach\n 3.1 Trasferability analysis\n 3.2 Cascade adversarial training\n 3.3 Regularization with a unified embedding\n4. Low Level Similarity Learning Analysis\n 4.1 Experimental results on MNIST\n 4.2 Embedding space visualization\n 4.3 Effect of lambda_2 on CIFAR10\n5. Cascade Adversarial Training Analysis\n 5.1 Source network selection\n 5.2 White box attack results\n 5.3 Black box attack results\n6. Conclusion\n\nMaterials moved to Appendix\n- experimental setup\n- label leaking analysis (We think it gives fruitful insights into the adversarial training, however, we have decided to move this analysis to Appendix since it is not directly related with our proposed idea.)\n\nMaterials removed in the manuscript\n- ResNet 20-layer white box attack results on CIFAR10\n\nAdditional comments (practical importance of low-level similarity loss)\n- We did not touch on the importance of the low-level similarity learning enough in the main paper. However, we would like to emphasize the role of low-level similarity learning (regularization) for practical reason. This very simple technique can be used as a knob for controlling the trade off between accuracy for the clean and that for the adversarial inputs. Thus, one can train a network with high lambda_2 for enhanced robustness for critical applications like autonomous driving. And our method can be combined with any other orthogonal approaches (non-differentiable input transformation, feature squeezing) for further improved robustness.\n\nAgain, we would like to mention that the technical contents are exact the same with those in the previous version. We would like to kindly ask reviewers to re-evaluate the paper focusing more on the technical contribution of the paper. Thank you.\n", "Thank you for the valuable reviews.\n\nQ1 - As pointed out in section 4.2, increasing the regularization coefficient leads to degenerated embeddings. Have the authors consider distance metrics that are less sensitive to the magnitude of the embeddings, for example, normalizing the inputs before sending it to the bidirectional or pivot loss, or use cosine distance etc.?\n(Ans) We would like to mention that every regularization technique including weight decay (L1/L2 regularization) has the problem of degenerated embeddings if we weigh more on the regularized term. We have applied regularization after normalizing embeddings (divided by the standard deviation of the embeddings). As we increase lambda_2, the mean of the embeddings remains the same, but, the standard deviation of the embeddings becomes large. That means intra class variation becomes large. We eventually observed degenerated embeddings (lower accuracy on the clean example) for large lambda_2 which is the same phenomenon with our original version of implementation.\n\nQ2- Table 4 and 5 seem to suggest that cascaded adversarial learning have more negative impact on test set with one-step attacks than clean test set, which is a bit counter-intuitive. Do the authors have any insight on this? \n(Ans) The purpose of cascade adversarial training is to improve the robustness against “iterative” attack. 
And we showed the effectiveness of the method showing increased accuracy against iterative attack, but, at the expense of decreased accuracy against one-step attack.\nWe have observed this phenomenon (the networks shown to be robust against iterative attacks tends to less robust against one-step attacks) in various conditions including ensemble adversarial training. We feel that in somehow, there is trade-off between them. Based on our extensive empirical experiments, it was very hard to increase robustness against both one-step attack and iterative attack. It is a still open question why this is so difficult, but, we assume that high dimensionality of the input is one reason for this. That means once we find good defense for some adversarial direction, there exists another adversarial direction which breaks the defense.\nFinally, we would like to mention that even though we have observed decreased accuracy for the one-step attack from cascade adversarial training, accuracy gain on the iterative attacks in white box setting helps increase in robustness against black box attack (for both one-step and iterative attacks.)\n\nQ3 - The writing of the paper could be improved. For example, \"Transferability analysis\" in section 1 is barely understandable;\n(Ans) We’ve updated the manuscript. Essentially, the detailed analysis can be found in new section 3.1.\n\nQ4 - Arrow in Figure 3 are not quite readable;\n(Ans) We’ve re-drawn the arrow from e to e+8 instead of e+4 to increase readability in revised version.\n\nQ5 - The paper is over 11 pages. The authors might want to consider shrink it down the recommended length. \n(Ans) We’ve updated the manuscript. Thanks for your recommendation.\n", "Thank you for the valuable reviews.\n\nQ1.- The adversarial objective and the stability objective are potentially conflicting. Indeed when the network misclassifies an example, its adversarial version is forced to be close to it in embedding space while the adversarial term promotes a different prediction from the clean version (that of the ground truth label). Have the authors considered this issue? Can they elaborate more on how they with this?\n(Ans) Yes, we totally agree with that the objective of the image classification and similarity objective can be potentially conflicting each other. We would like to address that our distance based loss can be considered as a way of regularization. Like every regularization, for example, weight L1/L2 regularization which penalizes the large value of the weights, too much regularization can always damage the training process.\nIf the classifier misclassifies the clean version, that means the example is hard example. Adversarial version of the hard example will also be misclassified. Instead of trying to always encourage to produce ground truth, the similarity loss will encourage adversarial version to mimic the clean version. We can think this is somewhat analogous to student-teacher learning where student network is trained with the soft target from the teacher network instead of conventional hard target.\n\nQ2- It may be significantly more difficult to make this work in such setting due to the dimensionality of the data. Did the authors try such experiment? It would be interesting to see these results.\n(Ans) We have applied low level similarity learning for ImageNet dataset. We observed similar results (A network trained with pivot loss showed improved accuracy for white box iterative attacks compared to the network trained with adversarial training only.) 
We will augment those results in the final version. Due to the lack of computing resources, however, we have not tried training several similar networks with different initializations and testing those networks under white box and black box scenarios. \nIf you are talking about the dimensionality of the embedding, we actually applied the similarity loss on the un-normalized logits (the layer right before the softmax layer), where the dimension of the embedding is exactly the same as the number of labels, which we do not think is a problem. We have tried applying the similarity loss on intermediate neurons (that is, embeddings of different sizes), and found that the similarity loss is most efficient when applied at the very end of the network.\n\nQ3- Lastly, the insights regarding label leaking are not compelling\n(Ans) There is no clear evidence that the distribution of the one-step adversarial images is simpler than the distribution of the clean images. Adversarial images are meant to be created to fool the network. If we consider images with random noise and train a network with clean and noisy images, we observe decreased accuracy for the noisy images since information is lost due to the noise. Considering the fact that adversarial images can be viewed as images with additive noise, the label leaking phenomenon is not well understood, since we actually added noise intentionally. Correlation analysis reveals the reason behind this effect.\n", "Thank you for the valuable reviews.\n\nQ1 – Way too long, structure could be improved\n(Ans) Thanks for your feedback. We have changed the structure/length of the paper in the revised version. The contents are essentially the same as those in the previous version; we only changed the structure/length of the paper. As the reviewer suggested, we have included an exposition of the idea in the technical section, before the experimental results.\nOnly the transferability analysis has been included before the experimental results, since we found it necessary to show ‘higher transferability of the iterative adversarial examples between defended networks’ to introduce our proposed cascade adversarial training.\nThank you very much for your valuable feedback, which greatly improved the quality of the revised manuscript.\n\nQ2 - pivot loss seems hacky\n(Ans) Initially, we did not want to hurt the accuracy for the clean data, and that motivated us to invent the pivot loss. In the pivot loss, we treat the clean embeddings as ground-truth embeddings that the adversarial embeddings have to mimic. \nThe bidirectional loss and the pivot loss have the same mathematical form; however, in the pivot loss, embeddings from the clean examples are treated as constants (non-trainable). The pivot loss still computes gradients with back-propagation, but only through the embeddings computed from adversarial images.\nWe can also think of this as somewhat analogous to student-teacher learning, where the student network (adversarial embedding) is trained with the soft target from the teacher network (clean embedding) instead of the conventional hard target.\n" ]
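Concretely, the distinction drawn in this response between the bidirectional and pivot losses is whether gradients are allowed to flow into the clean embedding; with automatic differentiation that is a single stop-gradient (detach) call. A hedged sketch, with the squared-error distance as an illustrative choice:

import torch.nn.functional as F

def bidirectional_loss(emb_clean, emb_adv):
    # gradients flow through both embeddings, so the clean and adversarial branches pull toward each other
    return F.mse_loss(emb_adv, emb_clean)

def pivot_loss(emb_clean, emb_adv):
    # same mathematical form, but the clean embedding acts as a constant target (teacher),
    # so only the adversarial branch (student) receives gradient from this term
    return F.mse_loss(emb_adv, emb_clean.detach())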
[ 6, 6, 5, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "Hy7Gjh9eM", "HyPfhQ2ez", "HkdIMPhez" ]
iclr_2018_Sk9yuql0Z
Mitigating Adversarial Effects Through Randomization
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatibility with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, we achieve a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
accepted-poster-papers
Paper proposes adding randomization steps during inference time to CNNs in order to defend against adversarial attacks. Pros: - Results demonstrate good performance, and the team achieve a high rank (2nd place) on a public benchmark. - The benefit of the proposed approach is that it does not require any additional training or retraining. Cons: - The approach is very simple, common sense would tend to suggest that adding noise to images would make adversarial attempts more difficult. Though perhaps simplicity is a good thing. - Update: Paper does not cite related and relevant work, which takes a similar approach of requiring no retraining, but rather changing the inference stage: https://arxiv.org/pdf/1709.05583.pdf 

 Grammatical Suggestions: This paper would benefit from polishing. For example: - Abstract: sentence 1: replace “their powerful ability” to “high accuracy” - Abstract: sentence 3: replace “I.e., clean images…” with “For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail” - Abstract: sentence 4: replace “utilize randomization” to “implement randomization at inference time” or something similar to make more clear that this procedure is not done during training. - Abstract: sentence 7: replace “also enjoys” with “provides” Main Text: Capitalize references to figures (i.e. “figure 1” to “Figure 1”). Introduction: Paragraph 4: Again, please replace “randomization” with “randomization at inference time” or something similar to better address reviewer concerns.
val
[ "BJDaCQFxM", "ByRWmWAxM", "B104VQCgM", "BJrUXqS7f", "BJpj_KSXG", "Hkxu_YH7f", "SJkSdtSmG", "rJ70Io8kf", "rJXvqfIJG", "Sy3pQrByM", "r13-LSB1M", "rJkSBVByG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "author", "public" ]
[ "The authors propose a simple defense against adversarial attacks, which is to add randomization in the input of the CNNs. They experiment with different CNNs and published adversarial training techniques and show that randomized inputs mitigate adversarial attacks. \n\nPros:\n(+) The idea introduced is simple and flexible to be used for any CNN architecture\n(+) Experiments on ImageNet1k prove demonstrate its effectiveness\nCons:\n(-) Experiments are not thorougly explained\n(-) Novelty is extremely limited\n(-) Some baselines missing\n\n\nThe experimental section of the paper was rather confusing. The authors should explain the experiments and the settings in the table, as those are not very clear. In particular, it was not clear whether the defense model was trained with the input randomization layers? Also, in Tables 1-6, how was the target model trained? How do the training procedures of target vs. defense model differ? In those tables, what is the testing procedure for the target model and how does it compare to the defense model? \n\nThe gap between the target and defense model in Table 4 (ensemble pattern attack scenario) shrinks for single step attack methods. This means that when the attacker is aware of the randomization parameters, the effect of randomization might diminish. A baseline that reports the performance when the attacker is fully aware of the randomization of the defender (parameters, patterns etc.) is missing but is very useful.\n\nWhile the experiments show that the randomization layers mitigate the effect of randomization attacks, it's not clear whether the effectiveness of this very simple approach is heavily biased towards the published ways of generating adversarial attacks and the particular problem (i.e. classification). The form of attacks studied in the paper is that of additive noise. But there is many types of attacks that could be closely related to the randomization procedure of the input and that could lead to very different results.", "This paper proposes an extremely simple methodology to improve the network's performance by adding extra random perturbations (resizing/padding) at evaluation time.\n\nAlthough the paper is very basic, it creates a good baseline for defending about various types of attacks and got good results in kaggle competition.\n\nThe main merit of the paper is to study this simple but efficient baseline method extensively and shows how adversarial attacks can be mitigated by some extent.\n\nCons of the paper: there is not much novel insight or really exciting new ideas presented.\n\nPros: It gives a convincing very simple baseline and the evaluation of all subsequent results on defending against adversaries will need to incorporate this simple defense method in addition to any future proposed defenses, since it is very easy to implement and evaluate and seems to improve the defense capabilities of the network to a significant degree. So I assume that this paper will be influential in the future just by the virtue of its easy applicability and effectiveness.\n\n", "The paper basically propose keep using the typical data-augmentation transformations done during training also in evaluation time, to prevent adversarial attacks. In the paper they analyze only 2 random resizing and random padding, but I suppose others like random contrast, random relighting, random colorization, ... 
could be applicable.\n\nSome of the pros of the proposed tricks is that it doesn't require re-training existing models, although as the authors pointed out re-training for adversarial images is necessary to obtain good results.\n\n\nTypically images have different sizes, however in the Dataset are described as having 299x299x3 size, are all the test images resized before hand? How would this method work with variable size images?\n\nThe proposed defense requires increasing the size of the input images, have you analyzed the impact in performance? Also it would be good to know how robust is the method for smaller sizes.\n\nSection 4.6.2 seems to indicate that 1 pixel padding or just resizing 1 pixel is enough to get most of the benefit, please provide an analysis of how results improve as the padding or size increase. \n\nIn section 5 for the challenge authors used a lot more evaluations per image, could you provide how much extra computation is needed for that model?\n\n", "We would like to thank the reviewers for their thoughtful responses, and are glad to see that there is a consensus among the reviewers to accept this work. In order to address the concerns from reviewers, we have conducted more experiments (shown in Appendix) and updated the paper to describe the experiments more clearly. We are grateful to each of the reviewers to help us improve the work. Please find individual replies to each of the reviews in the respective threads.", "Thank you very much for the comments. We have updated our paper, especially the experiment section. Below are the detailed answers to your concerns.\n\n“experiment confusing”: Sorry for the confusion, and we have made this clearer in the updated paper. The defense model is simply adding two randomization layers to the beginning of the original classification networks. There is no re-training and fine-tuning needed. This is an advantage of our method. We choose Inception-v3, ResNet-v2, Inception-ResNet-v2 and ens-adv-Inception-ResNet-v2 as the original CNN models, and these models are public available under Tensorflow github repo. The target models are the models used by attackers to generate adversarial examples. The target models differ under different attack scenarios: (1) vanilla attack: the target model is the original CNN model, e.g. Inception-v3; (2) single-pattern attack: target model is the original CNN model + randomization layers with only one predefined pattern; (3) ensemble-pattern attack: the target model is the original CNN model + randomization layers with an ensemble of predefined patterns. Note that the structure and weights of the classification network in target model and defense model are exactly the same. In tables 2-6, the attackers first use target model to generate adversarial examples, and then tests the top-1 accuracy on target model and defense model. Specifically, (1) for target model, a lower accuracy indicates a more successful attack; (2) for defense model, a higher accuracy indicates a more successful defense. \n\n“stronger baseline when the attacker is fully aware the patterns”: We agree that the performance gap between the target and defense model will shrink as more randomization patterns are considered in the attack process. This is expected. Here we want to emphasize that during defense, the padding and resizing are done randomly, so there is no way for both the attacker and the defender to know the exact instantiated patterns. 
The strongest possible attack would be that the attackers consider ALL possible patterns when generating the adversarial examples. However, this is not possible. Failing all patterns takes extremely long time, and may not even converge. For example, under our randomization setting, the total number of patterns (resizing + padding) is 12528. Thus, instead of choosing such a large number, we choose 21 representative patterns in our ensemble attack scenario, which becomes computationally manageable. Increasing the number of ensembled patterns means: (1) more computation time (take C&W for example, it takes around 0.56 min to generate an adversarial example under vanilla attack, but takes around 8 min to generate an adversarial example under ensemble attack); (2) more memory consumption (at most an ensemble of 30 different patterns can be utilized as one batch to generated adversarial examples for one 12GB GPU, more patterns indicates more GPUs or the GPU with larger memory); (3) larger magnitude of adversarial perturbation.\n\n“biased towards the published adversarial attacks”: Our defense method is not trained using any adversarial examples, so we don’t think it is biased towards any attacks. We extensively test our method on the most popular attacks (one single-step attack FGSM, and two representative iterative attacks DeepFool and C&W), with various network structures, and using large-scale ImageNet datasets. Moreover, we submit this method to a public adversarial defense challenge. Our method is evaluated against 156 different attacks and we are ranked Top 2, which indicates the effectiveness of our method.\n\n“particular problem (e.g. classification) and additive noise”: Currently most works on this topic focus on classification problem and assume additive noise as adversarial perturbation. We follow this setting in this paper. We have two future directions to explore: 1) apply randomization to other vision tasks, 2) apply randomization to other types of attack instead of additive noise. Thanks for the comments.\n", "Thank you very much for the appreciation of our work. The method is indeed simple and effective. Although the randomization idea is not new, we in this paper apply it to mitigate adversarial effects at test time systematically. And we demonstrate the effectiveness on large-scale ImageNet dataset, which is very challenging. Very few defense papers worked on ImageNet before. We hope our method could be served as a simple new baseline for adversarial example defense in the future works.", "Thank you very much for the comments, which significantly improve the quality of our paper. We have conducted additional experiments to answer the concerns. These experiments results are included as appendix in the updated paper.\n\n“Other operations”: Yes, other random operations also apply. We tried four operations separately: random brightness, random contrast, random saturation, and random hue. For each individual operation, we add it to the beginning of the original classification network. We found that these operations nearly have no hurts on the performance of clean images (shown in table 7), but they are not as effective as the proposed randomization layers on defending adversarial examples (shown in table 8-11). By combining these random operations with the proposed randomization layers, the performance on defending adversarial examples can be slightly improved. We have updated these new results in the Appendix A.\n\n“resized beforehand”: Yes, the test images are resized beforehand. 
There are two reasons: (1) easy to form a batch (e.g., one batch contains 100 images) for classification; (2) stay aligned with the format of the public competition, where the test dataset are all of the size 299x299x3. For the images with variable sizes, we can first resize them to 299x299x3, and then applied the proposed method to defend adversarial examples.\n\n“impact of size in performance”: Adding two randomization layers (increasing size from 299 to 331) slightly downgrades the performance on clean images, as shown in Table 1. This decrease becomes negligible for stronger models. In addition, we also tried applying randomization to smaller-sized images. Specifically, we first resize the images to a size randomly sampled from the range [267, 299), and then randomly pad it to 299x299x3. We evaluate the performance on both the 5000 clean images and the adversarial examples generated under the vanilla attack scenario (shown in table 12). We see that the randomization method works well with smaller sizes, but using larger sizes produces slightly better results. We hypothesize that this is because resizing an image to smaller sizes may lose some information. We have updated the new results in the Appendix B.\n\n“padding or resizing increase”: As the padding size or resizing size increase, there will be a lot more random patterns. So it becomes much harder for the attackers to generate the adversarial example that can fail all the patterns at the same time. Thus, larger size and more paddings will significantly increase the robustness. Notice that the motivation for the experiments in Sec 4.6 is to decouple the effect of padding and resizing. We want to show that (1) adversarial example generated on one padding pattern is hard to transfer to another padding pattern; (2) adversarial example generated on one size is hard to transfer to another size. Using 1-pixel padding and resizing provide a controllable way to verify these two points.\n\n“multiple iterations per image”: The computation time increases linearly with number of iteration per image (e.g., 30x time in our challenge submission). We argue that one iteration is enough to get the most benefits, and additional evaluations only provide marginal gain (as shown in figures 3-5), which is good for the challenge. The experiments that show the relationship between the classification performance and iteration number is included in Appendix C.\n", "Thanks for your comments.\n\nFirst of all, we would like to highlight two important things in our work. 1). This work is done on large-scale datasets like ImageNet, and only a few defenses (including adversarial training) have demonstrated the effectiveness before. Though MNIST is an interesting dataset on which to test defense ideas, the conclusions may not be readily applied to ImageNet. 2). The attack scenarios considered in our paper are much stronger than black-box attacks. I.e., the network structures and parameters are completely known by the attackers.\n\nIn the paper, we demonstrate the effectiveness of our method on basic C&W attacks, which are very challenging already, and were not well studied on ImageNet before. In order to overcome the problem that randomization models are not differentiable, we considered single-pattern attacks and ensemble-pattern attacks in the experiments. The experimental results indicate that adversarial examples generated under ensemble-pattern attacks are stronger than others. Note that the C&W attacks are very slow. 
Take the basic C&W attack against inception-resnet-v2 for example. It takes ~17 mins to generate adversarial examples for a batch of 30 images under vanilla attack scenario, and takes ~8 mins to generate adversarial examples for 1 image under ensemble-pattern attack scenario. Generating higher-confidence adversarial examples will significantly increase the time consumption even further, and thus, may not be practical. So we focus on basic C&W attacks in our experiments at current stage. \n\nFor the baseline included in this paper, we want to point out three things. 1). To the best of our knowledge, adversarial training is the most effectiveness method on large-scale dataset like ImageNet. We are confused by the words “the state-of-the-art defense”, please refer to it explicitly. 2). The adversarially trained model is not robust to iterative attacks, and is used more like a network backbone rather than baseline in our paper. We combine the adversarially trained model, which is robust to single-step attacks, and randomization layers, which improve network robustness to iterative attacks, together to form our best defense model. 3). There are 100+ defense teams and 150+ attacks teams participate in this public adversarial defense challenge, and our model is ranked top 2. We argue that this challenge provides us sufficient baselines (including very strong ones) to compare with, which convincingly demonstrates the effectiveness of our method in real world scenario.\n", "I still think it is problematic if you do not evaluate high-confidence transferable adversarial examples generated by C&W attacks. Since you use randomization, the model is no longer differentiable. Therefore, high-confidence transferable adversarial examples should be used to attack the defense. If such adversarial examples are not evaluated, the experimental results may be misleading. You can take a look at this paper:\n\nAdversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.\n\nThe reference [1] you pointed out for detecting adversarial examples is actually broken.\n\nAlso, it is useful to compare with state-of-the-art defense instead of adversarial training alone, because adversarial training is known to be not robust for state-of-the-art attacks. \n ", "Thanks for your comments.\n\n(1) C&W attacks is a strong attack, and we follow other papers, e.g., [1], to evaluate basic C&W attacks at current stage. Furthermore, the attacks scenarios considered here are much stronger than black-box attack, while other papers have not studied these before. We will conduct experiments to see how our defense model performs under vanilla attack, single-pattern attack and ensemble attack when confidence increases.\n\nWe want to highlight our defense is evaluated on large-scale real image dataset, e.g., ImageNet, which is much harder than defense on small dataset, like MNIST and CIFAR. Meanwhile, the conclusions on small dataset may not be valid on large dataset. For example, adversarial training helps model get better performance on MNIST, but causes performance to drop on ImageNet (see table 1 at [2])\n\n(2) To the best of my knowledge, there are no randomization-based defense methods available on ImageNet (except some concurrent submissions at ICLR). If you know such reference on ImageNet, please send it to us.\n\n(3) We are not aware of such defense on ImageNet. If you know such reference on ImageNet, please send it to us. 
Meanwhile, the performance drop on clean images (see table 1) of our best defense model, ens-adv-Inception-ResNet-v2, is only from 100% to 99.2%, which is an acceptable degradation. \n \n[1] Feinman, Reuben, et al. \"Detecting Adversarial Samples from Artifacts.\" arXiv preprint arXiv:1703.00410 (2017).\n[2] Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. \"Adversarial machine learning at scale.\" arXiv preprint arXiv:1611.01236 (2016).", "Our submission ranked Top 2 (among 100+ teams) at the final round of a public adversarial defense challenge, where the number of test images is increased to 5000, and the number of different attack methods is increase to 150+. It reached a normalized score of 0.924, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). \n\nWe will reveal the URL of the challenge once the revision period is over.", "Mitigating adversarial manipulations to deep neural networks via randomization is a promising direction. However, I found evaluations in this paper are not convincing.\n\n1. High-confidence transferable adversarial examples generated by C&W attacks are not evaluated. The paper only evaluated basic C&W attacks.\n\n2. The paper did not compare with recent randomization-based defense. For instance, the paper did not compare with the following paper \"Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification\", arxiv 2017. \n\n3. The method proposed in this paper decreases classification accuracy for normal examples in order to increase robustness against adversarial examples. However, there already exists defense that does not decrease classification accuracy for normal examples, but has the same or even better robustness than the proposed method. " ]
[ 6, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z", "BJDaCQFxM", "ByRWmWAxM", "B104VQCgM", "rJXvqfIJG", "Sy3pQrByM", "rJkSBVByG", "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z" ]
iclr_2018_BkpiPMbA-
Decision Boundary Analysis of Adversarial Examples
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are carefully crafted instances aiming to cause prediction errors for DNNs. Recent research on adversarial examples has examined local neighborhoods in the input space of DNN models. However, previous work has limited what regions to consider, focusing either on low-dimensional subspaces or small balls. In this paper, we argue that information from larger neighborhoods, such as from more directions and from greater distances, will better characterize the relationship between adversarial examples and the DNN models. First, we introduce an attack, OPTMARGIN, which generates adversarial examples robust to small perturbations. These examples successfully evade a defense that only considers a small ball around an input instance. Second, we analyze a larger neighborhood around input instances by looking at properties of surrounding decision boundaries, namely the distances to the boundaries and the adjacent classes. We find that the boundaries around these adversarial examples do not resemble the boundaries around benign examples. Finally, we show that, under scrutiny of the surrounding decision boundaries, our OPTMARGIN examples do not convincingly mimic benign examples. Although our experiments are limited to a few specific attacks, we hope these findings will motivate new, more evasive attacks and ultimately, effective defenses.
accepted-poster-papers
The authors propose an approach to generating adversarial examples that jointly examines the effects on classification within a local neighborhood, yielding a more robust example. This idea is taken a step further for defense, whereby the classification boundaries within a local neighborhood of a presented example are examined to determine whether the data was adversarially generated or not. Pro: - The idea of examining local neighborhoods around data points appears new and interesting. - Evaluation and investigation are thorough and insightful. - Authors made reasonable attempts to address reviewer concerns. Con: - Generation of adversarial examples is an incremental improvement over prior methods
train
[ "SkxHS_vlz", "rJNyagigz", "ryhMljRxz", "HyQTSJyEG", "S1wjr_a7G", "H1tdBO6Xz", "SkwUBd6Qz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary of paper:\n\nThe authors present a novel attack for generating adversarial examples, deemed OptMargin, in which the authors attack an ensemble of classifiers created by classifying at random L2 small perturbations. They compare this optimization method with two baselines in MNIST and CIFAR, and provide an analysis of the decision boundaries by their adversarial examples, the baselines and non-altered examples. \n\nReview summary:\n\nI think this paper is interesting. The novelty of the attack is a bit dim, since it seems it's just the straightforward attack against the region cls defense. The authors fail to include the most standard baseline attack, namely FSGM. The authors also miss the most standard defense, training with adversarial examples. As well, the considered attacks are in L2 norm, and the distortion is measured in L2, while the defenses measure distortion in L_\\infty (see detailed comments for the significance of this if considering white-box defenses). The provided analysis is insightful, though the authors mostly fail to explain how this analysis could provide further work with means to create new defenses or attacks.\n\nIf the authors add FSGM to the batch of experiments (especially section 4.1) and address some of the objections I will consider updating my score.\n\nA more detailed review follows.\n\n\nDetailed comments:\n\n- I think the novelty of the attack is not very strong. The authors essentially develop an attack targeted to the region cls defense. Designing an attack for a specific defense is very well established in the literature, and the fact that the attack fools this specific defense is not surprising.\n\n- I think the authors should make a claim on whether their proposed attack works only for defenses that are agnostic to the attack (such as PGD or region based), or for defenses that know this is a likely attack (see the following comment as well). If the authors want to make the second claim, training the network with adversarial examples coming from OptMargin is missing.\n\n- The attacks are all based in L2, in the sense that the look for they measure perturbation in an L2 sense (as the paper evaluation does), while the defenses are all L_\\infty based (since the region classifier method samples from a hypercube, and PGD uses an L_\\infty perturbation limit). This is very problematic if the authors want to make claims about their attack being effective under defenses that know OptMargin is a possible attack.\n\n- The simplest most standard baseline of all (FSGM) is missing. This is important to compare properly with previous work.\n\n- The fact that the attack OptMargin is based in L2 perturbations makes it very susceptible to a defense that backprops through the attack. This and / or the defense of training to adversarial examples is an important experiment to assessing the limitations of the attack. \n\n- I think the authors rush to conclude that \"a small ball around a given input distance can be misleading\". Wether balls are in L2 or L_\\infty, or another norm makes a big difference in defense and attacks, given that they are only equivalent to a multiplicative factor of sqrt(d) where d is the dimension of the space, and we are dealing with very high dimensional problems. I find the analysis made by the authors to be very simplistic.\n\n- The analysis of section 4.1 is interesting, it was insightful and to the best of my knowledge novel. Again I would ask the authors to make these plots for FSGM. 
Since FSGM is known to be robust to small random perturbations, I would be surprised that for a majority of random directions, the adversarial examples are brought back to the original class.\n\n- I think a bit more analysis is needed in section 4.2. Do the authors think that this distinguishability can lead to a defense that uses these statistics? If so, how?\n\n- I think the analysis of section 5 is fairly trivial. Distinguishability in high dimensions is an easy problem (as any GAN experiment confirms, see for example Arjovsky & Bottou, ICLR 2017), so it's not surprising or particularly insightful that one can train a classifier to easily recognize the boundaries.\n\n- Will the authors release code to reproduce all their experiments and methods?\n\nMinor comments:\n- The justification of why OptStrong is missing from Table2 (last three sentences of 3.3) should be summarized in the caption of table 2 (even just pointing to the text), otherwise a first reader will mistake this for the omission of a baseline.\n\n- I think it's important to state in table 1 what is the amount of distortion noticeable by a human.\n\n=========================================\n\nAfter the rebuttal I've updated my score, due to the addition of FSGM added as a baseline and a few clarifications. I now understand more the claims of the paper, and their experiments towards them. I still think the novelty, significance of the claims and protocol are still perhaps borderline for publication (though I'm leaning towards acceptance), but I don't have a really high amount of experience in the field of adversarial examples in order to make my review with high confidence.", "Compared to previous studies, this paper mainly claims that the information from larger neighborhoods (more directions or larger distances) will better characterize the relationship between adversarial examples and the DNN model.\n\nThe idea of employing ensemble of classifiers is smart and effective. I am curious about the efficiency of the method.\n\nThe experimental study is extensive. Results are well discussed with reasonable observations. In addition to examining the effectiveness, authors also performed experiments to explain why OPTMARGIN is superior. Authors are suggested to involve more datasets to validate the effectiveness of the proposed method.\n\nTable 5 is not very clear. Authors are suggested to discuss in more detail.\n", "The paper presents a new approach to generate adversarial attacks to a neural network, and subsequently present a method to defend a neural network from those attacks. I am not familiar with other adversarial attack strategies aside from the ones mentioned in this paper, and therefore I cannot properly assess how innovative the method is.\n\nMy comments are the following:\n\n1- I would like to know if benign examples are just regular examples or some short of simple way of computing adversarial attacks.\n\n2- I think the authors should provide a more detailed and formal description of the OPTMARGIN method. In section 3.2 they explain that \"Our attack uses existing optimization attack techniques to...\", but one should be able to understand the method without reading further references. Specially a formal representation of the method should be included.\n\n3- Authors mention that OPTSTRONG attack does not succeed in finding adversarial examples (\"it succeeds on 28% of the samples on MNIST;73% on CIFAR-10\"). What is the meaning of success rate in here? 
Is it the % of times that the classifier is confused?\n\n4- OPTSTRONG produces images that are notably more distorted than OPTBRITTLE (by RMS and also visually in the case of MNIST). So I actually cannot tell which method is better, at least in the MNIST experiment. One could do a method that completely distort the image and therefore will be classified with as a class. But adversarial images should be visually undistinguishable from original images. Generated CIFAR images seem similar than the originals, although CIFAR images are very low resolution, so judging this is hard.\n\n4- As a side note, it would be interesting to have an explanation about why region classification is providing a worse accuracy than point classification for CIFAR-10 benign samples.\n\nAs a summary, the authors presented a method that successfully attacks other existing defense methods, and present a method that can successfully defend this attack. I would like to see more formal definitions of the methods presented. Also, just by looking at RMS it is expected that this method works better than OPTBRITTLE, since the images are more distorted. It would be needed to have a way of visually evaluate the similarity between original images and generated images.", "# Added experiments\n- Expanded the experiments with the addition of FGSM as an attack method, as a well known baseline. (throughout)\n- Repeated our experiments on a small subset of ImageNet, as an example of a more realistic dataset. (Appendix D)\n\n# Writing changes\n- Added a detailed description of how OptMargin works (Section 3.2).\n- Added a note to the distortion levels (Table 1) to describe them qualitatively.\n- Clarified why OptStrong (a high-confidence Carlini & Wagner attack) sometimes does not output a perturbed image (Section 3.3).\n- Added a note to the accuracy of point- and region classification under attack (Table 2) to recap why OptStrong is omitted from comparison.\n- Reduced a broader claim that “examining a small ball around a given input instance can be misleading ...” to be specific to evidence: “examining ... may not adequately distinguish OptMargin adversarial examples,” (Section 4).\n- Added interpretation guidelines to the average purity of adjacent classes plots (Figure 3, formerly Table 5).\n\n# Other changes\n- Added links to the classification models we used. (Section 2.4)\n- Fixed some grammatical errors.\n- Added color to legend-like information in figure captions.\n- Improved the styling of tables.\n- Improved the parallelism in table headings, now “Normal” vs “Adv tr.,” previously “No defense” vs “PGD adv.”\n- Changed collections of graphs to be Figures instead of Tables.\n", "We thank the reviewer for the helpful suggestions and comments.\n[Comparison with FGSM] In our updated version, we’ve added the corresponding experiments with FGSM adversarial examples throughout the paper. Thanks for the suggestion. In summary, FGSM was able to create some robust adversarial examples, but it also had higher distortion and lower success rate, especially on adversarially trained models. \n[Standard defense, adversarial training] The adversarially trained model we used in this paper (with PGD adversarial training) is already intended to defend against gradient-based attacks, such as OptMargin and the other attacks we experiment with in this paper. We do not refute Madry et al.’s claim that PGD adversarial training is effective against attacks bounded in L_inf distortion (2017), although we do not find that threat model to be realistic. 
\n[Attacks in L2, defenses in L_infinity] The discrepancy between the attacks’ focus on L_2 distance and the defenses’ focus on L_inf distance is definitely not the most elegant thing in this paper. However, our analysis of random orthogonal directions is simplest in a distance metric where all directions are uniform. Cao & Gong’s defense, in particular, was not specialized for L_inf-bounded attacks, and their paper evaluated it successfully against previous L_0- and L_2 attacks. Nevertheless, we find it interesting that an adversarial example which satisfies points sampled from a hypersphere would be generally robust enough against a defense that checks in a hypercube.\n> Designing an attack for a specific defense is very well established in the literature, and the fact that the attack fools this specific defense is not surprising.\nThe choice of defenses and attacks used in this paper are meant to cover a variety of scenarios for looking at decision boundaries. The proposal of OptMargin as an attack intends, in part, to demonstrate an adaptive attack, but primarily to create images farther from decision boundaries. The result that OptMargin bypasses region classification at all, we think is interesting because (i) research in nondeterministic defenses has picked up recently, and it is expected to have an advantage of unpredictability even in white-box settings; and (ii) it succeeds with less distortion than previously known methods, including OptStrong and Cao & Gong’s CW-L_*-A.\n> If the authors want to make the second claim [that OptMargin would work against other, specialiazed defenses], training the network with adversarial examples coming from OptMargin is missing.\nWe do not aim to make that claim. We claim that OptMargin creates adversarial examples that are robust to small random perturbations and that have high attack success rate against region classification, a specific defense that examines the decision regions around the input.\n> I think the authors rush to conclude that \"a small ball around a given input distance can be misleading\". Wether balls are in L2 or L_\\infty, or another norm makes a big difference ...\nThanks for bringing this up—we intentionally evaluate classifier boundaries in terms of both metric, and we see evidence for this in our decision boundary distance plots, both for the hypersphere (Figure 2, formerly Table 4) and the hypercube (Figure 7, formerly Table 9). Additionally, for the hypercube, we experimentally validate that the ball is consistent enough to fool a region classifier.\n> Do the authors think that this distinguishability can lead to a defense that uses these statistics?\nWe’re not sure whether these, or even the mechanisms of Section 5 would be a good enough defense. The summary statistics in Sections 4.1.2 and 4.2 are not separable with good accuracy in some settings. There is a fraction of high-distortion FGSM attacks that can fool a classifier and have a convincing distribution of surrounding decision boundaries. We have yet to find out if an attacker can achieve this with a high success rate and, ideally, lower distortion.\n> Distinguishability in high dimensions is an easy problem\nYes, it was our intention to use more data to make the problem easier. We agree, though, that once we have the decision boundary data and see how different it is across different kinds of examples, it is not as big of a leap to run it through a classifier and to expect it to work. 
But we definitely wanted to have an application to showcase how such information can be used and improve over previous techniques.\n> release code\nYes, we intend to release our code, as well as the random values we used in our experiments upon acceptance.\nThanks for you minor comments as well. We have made changes in our updated draft to address them. We have updated the caption of Table 2 regarding the omission of OptStrong, and we have added an explanation to Table 1 about the visibility of perturbations.\n", "Thanks for reviewing our paper.\n\nIn our updated draft, we’ve added some small-scale experiments on ImageNet data, in Appendix D. Thanks for the suggestion. The OptMargin attack is effective against region classification in these new experiments too.\n\nWe’ve added some interpretation guidelines to the caption of Table 5 (the adjacent class purity plots, changed to Figure 3 in the updated version).", "Thanks for the comments and questions. Here are our responses and the corresponding changes we’ve made in our updated draft.\n\n1. (What are benign examples?) The benign examples are just taken directly from the test set.\n\n2. (More detail on how OptMargin works) Agreed. We’ve added an overview of the technique to Section 3.2, which should cover the relevant parts of the cited work.\n\n3. (What is the success rate for OptStrong?) It’s the fraction of cases when the attack generates an image that’s misclassified *with high enough of a confidence margin* (40 logits in our experiments). Note that the official implementation of Carlini & Wagner’s high-confidence attack (referred to as OptStrong in in this paper) attack outputs a blank black image if the internal optimization procedure does not encounter a satisfactory image, even if it does encounter an image that would fool the classifier but with a lower confidence margin.\n\n4. (OptStrong is heavily distorted) That’s a good point that an indistinguishable example would be better (more stealthy, lower cost by some measure, etc.). But the defender wouldn’t have an unmodified version of the image to compare against. The distortion numbers (Table 1) tell part of the story of how much the adversarial examples are changed. Internally, we like to look at sample images (Appendix A) and make sure the original class is still clearly visible to humans. Our opinion is that OptStrong images are less recognizable than OptMargin’s.\n\n4. (Why is region classification worse on CIFAR-10?) Our intuition is that hypercubes, L_inf distances, and the like are especially well suited for MNIST, because the dataset is black and white. Random perturbations that change black to dark gray can be systematically ignored by a model that’s smart enough. CIFAR-10 deals with colors that use the whole gamut of pixel values, so it should be more sensitive to small changes bounded by an L_inf distance. Experimentally, we evaluated a model made with PGD adversarial training, which is trained not to be sensitive to these small perturbations, and the result is that the accuracy is lower than that of the model without adversarial training, but there’s no accuracy drop between point classification and region classification (Table 2).\n" ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 3, 2, 3, -1, -1, -1, -1 ]
[ "iclr_2018_BkpiPMbA-", "iclr_2018_BkpiPMbA-", "iclr_2018_BkpiPMbA-", "iclr_2018_BkpiPMbA-", "SkxHS_vlz", "rJNyagigz", "ryhMljRxz" ]
iclr_2018_HJWLfGWRb
Matrix capsules with EM routing
A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45\% compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.
accepted-poster-papers
The authors present a new multi-layered capsule network architecture, implement an EM routing procedure, and introduce "Coordinate Addition". Capsule architectures are gaining interest because of their ability to achieve equivariance of parts; they employ a new form of pooling called "routing" (as opposed to max pooling), which groups parts that make similar predictions of the whole to which they belong, rather than relying on spatial co-locality. New state-of-the-art performance has been achieved on focused datasets, and the authors continue that trend. Pros: - A significant new improvement over state-of-the-art performance is obtained on smallNORB, both in comparison to a CNN baseline and to the most recent previous implementation of capsule networks. Cons: - Some concern arose regarding the writing of the paper and the ability to understand the material, which the authors have made an effort to address. Given the general consensus of the reviewers that this work should be accepted, the general applicability of the technology to multiple domains, and the potential impact that improvements to capsule networks may have on an early field, the area chair recommends this work be accepted as a poster presentation.
train
[ "HyvJKULxM", "Hykw8iKxG", "ry1nhoKgM", "ByZRu4ClG", "HyguZD-Vf", "SJAWbD-4G", "HJkSzIZEf", "rk5MadsMf", "rJFRG6oWG", "rJUY2VdbM", "ryTPZJd-f", "SJQqV1JWz", "HkAVUc3gz", "rJnxEL2xf", "r17t2UIgf", "BkFS5LLxf", "Hy9EvktkG", "Hkc2c4HyM", "ryM_Fi4JM", "ByAqs7VJf", "ByVzDRDRW", "rk1Am1uRW", "S1uPsnwR-" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "author", "public", "public", "public", "public", "author", "author", "public", "public", "public", "public", "author", "public", "public" ]
[ "The objective function in details is:\n\\sum_c a'_c (-\\beta_a) + a'_c ln(a'_c) + (1-a'_c)ln(1-a'_c)+\\sum_h cost_{ch} + \\sum_i a_i * r_{ic} * ln(r_{ic})\n\na'_c is the activation for capsule c in layer L+1 and a_i is the activation probability for capsule i in layer L. The rest of the notations follow paper. \n\nPlots showing the decay of objective function and the absolute difference between two routing iterations in the above objective function can be found at:\nhttps://imgur.com/a/eeD2X", "The paper proposes a novel architecture for capsule networks. Each capsule has a logistic unit representing the presence of an entity plus a 4x4 pose matrix representing the entity/viewer relationship. This new representation comes with a novel iterative routing scheme, based on the EM algorithm.\nEvaluated on the SmallNORB dataset, the approach proves to be more accurate than previous work (beating also the recently proposed \"routing-by-agreement\" approach for capsule networks by Sabour et al.). It also generalizes well to new, unseen viewpoints and proves to be more robust to adversarial examples than traditional CNNs.\n\nCapsule networks have recently gained attention from the community. The paper addresses important shortcomings exhibited by previous work (Sabour et al.), introducing a series of valuable technical novelties.\nThere are, however, some weaknesses. The proposed routing scheme is quite complex (involving an EM-based step at each layer); it's not fully clear how efficiently it can be performed / how scalable it is. Evaluation is performed on a small dataset for shape recognition; as noted in Sec. 6, the approach will need to be tested on larger, more challenging datasets. Clarity could be improved in some parts of the paper (e.g.: Sec. 1.1 may not be fully clear if the reader is not already familiar with (Sabour et al., 2017); the authors could give a better intuition about what is kept and what is discarded, and why, from that approach. Sec. 2: the sentence \"this is incorrect because the transformation matrix...\" could be elaborated more. V_{ih} in eq. 1 is defined only a few lines below; perhaps, defining the variables before the equations could improve clarity. Sec. 2.1 could be accompanied by mathematical formulation).\nAll in all, the paper brings an original contribution and will encourage further research / discussion on an important research question (how to effectively leverage knowledge about the part-whole relationships).\n\nOther notes:\n- There are a few typos (e.g. Sec. 1.2 \"(Jaderberg et al. (2015)\", Sec. 2 \"the the transformation\", Sec. 4 \"cetral crop\" etc.).\n- The authors could discuss in more detail why the approach does not show significant improvement on NORB with respect to the state of the art.\n- The authors could provide more insights about why capsule gradients are smaller than CNN ones.\n- It would be interesting to discuss how the network could potentially be adapted, in the future, to: 1. be more efficient 2. take into account other changes produced by viewpoint changes (pixel intensities, as noted in Sec. 1).\n- In Sec, 4, the authors could provide more details about the network training.\n- In Procedure 1, for indexing tensors and matrices it might be better to use a comma to separate dimensions (e.g. V_{:,c,:} instead of V_{:c:}).", "This paper proposes a new kind of capsules for CNN. The capsule contains a 4x4 pose matrix motivated by 3D geometric transformations describing the relationship between the viewer and the object (parts). 
An EM-type of algorithm is used to compute the routing.\n\nThe authors use the smallNORB dataset as an example. Since the scenes are simulated from different viewer angles, the pose matrix quite fits the motivation. It would be more beneficial to know if this kind of capsules is limited to the motivation or is general. For example, the authors may consider reporting the results of the affNIST dataset where the digits undergo 2D affine transformations (in which case perhaps 3x3 pose matrices are enough?).\n\nMinor: The arguments in line 5 of the procedure RM Routing(a,V) do not match those in line 1 of the procedure E-Step.\n\nSection 2.1 (objective of EM) is unclear. The authors may want to explicitly write down the free energy function.\n\nThe section about robustness against adversarial attacks is interesting.\n\nOverall the idea appears to be useful but needs more empirical validation (affNIST, ImageNet, etc).\n", "The paper describes another instantiation of \"capsules\" which attempt to learn part-whole relationships and the geometric pose transformations between them. Results are presented on the smallNORB test set obtaining impressive performance.\n\nAlthough I like very much this overall approach, this particular paper is so opaquely written that it is difficult to understand exactly what was done and how the network works. It sounds like the main innovation here is using a 4x4 matrix for the pose parameters, and an iterative EM algorithm to find the correspondence between capsules (routing by agreement). But what exactly the pose matrix represents, and how they get transformed from one layer to the next, is left almost entirely to the reader's imagination. In addition, how EM factors in, what the probabilities P_ih represent, etc. is not clear. I think the authors could do a much better job explaining this model, the rationale behind it, and how it works.\n\nPerhaps the most interesting and compelling result is Figure 2, which shows how ambiguity in object class assignment is resolved with each iteration. This is very intriguing, but it would be great to understand what is going on and how this is happening.\n\nAlthough the results are impressive, if one can't understand how this was achieved it is hard to know what to make of it.\n\n", "Thank you for your comments. upon reflection we agree that the paper was confusing and we have taken several steps to reduce the opacity of our work to the reader. To that end we have done the following: \n- We have added section 2 which gives a general and intuitive explanation of the mechanism of capsule networks, paying close attention to how pose matrices get transformed from one layer to the next.\n- Having identified the EM objective as another source of confusion, we added an extended appendix in which we provide a gentle and approachable explanation for the free energy view of EM and how our routing algorithm builds upon it. \n- We have also added a paragraph to further explain figure 2 in the experiments section. \n- Finally we have made several changes to the language of the paper, focusing in particular on the notation. \nWe believe that the comprehensibility of the paper has thus improved and appreciate your criticism. ", "Thank you for your detailed reading of the paper and suggestions! 
\nAs per your comments on the EM routing, we agree that it was not presented as best it could have been, and have added an appendix to present a gentle and thorough introduction to the free energy view of EM and the objective function which our routing operation minimizes. In response to the question about efficiency, we would like to draw your attention to the total number of arithmetic operations required for the routing procedure - each iteration of routing represents fewer arithmetic operations than a single layer feed forward pass, but due to architectural optimization decisions in tensorflow, our current capsule implementation is not as fast as it could be. \n\nWe agree that larger scale testing would ideal, but due to the aforementioned efficiency limitations were not able to include it in this paper. \n\nIn regards to your other comments we have done the following: \n- To increase the clarity of the paper, we have made several changes to the language used, and improved the mathematical notation.\n- We have added section 2 which provides an intuitive explanation of capsules and makes clear when the routing occurs. We feel that improves the readers' ability to engage with the rest of the presented content. We also defined the variables and notation used in the rest of the paper more explicitly. \n- We have expanded on the sentence \"this is incorrect because the transformation matrix...\" you mentioned which is now in the appendix. \n- We have also made several changes to the nation and language throughout the paper to make it more comprehensible. \nthank you for your feedback, and hope that we have addressed your comments to your satisfaction. ", "thank you for the feedback! To address your comments we have done the following: \n- To clarify the EM objective we have added an extended and thorough appendix which presents a gentle and intuitive explanation of the free energy view of EM, and explicit free energy function, and how our routing algorithm makes use of it.\n- We believe that the benefit of capsules is not limited to smallNORB and will generalize. As suggested, we replicated the affNIST generalization experiment reported in the previous Capsule paper (Sabour et al. 2017). We found that our EM capsule model (the exact architecture used for smallNORB and MNIST in the paper), when trained to 0.8% test error on expanded MNIST (40x40 pixel MNIST images, created by padding and shifting MNIST), achieved 6.9% test error on affNIST. We trained a baseline CNN (with AlexNet architecture, without pooling) to 0.8% test error and it was only able to achieve 14.1% test error on affNIST. Our capsule model was able to half the test error of a CNN when trained on MNIST and tested on affNIST. Due to time and space constraints these results are not reported in the paper as it is now. \n- finally we address the minor issue raised in line 5 of the routing procedure. \nwe hope this has addressed your concerns, and thank you for your suggestions. ", "Author: Hang Yu | Suofei Zhang\n\nEmail: hangyu5 at illinois.edu | zhangsuofei at njupt.edu.cn\n\n## Reproduce Method\n\n#### Hyperparameters\nsmallNORB dataset:\n* Samples per epoch: 46800\n* Sample dimensions: 96x96x1\n* Batch size: 50\n* Preprocessing:\n * training:\n 1. add random brightness with max delta equals 32 / 255.\n 2. add random contrast with lower 0.5 and upper 1.5.\n 3. resize into HxW 48x48 with bilinear method.\n 4. crop into random HxW 32x32 piece.\n 5. apply batch norm to have zero mean and unit variance.\n 6. 
squash the image from 4 so that each entry has value from 0 to 1. This image is to be compared with the reconstructed image.\n * testing:\n 1. resize into HxW 48x48 with bilinear method.\n 2. crop the center HxW 32x32 piece.\n 3. apply batch norm with moving mean and moving variance collected from training data set.\n\n#### Method\n\n1. The so called dynamic routing is in analog to the fully-connected layer in CNN. The so called ConvCaps structure extends dynamic routing into convolutional filter structure. The ConvCaps are implemented similarly as the dynamic routing for the whole feature map. The only difference is to tile the feature map into kernel-wise data and treat different kernels as batches. Then EM routing can be implemented within each batch in the same way as dynamic routing.\n\n2. Different initialization strategies are used for convolutional filters. Linear weights are initialized with Xavier method. Biases are initialized with truncated normal distribution. This configuration provide higher numerical stability of input to EM algorithm.\n\n3. The output of ConvCaps2 layer is processed by em routing with kernel size of 1*1. Then a global average pooling is deployed here to results final Class Capsules. Coordinate Addition is also injected during this stage.\n\n4. Equation 2 in E-step of Procedure 1 from original paper is replaced by products of probabilities directly. All the probabilities are normalized into [0, 10] for higher numerical stability in products. Due to the division in Equation 3, this operation will not impact the final result. Exponent and logarithm are also used here for the same purpose.\n\n5. A common l2 regularization of network parameters is considered in the loss function. Beside this, reconstruction loss and spread loss are implemented as the description in the original paper.\n\n6. Learning rate: starts from 1e-3, then decays exponentially in a rate of 0.8 for every 46800/50 steps, and ends in 1e-5 (applied for all trainings).\n\n7. We use Tensorflow 1.4 API and python programming language.\n\n## Reproduce Result\n\n#### Overview\n\nExperiments on is done by Suofei Zhang. His hardware is:\n\n* cpu:Intel(R) Xeon(R) CPU E5-2680 v4@ 2.40GHz,\n* gpu:Tesla P40\n\n\n**On test accuracy**:\n\nsmallNORB dataset test accuracy (our result/proposed result):\n\n* CNN baseline (4.2M): 88.7%(best)/94.8%\n* Matrix Cap with EM routing (310K, 2 iteration): 91.8%(best)/98.6%\n\nThere are two comments to make:\n\n1. Even though the best of Matrix Cap is over by 3% to the best of CNN baseline, the test curve suggest Matrix Cap fluctuates between roughly 80% to 90% test dataset.\n2. We are curious to know the learning curve and test curve that can be generated by the author.\n\n**Training speed**:\n\n1. CNN baseline costs 6m to train 50 epochs on smallNORB dataset. Each batch costs about 0.006s.\n2. Matrix Cap costs 15h55m36s to train. Each batch costs about 1.2s.\n\n**Recon image**:\n\nWill come soon.\n\n**routing histogram**:\n\nWe have difficulty in understanding how the histogram is calculated.\n\n**AD attack**:\n\nWe haven't planned to run AD attack yet.\n\n### Notes\n\n> **Status:**\n> According to github commit history, this reproduce project had its init commit on Nov.19th. We started writing this report on Dec.19th. 
Mainly, it is cost by undedicated code review so that we have to fix bug and run it again, otherwise the project should be able to finish in a week.\n\n> **Current Results on smallNORB:**\n- Configuration: A=32, B=8, C=16, D=16, batch_size=50, iteration number of EM routing: 2, with Coordinate Addition, spread loss, batch normalization\n- Training loss. Variation of loss is suppressed by batch normalization. However, there still exists a gap between our best results and the reported results in the original paper.\n\n- Test accuracy(current best result is 91.8%)\n\n> **Current Results on MNIST:**\n- Configuration: A=32, B=8, C=16, D=16, batch_size=50, iteration number of EM routing: 2, with Coordinate Addition, spread loss, batch normalization, reconstruction loss.\n\n- Test accuracy(current best result is 99.3%, only 10% samples are used in test)\n\n##Reference\n\n[1] [MATRIX CAPSULES WITH EM ROUTING (paper)](https://openreview.net/pdf?id=HJWLfGWRb)\n\n[2] [Matrix-Capsules-EM-Tensorflow (our github repo: code and comments)](https://github.com/www0wwwjs1/Matrix-Capsules-EM-Tensorflow/)\n", "Thanks for all your research effort. It is great to read on this new paradigm and to see it actually working.\n\nOne part that is missing in my opinion, or I am very ignorantly glossing over it, is the downsampling from ConvCaps2 (L_final-1) to Class Capsules (L_final).\n\nAs mentioned, weights are shared among same entity capsules, so this would result in a one dimensional convolution (because K=1, stride=1), i.e. keeping the two spatial dimensions of the ConvCaps2 layer.\nWhereas the other layer transitions indeed keep their spatial information and result in multiple, same-entity capsules spread over the 2 input image dimensions, the final layer only has one capsule for each class for the entire image.\nIMHO this therefore requires a downsampling of the ConvCaps2 votes, a la maxpool, averagepool, or some extra dimension added to the EM routing algorithm.", "beta_v and beta_a are per capsule type. Therefore, they are vectors for both convolutional capsules and final capsules. For example in terms of the notation in fig.1 beta_a and beta_v for convCaps1 are C dimensional vectors.\n\nThanks! We will revise the paper in regard to these points.", "The dimensionality of the two trained beta parameters is not very clear to me from the paper. Are they shared across all capsules in the same layer (making them scalars) or does each capsule type have their own beta (meaning they are vectors). I have had a look at the current implementation attempts of the model on GitHub and there the interpretations vary widely as well. Could you please clarify this point?\n\nMinor notes on the algorithm (Procedure 1):\n- \"V_ich is an H dimensional vote...\": Did you mean V_ic?\n- M-Step line 5: Missing quantifier. Like mu and sigma, cost_h is computed for all h", "The spread-loss in 3.1 is the square of the WW-hinge-loss for multi-class SVMs, a large-margin loss.\n\nSee:\n\nWeston, Jason; Watkins, Chris (1999). \"Support Vector Machines for Multi-Class Pattern Recognition\" (PDF). 
European Symposium on Artificial Neural Networks.\n\nand the following paper describes the relations of the different variants of this loss:\n\nhttp://jmlr.org/papers/v17/11-229.html\nIn the notation of that paper, it would be the combination of sum-over-others aggregation with relative margin concept and squared hinge loss.\n\nFor theoretical considerations, the log-probability should be used, in which case m = 1 is fine and the last layer would not need to be normalized any more.", "The sentence fragment:\n Spatial transformer networks (Jaderberg et al. (2015) seeks\nis missing a ), and the subject is plural and not singular. So it should be:\n Spatial transformer networks (Jaderberg et al. (2015)) seek", "1) \"V_ih is the product of the the transformation matrix W_ic that is learned discriminatively\"\nThere is part of the sentense missing. Also, I believe this sentence describes \"V_i\" and not \"V_ih\". Suggestion:\n\" ... and V_ih is the value on dimension h of the vote V_i from capsule i to capsule c. V_i is obtained by taking the matrix product of the pose p_i of capsule i and the transformation Matrix W_ic. W_ic is learned discriminatively.\"\n\n2) From what I understand, the vote V_i is a matrix (since it's obtained by multiplying a 4x4 matrix with a 4x4 matrix), and v_ih is a scalar. I found \"V_ih is the value on dimension h of the vote ...\" to be missleading. Maybe it should be mentioned that V_i has to be reshaped into a vector first and then its h'th entry is V_ih?", "W_{ic} is 4*4 if you flatten the capsule types and grid positions. Therefore i goes over changes in the range of (1, channels * height * width) in this formulation.\n\nHowever, We share the W_ic between different positions of two capsule types as in a convolutional layer with a kernel size k. Therefore, the total number of trainable parameters between two convolutional capsule layer types is 4*4*k*k and for the whole layer is 4*4*k*k*B*C. Where B is the number of different capsule types in layer bellow and C is the number of different capsule types in the next layer.\n\nPlease note that it is 4*4 rather than (4*4)*(4*4). ", "As Jianfei has explained, the primary capsule layer is a convolutional layer with 1x1 kernel. It transforms the A channels in the first layer to B*(4x4+1) channels. Then we split the B*(4x4+1) channels into B*(4x4) as the pose matrices for B capsules and B*1 as the activation logits of B capsules in primary layer. Then we apply sigmoid nonlinearity on the activation logits.", "In the convolutional capsule layers, what's the dimensionality of transformation matrix W_{ic}?\nIs it still (4*4)->(4*4) which correspond to a 1*1 linear convolutional layer?\nor it is (4*4*k*k)->(4*4) which correspond to a k*k linear convolutional layer?", "Figure 1 explains that. I guess they use a A*B*(4*4+1) kernel to (linear) transform a 1 width * 1 height * 32 channels patch to 32 capsules, each shape is 4*4+1. Then they reshape the 4*4 part as a matrix and apply a sigmoid on the 1 part. ", "I still don't understand the transformation from convolution layer to primary capsule layer? Is it achieved by slicing 4x4*32 patches from the conv layer and then do a linear transformation for each 4x4 matrices? what is the weight in \"The activations of the primary capsules are produced by applying the sigmoid function to weighted sums of the same set of lower layer ReLUs.\" is it the 4x4 variable? 
I found it confusing, can you elaborate how this works.", "Can you write down what exactly is the objective function in Section 2.1?", "They gain a lot by using the meta data at test time. Without using that information (which normally is not available at test time) they get 2.6%. ", "The meta data is not used during test time only during training time.", "1.5% error rate has previously been reported on small NORB.\nhttps://www.researchgate.net/publication/265335724_Nonlinear_Supervised_Locality_Preserving_Projections_for_Visual_Pattern_Discrimination" ]
[ -1, 7, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ByAqs7VJf", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "ByZRu4ClG", "Hykw8iKxG", "ry1nhoKgM", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "ryTPZJd-f", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "Hy9EvktkG", "ryM_Fi4JM", "iclr_2018_HJWLfGWRb", "ryM_Fi4JM", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "S1uPsnwR-", "ByVzDRDRW", "iclr_2018_HJWLfGWRb" ]
iclr_2018_BJE-4xW0W
CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training
We introduce causal implicit generative models (CiGMs): models that allow sampling from not only the true observational but also the true interventional distributions. We show that adversarial training can be used to learn a CiGM, if the generator architecture is structured based on a given causal graph. We consider the application of conditional and interventional sampling of face images with binary feature labels, such as mustache, young. We preserve the dependency structure between the labels with a given causal graph. We devise a two-stage procedure for learning a CiGM over the labels and the image. First we train a CiGM over the binary labels using a Wasserstein GAN where the generator neural network is consistent with the causal graph between the labels. Later, we combine this with a conditional GAN to generate images conditioned on the binary labels. We propose two new conditional GAN architectures: CausalGAN and CausalBEGAN. We show that the optimal generator of the CausalGAN, given the labels, samples from the image distributions conditioned on these labels. The conditional GAN combined with a trained CiGM for the labels is then a CiGM over the labels and the generated image. We show that the proposed architectures can be used to sample from observational and interventional image distributions, even for interventions which do not naturally occur in the dataset.
accepted-poster-papers
This paper proposes an interesting machinery around Generative Adversarial Networks to enable sampling not only from conditional observational distributions but also from interventional distributions. This is an important contribution as this means that we can obtain samples with desired properties that may not be present in the training set; useful in applications such as ones involving fairness and also when data collection is expensive and biased. The main component called the causal controller models the label dependencies and drives the standard conditional GAN. As reviewers point out, the causal controller assumes the knowledge of the causal graph which is a limitation as this is not known a priori in many applications. Nevertheless, this is a strong paper that convincingly demonstrates a novel approach to incorporate causal structure into generative models. This should be of great interest to the community and may lead to interesting applications that exploit causality. I recommend acceptance.
test
[ "Bk3mPx0xM", "rkBmo9ryM", "ryv9d98lf", "HJlNEQvGf", "r1ZoQQPMz", "ryPEmQwMf", "H151-7mlG", "HJV1pD3eG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author" ]
[ "The paper describes a way of combining a causal graph describing the dependency structure of labels with two conditional GAN architectures (causalGAN and causalBEGAN) that generate images conditioning on the binary labels. Ideally, this type of approach should allow not only to generate images from an observational distribution of labels (e.g. P(Moustache=1)), but also from unseen interventional distributions (e.g. P(Male=0 | do(Moustache =1)).\n\nMaybe I misunderstood something, but one big problem I have with the paper is that for a “causalGAN” approach it doesn’t seem to do much causality. The (known) causal graph is only used to model the dependencies of the labels, which the authors call the “Causal Controller”. On this graph, one can perform interventions and get a different distribution of labels from the original causal graph (e.g. a distribution of labels in which women have the same probability as men of having moustaches). Given the labels, the rest of the architecture are extensions of conditional GANs, a causalGAN with a Labeller and an Anti-Labeller (of which I’m not completely sure I understand the necessity) and an extension of a BEGAN. The results are not particularly impressive, but that is not an issue for me.\n\nMoreover sometimes the descriptions are a bit imprecise and unstructured. For example, Theorem 1 is more like a list of desiderata and it already contains a forward reference to page 7. The definition of intervention in the Background applies only to do-interventions (Pearl 2009) and not to general interventions (e.g. consider soft, uncertain or fat-hand interventions). \n\nOverall, I think the paper proposes some interesting ideas, but it doesn’t explore them yet in detail. I would be interested to know what happens if the causal graph is not known, and even worse cannot be completely identified from data (so there is an equivalence class of possible graphs), or potentially is influenced by latent factors. Moreover, I would be very curious about ways to better integrate causality and generative models, that don’t focus only on the label space. \n\n\nMinor details:\nPersonally I’m not a big fan of abusing colons (“:”) instead of points (“.”). See for example the first paragraph of the Related Work.\n\nEDIT: I read the author's rebuttal, but it has not completely addressed my concerns, so my rating has not changed.", "In their paper \"CausalGAN: Learning Causal implicit Generative Models with adv. training\" the authors address the following issue: Given a causal structure between \"labels\" of an image (e.g. gender, mustache, smiling, etc.), one tries to learn a causal model between these variables and the image itself from observational data. Here, the image is considered to be an effect of all the labels. Such a causal model allows us to not only sample from conditional observational distributions, but also from intervention distributions. These tasks are clearly different, as nicely shown by the authors' example of \"do(mustache = 1)\" versus \"given mustache = 1\" (a sample from the latter distribution contains only men). The paper does not aim at learning causal structure from data (as clearly stated by the authors). The example images look convincing to me.\n\nI like the idea of this paper. IMO, it is a very nice, clean, and useful approach of combining causality and the expressive power of neural networks. The paper has the potential of conveying the message of causality into the ICLR community and thereby trigger other ideas in that area. 
For me, it is not easy to judge the novelty of the approach, but the authors list related works, none of which seems to solve the same task. The presentation of the paper, however, should be improved significantly before publication. (In fact, because of the presentation of the paper, I was hesitating whether I should suggest acceptance.) Below, I give some examples (and suggest improvements), but there are many others. There is a risk that in its current state the paper will not generate much impact, and that would be a pity. I would therefore like to ask the authors to put a lot of effort into improving the presentation of the paper. \n\n\n- I believe that I understand the authors' intention of the caption of Fig. 1, but \"samples outside the dataset\" is a misleading formulation. Any reasonable model does more than just reproducing the data points. I find the argumentation the authors give in Figure 6 much sharper. Even better: add the expression \"P(male = 1 | mustache = 1) = 1\". Then, the difference is crystal clear.\n- The difference between Figures 1, 4, and 6 could be clarified. \n- The list of \"prior work on learning causal graphs\" seems a bit random. I would add Spirtes et al 2000, Heckermann et al 1999, Peters et al 2016, and Chickering et al 2002. \n- Male -> Bald does not make much sense causally (it should be Gender -> Baldness)... Aha, now I understand: The authors seem to switch between \"Gender\" and \"Male\" being random variables. Make this consistent, please. \n- There are many typos and comma mistakes. \n- I would introduce the do-notation much earlier. The paragraph on p. 2 is now written without do-notation (\"intervening Mustache = 1 would not change the distribution\"). But this way, the statements are at least very confusing (which one is \"the distribution\"?).\n- I would get rid of the concept of CiGM. To me, it seems that this is a causal model with a neural network (NN) modeling the functions that appear in the SCM. This means, it's \"just\" using NNs as a model class. Instead, one could just say that one wants to learn a causal model and the proposed procedure is called CausalGAN? (This would also clarify the paper's contribution.)\n- many realizations = one sample (not samples), I think. \n- Fig 1: which model is used to generate the conditional sample? \n- The notation changes between E and N and Z for the noises. I believe that N is supposed to be the noise in the SCM, but then maybe it should not be called E at the beginning. \n- I believe Prop 1 (as it is stated) is wrong. For a reference, see Peters, Janzing, Scholkopf: Elements of Causal Inference: Foundations and Learning Algorithms (available as pdf), Definition 6.32. One requires the strict positivity of the densities (to properly define conditionals). Also, I believe the Z should be a vector, not a set. \n- Below eq. (1), I am not sure what the V in P_V refers to.\n- The concept of data probability density function seems weird to me. Either it is referring to the fitted model, then it's a bad name, or it's an empirical distribution, then there is no pdf, but a pmf.\n- Many subscripts are used without explanation. r -> real? g -> generating? G -> generating? Sometimes, no subscripts are used (e.g., Fig 4 or figures in Sec. 8.13)\n- I would get rid of Theorem 1 and explain it in words for the following reasons. (1) What is an \"informal\" theorem? (2) It refers to equations appearing much later. (3) It is stated again later as Theorem 2. 
\n- Also: the name P_g does not appear anywhere else in the theorem, I think. \n- Furthermore, I would reformulate the theorem. The main point is that the intervention distributions are correct (this fact seems to be there, but is \"hidden\" in the CIGN notation in the corollary).\n- Re. the formulation in Thm 2: is it clear that there is a unique global optimum (my intuition would say there could be several), thus: better write \"_a_ global minimum\"?\n- Fig. 3 was not very clear to me. I suggest to put more information into its caption. \n- In particular, why is the dataset not used for the causal controller? I thought, that it should model the joint (empirical) distribution over the labels, and this is part of the dataset. Am I missing sth?\n- IMO, the structure of the paper can be improved. Currently, Section 3 is called \"Background\" which does not say much. Section 4 contains CIGMs, Section 5 Causal GANs, 5.1. Causal Controller, 5.2. CausalGAN, 5.2.1. Architecture (which the causal controller is part of) etc. An alternative could be: \nSec 1: Introduction \nSec 1.1: Related Work\nSec 2: Causal Models\nSec 2.1: Causal Models using Generative Models (old: CIGM)\nSec 3: Causal GANs\nSec 3.1: Architecture (including controller)\nSec 3.2: loss functions \n...\nSec 4: Empricial Results (old: Sec. 6: Results)\n- \"Causal Graph 1\" is not a proper reference (it's Fig 23 I guess). Also, it is quite important for the paper, I think it should be in the main part. \n- There are different references to the \"Appendix\", \"Suppl. Material\", or \"Sec. 8\" -- please be consistent (and try to avoid ambiguity by being more specific -- the appendix contains ~20 pages). Have I missed the reference to the proof of Thm 2?\n- 8.1. contains copy-paste from the main text.\n- \"proposition from Goodfellow\" -> please be more precise\n- What is Fig 8 used for? Is it not sufficient to have and discuss Fig 23? \n- IMO, Section 5.3. should be rewritten (also, maybe include another reference for BEGAN).\n- There is a reference to Lemma 15. However, I have not found that lemma.\n- I think it's quite interesting that the framework seems to also allow answering counterfactual questions for realizations that have been sampled from the model, see Fig 16. This is the case since for the generated realizations, the noise values are known. The authors may think about including a comment on that issue.\n- Since this paper's main proposal is a methodological one, I would make the publication conditional on the fact that code is released. \n\n\n", "This should be the first work which introduces in the causal structure into the GAN, to solve the label dependency problem. The idea is interesting and insightful. The proposed method is theoretically analyzed and experimentally tested. Two minor concerns are 1) what is the relationship between the anti-labeler and and discriminator? 2) how the tune related weight of the different objective functions. ", "Thank you for your detailed and insightful comments.\n\n- On structural changes, suggestions: Thank you for taking time to point out these points. We will add the listed references and implement all the suggested changes to make the presentation more clear, to make the wording more consistent, to fix typos, and to remove the CiGM concept, explaining it in words.\n\n- On Fig. 1: Our CausalBEGAN implementation is used to generate this figure.\n\n- On Prop. 1: Thank you for pointing this out. 
As you correctly observe, strict positivity of the densities is required for this to be true. We will move our assumption that label distribution is strictly positive to here as an assumption for the theorem.\n\n- On data probability density: The data probability density function corresponds to a hypothetical distribution from which the finite sized dataset was sampled.\n\n- On Theorem 1 (Informal): We will remove the “informal theorem” and replace with the a statement in words.\n\n- On formulation in Theorem 2: Since the optimization is assumed to have been done on the pdf level, and since KL divergence is zero if and only if the distributions are the same, the global minimum is unique, although there may be multiple parameterizations of the network that achieves this global minimum. \n\n- On dataset not being used for causal controller: We haven’t shown the connection to the dataset in the CausalGAN architecture figure since we assume it is already pretrained with the same dataset. We will clarify this in the figure caption.\n\n- On counterfactual samples from distribution: Thank you for your insightful comment. We do not assume that the noise distributions are known (this is not required for interventional samples to be correct). We will add a paragraph explaining that if the noise terms are known, we can use our framework to take counterfactual samples.\n\n- On code availability: The code will be made public and linked in the paper in the camera ready version.", "Thank you for your positive comments and feedback.\n\n- The Anti-labeler estimates labels of a given generated image. The Discriminator estimates whether a given image is real or generated, which is standard in the GAN literature. Please see Section 5.2.1 and 5.2.2 for their role and importance. 2) We did not scale the different objective functions. The main reason is that the theory we have suggests no scaling is needed, which we observe in practice. ", "Thank you for your comments and feedback.\n\n- On the use of causality:\n\nYou are correct, the causal controller is the causal part of our paper. The point is that we assume the causal graph structure but not the functions that determine the structural equations. The novelty is that the structural equations can be modeled with neural networks and learned through adversarial training. \n\nThe second novelty is that by creating the image conditional GAN (with a labeler and anti-labeler), we can provably guarantee that we sample from conditional and interventional distributions of labels and images. The complexity of having a labeler and an anti-labeler is needed for our proof. \n\nAnother interesting byproduct of our method is that the image generation (which is essentially a conditional GAN) can be creative, i.e., produce images that never appear in the training set which does not happen for other conditional GANs.\n\n- On structural suggestions: Thank you for your comments on structuring and presentation. Among other changes, we will remove the “informal theorem” and replace with the a statement in words.\n\n- When the causal graph is unknown: \n\nIt is indeed very interesting to extend our framework for learning the causal graph structure or when there are latent variables. \nWe investigate the effect of using the wrong causal graph in the appendix of the paper. We see that, as long as CIs in the data are respected, a wrong causal graph can also be learned with a GAN. 
As it is evident from this observation, it is not trivial to infer causality from how well the data can be fit. This is an interesting direction for future work. ", "seems that on equations 2 and 3 (page 7) you need to switch between the l=0 and l=1 positions.... ", "As you correctly observe, the positions of l=0 and l=1 should be swapped in (2) and (3)." ]
[ 6, 7, 9, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJE-4xW0W", "iclr_2018_BJE-4xW0W", "iclr_2018_BJE-4xW0W", "rkBmo9ryM", "ryv9d98lf", "Bk3mPx0xM", "iclr_2018_BJE-4xW0W", "H151-7mlG" ]
iclr_2018_SJyEH91A-
Learning Wasserstein Embeddings
The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that allows to break its inherent complexity. It relies on the search of an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows to move from the embedding space back to the original input space. Once this embedding has been found, computing optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be conducted extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
accepted-poster-papers
The paper presents a practical approach to compute Wasserstein-distance-based image embeddings. The Euclidean distance in the embedded space approximates the true Wasserstein distance, thus reducing the high computation cost associated with the latter. Pros: - Reviewers agree that the proposed solution is novel, straightforward and well described. - Experiments demonstrate the usefulness of such embeddings for data mining tasks such as fast computation of barycenters & geodesic analysis. Cons: - Though the empirical analysis is convincing, the paper lacks theoretical analysis of the approximation quality.
train
[ "S1FE0K2eG", "r11xXR3xf", "SkpXB7TlM", "Hk4iVgFff", "ryBDExKGM", "SylcXetzz", "BywLGeYMz", "HyE_4duzG", "B11F7OOfM", "rkpfmO_fG", "SJqdGdufM", "SJddZ_OzM", "Byrxn34Wf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper proposes to use a deep neural network to embed probability distributions in a vector space, where the Euclidean distance in that space matches the Wasserstein distance in the original space of probability distributions. A dataset of pairs of probability distributions and their Wasserstein distance is collected, and serves as a target to be predicted by the deep network.\n\nThe method is straightforward, and clearly explained. Two analyses based on Wasserstein distances (computing barycenters, and performing geodesic analysis) are then performed directly in the embedded space.\n\nThe authors claim that the proposed method produces sharper barycenters than those learned using the standard (smooth) Wasserstein distance. It is unclear from the paper whether the advantage comes from the ability of the method to scale better and use more examples, or to be able to use the non-smooth Wasserstein distance, or finally, whether the learning of a deep embedding yields improved extrapolation properties. A short discussion could be added. It would also be interesting to provide some guidance on what is a good structure for the encoder (e.g. should it include spatial pooling layers?)\n\nThe term “Wasserstein deep learning” is probably too broad, “deep Wasserstein embedding” could be more appropriate.\n\nThe last line of future work in the conclusion seems to describe the experiment of Table 1.", "The paper presents a simple idea to reduce the computational cost of computing Wasserstein distance between a pair of histograms. Specifically, the paper proposes learning an embedding on the original histograms into a new space where Euclidean distance in the latter relates to the Wasserstein distance in the original space. Despite simplicity of the idea, I think it can potentially be useful practical tool, as it allows for very fast approximation of Wasserstein distance. The empirical results show that embeddings learned by the proposed model indeed provide a good approximation to the actual Wasserstein distances.\n\nThe paper is well-written and is easy to follow and understand. There are some grammar/spelling issues that can be fixed by a careful proofreading. Overall, I find the paper simple and interesting.\n\nMy biggest concern however is the applicability of this approach to high-dimensional data. The experiments in the paper are performed on 2D histograms (images). However, the number of cells in the histogram grows exponentially in dimension. This may turn this approach impractical even in a moderate-sized dimensionality, because the input to the learning scheme requires explicit representation of the histogram, and the proposed method may quickly run into memory problems. In contrast, if one uses the non-learning based approach (standard LP formulation of Wasserstein distance), at least in case of W_1, one can avoid memory issues caused by the dimensionality by switching to the dual form of the LP. I believe that is an important property that has made computation of Wasserstein distance practical in high dimensional settings, but seems inapplicable to the learning scheme. If there is a workaround, please specify.\n", "This paper proposes approximating the Wasserstein distance between normalized greyscale images based on a learnable approximately isometric embedding of images into Euclidean space. The paper is well written with clear and generally thorough prose. 
It presents a novel, straightforward and practical solution to efficiently computing Wasserstein distances and performing related image manipulations.\n\nMajor comments:\n\nIt sounds like the same image may be present in the training set and eval set. This is methodologically suspect, since the embedding may well work better for images seen during training. This affects all experimental results.\n\nI was pleased to see a comparison between using exact and approximate Wasserstein distances for image manipulation in Figure 5, since that's a crucial aspect of whether the method is useful in practice. However the exact computation (OT LP) appears to be quite poor. Please explain why the approximation is better than the exact Wasserstein difference for interpolation. Relatedly, please summarize the argument in Cuturi and Peyre that is cited (\"as already explained in\").\n\nMinor comments:\n\nIn section 3.1 and 4.1, \"histogram\" is used to mean normalized-to-sum-to-1 images, which is not the conventional meaning.\n\nIt would help to pick one of \"Wasserstein Deep Learning\" and \"Deep Wasserstein Embedding\" and use it and the acronym consistently throughout.\n\n\"Disposing of a decoder network\" in section 3.1 should be \"using a decoder network\"?\n\nIn section 4.1, the architectural details could be clarified. What size are the input images? What type of padding for the convolutions? Was there any reason behind the chosen architecture? In particular the use of a dense layers followed by convolutional layers seems peculiar.\n\nIt would be helpful to say explicitly what \"quadratic ground metric\" means (i.e. W_2, I presume) in section 4.2 and elsewhere.\n\nIt would be helpful to give a sense of scale for the numbers in Table 1, e.g. give the 95th percentile Wasserstein distance. Perhaps use the L2 distance passed through a 1D-to-1D learned warping as a baseline.\n\nMention that OT stands for optimal transport in section 4.3.\n\nSuggest mentioning \"there is no reason for a Wasserstein barycenter to be a realistic sample\" in the main text when first discussing barycenters.", "The main interest of the method is to be able to compute a fast and accurate approximation of the true Wasserstein distance (and not the regularized one), but the embedding could also be learned to reflect a regularized version of W if needed by the application. The sharper quality of barycenters mostly comes with the fact that we are handling true Wasserstein distances and not regularized ones\n", "This is a difficult question. The Wasserstein distance cares about spatial location, hence adding spatial pooling in our network may coarser the embedding. For bigger images, we may consider strided convolutions instead of max-pooling. This is currently under examination as we are working with larger images, but with no definitive answer for the moment.\n", "Indeed the first of line of future work is concerned with transferability issue of a learned mapping toward a new \ndataset. In the paper we have examined if the mapping was transferable and we observed that it is mostly data dependent. In a future line of work, we would like to see if we can ‘transfer’ an already learnt embedding to work on a different dataset (as would work a domain adaptation technique). We have rephrased the text to state this idea more clearly. \n", "Indeed we agree with the reviewer that the input dimension of our embedding network scales linearly in terms of bins in the histograms. 
Note however that dual (or semi-dual) approaches require the computation of Kantorovich potentials that are scalar functions of the dimension of ambient (input) space, that turns to be of same size as the number of bins of the histogram. Hence both views require to process the data through networks that have the same input size and might suffer from the same problem of high dimensionality. If considering 2D, 3D or 4D tensors, note however that neural networks architecture are known to accommodate well to such dimensions (generally through convolution and pooling layers). We also note that in high dimensions, even computing a single Wasserstein distance is difficult, and a recent analysis [1] shows also the impact of dimensionality in estimating accurately the Wasserstein distance.\n\n[1] J. Weed, F. Bach. Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Technical Report, Arxiv-1707.00087, 2017", "First of all, thanks for the reviewer for helping us to improve our manuscript.\n\nRegarding the Quadratic ground metric, it refers to squared Euclidean distance. As it was indeed not clear in the paper, we changed this notation in the revised version.\n\nRegarding the scale of table 1, the theoretical maximum distance is 1458 (all mass between pixels in opposite corners), in average the pairwise wasserstein distance if of the order 12 for MNIST and 15 for CAT, CRAB and FACES but with relative MSE of order 1e-3 (see Table 1 for the exact values) for which is quite large with respect to the quadratic mean error reported in the tables.", "We provide details about the architecture we used :\n\nEncoder :\n- input size (1, 28, 28)\n- a convolutional layer: 20 filters of kernel size 3 by 3, with zero padding and ReLu activation\n- a convolutional layer: 10 filters of kernel size 3 by 3, with zero padding and ReLu activation\n- a convolutional layer:5 filters of kernel size 5 by 5, with zero padding and ReLu activation\n- a fully connected layer with 100 output neurons and ReLu activation\n- a fully connected layer with 50 output neurons, Relu activation. The output is our embedding.\n\nDecoder :\n- input size (50,)\n- a fully connected layer with 100 output neurons and ReLu activation\n- a fully connected layer with 5*28*28 output neurons and ReLu activation\n- a reshape layer of target size (5, 28, 28)\n- a convolutional layer: 10 filters of kernel size 5 by 5, with zero padding and ReLu activation\n- a convolutional layer: 20 filters of kernel size 3 by 3, with zero padding and ReLu activation\n- a convolutional layer: 1 filter of kernel size 3 by 3, with zero padding and ReLu activation\n- a Softmax layer whose output is the image reconstruction.\n\nAll weights are initialized with Glorot’s rule.\nIn the encoder, there is no dense layer followed by a convolutional layer. However without max-pooling, we need dense layers at the end of the encoder to control the size of the embedding. Hence to mimic the inversion of each layer of the encoder, we indeed add dense layers followed by convolutional layers.\nWe also plan to publish a version of our code on GitHub.\n", "We are referring to Figures 3.1 and 3.2 in the paper ‘A smoothed dual approach for variational Wasserstein problems’ from Cuturi and Peyré, that show how the exact solution of the linear program corresponding to an interpolation in the Wasserstein sense of two Gaussians can lead to a staircase effect in the interpolated Gaussian, that is mainly due to discretization. 
We believe that the reconstructed images in our case suffer from the same discretization effect. ", "With our settings, it may be possible to have some redundancy between the training and the test set. However, we ensure that statistically, it is highly unlikely to have redundant couples of images between the training and test set. Eventually, the higher the number of images to compute pairwise Wasserstein distance is, the lower is the probability of sharing images between the training and test set: especially when N > sqrt(100,000). We ensure this condition for every dataset ( N(mnist)=50000, N(face)=161666, N(crab)= 126930, N(cat)= 123202).\nRegarding our experiments on Principal Geodesic Analysis and Barycenter’s estimation, those have been done on test images independent from the training set.\n\nHowever, to clear any doubt regarding the efficiency of our method, we update Figure 2, Figure 9 and Table 1: we tested the pairwise Wasserstein distance with test images independent from the training set. Our results remain almost unchanged.\n", "Thanks for your comments. When referring to dimensionality of distributions, several dimensions can be taken into account: dimension of the ambient space, dimension of discretization (number of bins in the histograms) in a Eulerian setting or number of Diracs in a Lagrangian view of empirical distributions. Our method is for now adapted mostly to distributions with fixed discretization on a constant Eulerian grid, that corresponds to the input size of the embedding network. As such, it is difficult to consider empirical distributions that we would draw from multivariate Gaussians (hence with known and computable Wasserstein distance). Note that also in this case, a sampling error should be taken into account (and the exact W distance would be different from the theoretical one). We have started working on ways to embed empirical distributions in a similar framework as the one developed in our paper but this is somehow out the scope of the proposed work. Regarding the generalization of our approach to larger number of bins in the histogram (Eulerian view), the problem of computing even a single Wasserstein distance may arise, especially because the size of the coupling scales quadratically in the number of bins. While for 2D and 3D histograms convolutional Wasserstein distances can be used to compute efficiently the Wasserstein distance, scaling to larger dimension of ambient space is still an open issue. Working with stochastic semi-dual or dual approaches such as in [Genevay et al. 2016] is a possible option, but it comes with higher computational costs, that prevents computing Wasserstein distances for a large number of pairs. \n", "I think it is an interesting paper for approximating OT on low-dimensional space. \n\nCould the author comment on how accurate/applicable this approach will generalize to high dimensional distributions? \n\n\nAlso we know the closed formula for computing Wasserstein distance between multivariate Gaussians. Maybe a sanity check with Gaussians can strengthen the related claims numerically. \n\nThanks!" ]
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJyEH91A-", "iclr_2018_SJyEH91A-", "iclr_2018_SJyEH91A-", "S1FE0K2eG", "S1FE0K2eG", "S1FE0K2eG", "r11xXR3xf", "SkpXB7TlM", "SkpXB7TlM", "SkpXB7TlM", "SkpXB7TlM", "Byrxn34Wf", "iclr_2018_SJyEH91A-" ]
iclr_2018_BJNRFNlRW
TRAINING GENERATIVE ADVERSARIAL NETWORKS VIA PRIMAL-DUAL SUBGRADIENT METHODS: A LAGRANGIAN PERSPECTIVE ON GAN
We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization. The inherent connection does not only provide a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training. The modified objective function forces the distribution of generator outputs to be updated along the direction according to the primal-dual subgradient methods. A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN. Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples.
accepted-poster-papers
The paper makes a good theoretical contribution by formulating GAN training as a primal-dual subgradient method for convex optimization and providing a convergence proof. The authors then propose a modified objective for standard GAN training, based on this formulation, that helps address the mode collapse issue. One weak point of the paper, as pointed out by reviewers, is that the experimental results are underwhelming and the approach may not scale well to high dimensional datasets / high-resolution images. Interestingly, the proposed approach is general enough to be applied to other GAN variants that may address this issue in future. I recommend acceptance.
train
[ "SkmK6TUxz", "SkvyRa3lG", "SJHM1GxbG", "Bkv1X-3Qz", "SyfZHx5Xz", "H1yaEeqXG", "SJMKDw_7f", "B1haOAOXG", "r1o4ORO7z", "SJ6GI0_XG", "H1X2ld_7G", "HJlcnP_XG", "B1dsSPOXG", "rybMrduXG", "r1FDmdumM", "ry3SXdO7M", "Hy0GRPumz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper formulates GAN as a Lagrangian of a primal convex constrained optimization problem. They then suggest to modify the updates used in the standard GAN training to be similar to the primal-dual updates typically used by primal-dual subgradient methods.\n\nTechnically, the paper is sound. It mostly leverages the existing literature on primal-dual subgradient methods to modify the GAN training procedure. I think this is a nice contribution that does yield to some interesting insights. However I do have some concerns about the way the paper is currently written and I find some claims misleading.\n\nPrior convergence proofs: I think the way the paper is currently written is misleading. The authors quote the paper from Ian Goodfellow: “For GANs, there is no theoretical prediction as to\nwhether simultaneous gradient descent should converge or not.”. However, the f-GAN paper gave a proof of convergence, see Theorem 2 here: https://arxiv.org/pdf/1606.00709.pdf. A recent NIPS paper by (Nagarajan and Kolter, 2017) also study the convergence properties of simultaneous gradient descent. Another problem is of course the assumptions required for the proof that typically don’t hold in practice (see comment below).\n\nConvex-concave assumption: In practice the GAN objective is optimized over the parameters of the neural network rather than the generative distribution. This typically yields a non-convex non-concave optimization problem. This should be mentioned in the paper and I would like to see a discussion concerning the gap between the theory and the practical algorithm.\n\nRelation to existing regularization techniques: Combining Equations 11 and 13, the second terms acts as a regularizer that minimizes [\\lapha f_1(D(x_i))]^2. This looks rather similar to some of the recent regularization techniques such as\nImproved Training of Wasserstein GANs, https://arxiv.org/pdf/1704.00028.pdf\nStabilizing Training of Generative Adversarial Networks through Regularization, https://arxiv.org/pdf/1705.09367.pdf\nCan the authors comment on this? I think this would also shed some light as to why this approach alleviates the problem of mode collapse.\n\nCurse of dimensionality: Nonparametric density estimators such as the KDE technique used in this paper suffer from the well-known curse of dimensionality. For the synthetic data, the empirical evidence seem to indicate that the technique proposed by the authors does work but I’m not sure the empirical evidence provided for the MNIST and CIFAR-10 datasets is sufficient to judge whether or not the method does help with mode collapse. The inception score fails to capture this property. Could the authors explore other quantitative measure? Have you considered trying your approach on the augmented version of the MNIST dataset used in Metz et al. (2016) and Che et al. 
(2016)?\n\nExperiments\nTypo: Should say “The data distribution is p_d(x) = 1{x=1}”.\n", "This paper proposed a framework to connect the solving of GAN with finding the saddle point of a minimax problem.\nAs a result, the primal-dual subgradient methods can be directly introduced to calculate the saddle point.\nAdditionally, this idea not only fill the relatviely lacking of theoretical results for GAN or WGAN, but also provide a new perspective to modify the GAN-type models.\nBut this saddle point model reformulation in section 2 is quite standard, with limited theoretical analysis in Theorem 1.\nAs follows, the resulting algorithm 1 is also standard primal-dual method for a saddle point problem.\nMost important I think, the advantage of considering GAN-type model as a saddle point model is that first--order methods can be designed to solve it. But the numerical experiments part seems to be a bit weak, because the MINST or CIFAR-10 dataset is not large enough to test the extensibility for large-scale cases. ", "In this paper, the authors study the relationship between training GANs and primal-dual subgradient methods for convex optimization. Their technique can be applied on top of existing GANs and can address issues such as mode collapse. The authors also derive a GAN variant similar to WGAN which is called the Approximate WGAN. Experiments on synthetic datasets demonstrate that the proposed formulation can avoid mode collapse. This is a strong contribution\n\nIn Table 2 the difference between inception scores for DCGAN and this approach seems significant to ignore. The authors should explain more possibly.\nThere is a typo in Page 2 – For all these varaints, -variants.\n", "Thanks for clarifying this point.", "We would like to thank the reviewer for raising this up. We added the discussion in Section 3.2 of the paper (see below).\n\n\\textit{Although having good convergence guarantee in theory, the non-parametric kernel density estimation\nof the generated distribution may suffer from the curse of dimension. Previous works combining\nkernel learning and the GAN framework have proposed methods to scale the algorithms to deal with\nhigh-dimensional data, and the performances are promising (Li et al., 2015; 2017a; Sinn \\& Rawat,\n2017). One common method is to project the data onto a low dimensional space using an autoencoder or a bottleneck layer of a pretrained neurual network, and then apply the kernel-based estimates on the feature space. Using this approach, the estimated probability of $\\bx_i$ becomes\n\\begin{align}\\label{eq:calc_prob}\np_g (\\bx_i) = \\frac{1}{m_2} \\sum_{j=1}^{m_2} k_{\\sigma} (f_{\\phi} (G(\\bz_j) )-f_{\\phi}( \\bx_i )),\n\\end{align}\nwhere $f_{\\phi} (.)$ is the projection of the data to a low dimensional space. We will leave the work of generating high-resolution images using this approach as future work. }\n\nWe agree with the reviewer that it will definitely help with more convincing experiments. Since the application of kernel methods with data projected onto a low-dimensional space has shown promising results in the previous works, we think it should be applicable to our approach as well. Nevertheless, training a GAN for high-dimensional data such as high resolution images may require more sophisticated GAN architectures, similar to the paper of \"``Progressive Growing of GANs for Improved Quality, Stability, and Variation''. 
We will leave this as our future work.", "Comment: Can you confirm that all the regularization methods cited above would avoid the mode collapse problem for the specific example you used in your paper?\n\nOur reply: Since we did not implement all the methods, we can only provide some arguments from the perspective of theory.\n\n[a] Improved Training of Wasserstein GANs.\n[b] Stabilizing Training of Generative Adversarial Networks through Regularization.\n[c] Mode Regularized Generative Adversarial Networks.\n\nFor WGAN, the purpose of weight clipping and the gradient norm penalty in reference [a] is to enforce the Lipschitz condition of the discriminator function. The toy example shows that even with optimization over the functions with Lipschitz constraints, mode collapse still occurs.\n\nFor reference [b], the regularization technique has the effect of adding noise perturbation to the discriminator input, so that the support of data plus noise will be overlapped with the support of generated data plus noise. Without noise addition, the true data support may be disjoint with the generated data, which yields mode collapse. If the support of noise is large enough, the noise perturbation technique should be able to alleviate the mode collapse problem. However, to our knowledge, the regularizer applied in reference [b] is derived from a first-order Taylor expansion, assuming the perturbation is {\em small}. Therefore, it may require fine tuning of the noise variance (equivalent to the weighting factor in the regularizer) in practical training, especially in high dimensional datasets. \n\nFor reference [c], assuming $E(.)$ can effectively encode the data to the latent space, the regularization term should be effective to encourage the generator to generate samples in all modes. Thus, it should be able to alleviate the mode collapse problem, and the authors of [c] did some experiments to demonstrate the performance in their paper.", "The authors would like to thank the reviewer for his/her invaluable comments. We have taken the reviewers' comments into consideration when revising our paper. Moreover, our responses to the comments raised by the reviewer are as follows:\n\n1. Comment: In Table 2 the difference between inception scores for DCGAN and this approach seems significant to ignore. The authors should explain more possibly.\n\nOur reply: The different performance is in part due to the different network architecture and different training objective from DCGAN. Specifically, \n- We do not use BatchNorm.\n- We do not use LeakyReLU activation in the discriminator.\n- We do not use SoftPlus for the last layer of the discriminator.\n- We use the approximate-WGAN variant as proposed in the paper, while DCGAN uses the vanilla GAN objective function.\n\nIn this regard, a more suitable baseline approach to compare is probably the WGAN result, which has similar architecture and optimization objective. In order to achieve better inception score performance, we probably need more extensive hyper-parameter tuning. We have to acknowledge that the aim of this paper is not to achieve superior performance, but to provide a new perspective on understanding GAN, and provide a new training technique that can be applied on top of different GAN variants to alleviate the mode collapse issue.\n\n2. Comment: There is a typo in Page 2 – For all these varaints, -variants.\n\nOur reply: Corrected accordingly. 
We appreciate that the reviewer points out this mistake.", "Thank you for the updated discussion. I think the new version nicely reflects the current status of existing convergence results for GAN training.", "I think the practical issue of performing KDE in high-dimensional spaces is not discussed enough in the revised version of the paper. Please revise the paper accordingly, clearly pointing the potential shortcomings and citing some of the work you just discussed here.\n\nThis is still my main concern about the practical aspect of the proposed approach and I will therefore not raise my scores unless the authors can provide convincing experiments.", "Thanks for the detailed answer. Can you confirm that all the regularization methods cited above would avoid the mode collapse problem for the specific example you used in your paper?", "The authors would like to thank the reviewer for his/her invaluable comments. We have taken the reviewers' comments into consideration when revising our paper. Moreover, our responses to the comments raised by the reviewer are as follows:\n\n1. Comment: Prior convergence proofs: I think the way the paper is currently written is misleading. The authors quote the paper from Ian Goodfellow: “For GANs, there is no theoretical prediction as to\nwhether simultaneous gradient descent should converge or not.”. However, the f-GAN paper gave a proof of convergence, see Theorem 2 here: https://arxiv.org/pdf/1606.00709.pdf. A recent NIPS paper by (Nagarajan and Kolter, 2017) also study the convergence properties of simultaneous gradient descent. Another problem is of course the assumptions required for the proof that typically don’t hold in practice (see comment below).\n\nOur reply: We would like to thank the reviewer for pointing out the latest NIPS paper by Nagarajan and Kolter. We have included this in our literature review. We have also made revisions in the paper to avoid the misleading arguments (see below).\n\n\\textit{ However, the analysis of the convergence properties on the training\napproaches is challenging, as noted by Ian Goodfellow in (Goodfellow, 2016), ``For GANs, there is\nno theoretical prediction as to whether simultaneous gradient descent should converge or not. Settling\nthis theoretical question, and developing algorithms guaranteed to converge, remain important open\nresearch problems.\". There have been some recent studies on the convergence behaviours of GAN\ntraining (Nowozin et al., 2016; Li et al., 2017; Heusel et al., 2017; Nagarajan \\& Kolter, 2017;\nMescheder et al., 2017). The simultaneous gradient descent method was proved to converge assuming\nthe objective function is convex-concave in the network parameters (Nowozin et al., 2016). The local\nstability property is established in (Heusel et al., 2017; Nagarajan \\& Kolter, 2017). }\n\nIan Goodfellow raised the convergence issue in the tutorial paper, because the study of convergence for the simultaneous gradient descent method was limited at that time. They also gave a counterexample in the tutorial paper, which shows that the simultaneous gradient descent cannot converge for some objective functions with some step size. 
This is one of the motivations of the paper to study the simultaneous gradient descent method.\n\nWe agree with the reviewer that this convergence issue has been studied at least in the following works:\n[a] ``f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization'' by Nowozin and Cseke.\n[b] ``Gradient descent GAN optimization is locally stable" by Nagarajan and Kolter.\n[c] ``GANs trained by a two time-scale update rule converge to a Nash equilibrium" by Heusel et al. (as noted in our introduction part).\n\nThese papers together with our paper study the convergence issue from different perspectives. In particular, these three papers study the convergence behavior of updates over the network parameters. Reference [a] assumes the objective function is convex-concave in the network parameters, while references [b] and [c] study the local stability property. Our paper studies the convergence behavior in the function space, which was the starting point in the first GAN paper by Ian Goodfellow. We incorporate the two conventional training methods into one framework, namely the simultaneous gradient descent update and the discriminator-driven update (the discriminator is fully optimized before the gradient update of the generator). The theoretical convergence proof leverages some well-established results from the primal-dual subgradient methods for convex optimization. Although the actual optimization is over the network parameters, which is non-convex non-concave in general, our formulation provides important insights in improving the training methods, as detailed in the next point.\n\n", "The authors would like to thank the reviewer for his/her invaluable comments. We have taken the reviewers' comments into consideration when revising our paper. Moreover, our responses to the comments raised by the reviewer are as follows:\n\n1. Comment: But this saddle point model reformulation in section 2 is quite standard, with limited theoretical analysis in Theorem 1. As follows, the resulting algorithm 1 is also standard primal-dual method for a saddle point problem. Most important I think, the advantage of considering GAN-type model as a saddle point model is that first-order methods can be designed to solve it.\n\nOur reply: We agree with the reviewer that the Lagrangian formulation in Section 2 is standard. The main contribution of the paper is to provide a new perspective on understanding GAN. In particular, we relate the minimax game to finding the saddle points of the Lagrangian function for a convex optimization problem, where the generated distribution plays the role of the dual variable.\n\nThis inherent connection was not established in previous works and it shows that the standard training of GANs actually falls in the framework of primal-dual subgradient methods for convex optimization. As the reviewer mentions, one important result is to show that the training actually converges to the optimal point if a proper step size is chosen, and both the discriminator output and the generated distribution are correctly updated according to the primal-dual rule. Besides this, it provides the following important insights:\n\n(a). It inspires an improved training technique to avoid mode collapse. In practical training, the generated distribution is not updated according to the desired direction. As Claim 1 points out, when the generated probability at some data point $\bx$ is zero and the discriminator output $D(\bx)$ is locally constant, mode collapse occurs. 
Using the traditional training, we can hardly avoid such mode collapse, even under the recently proposed WGAN. The Lagrangian formulation tells that the optimal update direction of $p_g(\\cdot)$ is given by Eq. (11). When mode collapse occurs, Eq. (13) gives a large gradient to push the generator to produce some samples at $\\bx$. The synthetic example shows that it indeed increases the data sample diversity and effectively avoids mode collapse.\n\n(b). It naturally incorporates different variants of GANs into the convex optimization framework including the\nfamily of f-GAN (Nowozin et al., 2016) and an approximate variant of WGAN. For all these GAN variants, an improved training objective can be easily derived.\n\n(c). The simultaneous primal-dual update is known to have a very slow convergence rate. There have been proposed methods to accelerate the convergence rates in the following papers:\n\nAngelia Nedic and Asuman Ozdaglar, ``Subgradient methods for saddle-point problems\".\n\nYunmei Chen, Guanghui Lan and Yuyuan Ouyang, ``Optimal primal-dual methods for a class of saddle point problems\".\n\nBy building the relation of GAN training and the primal-dual approach for convex optimizations, these improved methods can be directly applied. In future research, we will evaluate the acceleration of the training process using these approaches.\n\n(d). For some GAN variants, where the objective function is not strictly convex, the convergence may be slow or the converging point is not unique. By casting the minimax game in the Lagrangian framework, we could easily tweak the objective function such that the objective function is strictly convex and the optimal solution is not affected, then the convergence performance can be improved. Examples can be found in ``Nonlinear Programming\" by D. Bertsekas.\n\n\n", "The authors would like to thank the reviewers for their insightful comments. We have taken the reviewers' comments into consideration when revising our paper. In particular, we have made the following major revisions:\n\n1. We have corrected the typos and grammar mistakes as pointed out by the reviewers.\n\n2. We have incorporated more references as pointed out by the reviewer in the literature survey. Moreover, we made some revisions in the discussions to better clarify our ideas.\n\n3. We have run more experiments on the augmented MNIST dataset with 1000 classes to test the extensibility of our method for large-scale cases. Due to the page limits by the submission guideline, we elaborate the results in the appendix section.", "4. Comment: Curse of dimensionality: Nonparametric density estimators such as the KDE technique used in this paper suffer from the well-known curse of dimensionality. For the synthetic data, the empirical evidence seem to indicate that the technique proposed by the authors does work but I’m not sure the empirical evidence provided for the MNIST and CIFAR-10 datasets is sufficient to judge whether or not the method does help with mode collapse. The inception score fails to capture this property. Could the authors explore other quantitative measure? Have you considered trying your approach on the augmented version of the MNIST dataset used in Metz et al. (2016) and Che et al. (2016)?\n\nOur reply: We agree with the reviewer that the curse of dimension is a known problem for nonparametric density estimation. 
We would also like to acknowledge some recent works that show promising results in generative learning using nonparametric density estimation, including\n\n[a] \"``Generative Moment Matching Networks\" https://arxiv.org/abs/1502.02761,\n[b] ``\"MMD GAN: Towards Deeper Understanding of Moment Matching Network\", https://arxiv.org/abs/1705.08584\n\n[c] ``\"Non-parametric estimation of Jensen-Shannon Divergence in Generative Adversarial Network training\", https://arxiv.org/pdf/1705.09199.pdf.\n\nIn practice, in order to work with high dimensional data such as large images, it is useful to project data into lower dimension with pretrained neural network. Popular choice of projections include the bottleneck layer of a classifier neural network trained on ImageNet and auto-encoder (used in Che et al. 2016). This approach has also been used in references [b] and [c], and show excellent performance for large datasets.\n\n\nWe can incorporate the projection neural network and apply the KDE on the projected space $f_{\\phi} (\\mathcal{X})$ but not directly on the original data $\\mathcal{X}$. The estimated probabilies become\n\\begin{align}\np_g (\\bx_i) = \\frac{1}{m_2} \\sum_{j=1}^{m_2} k_{\\sigma} (f_{\\phi} (G(\\bz_j))- f_{\\phi} (\\bx_i)).\n\\end{align}\nWe will leave it as our future work.\n\nWe have experimented the proposed method on the augmented MNIST dataset with 1000 classes as proposed in unrolled GAN (Metz et al. 2016). The results are shown in the table below and details are elaborated in the appendix section. \n\nMethod & Modes generated & Inception Score \n Metz et al. (2016) 5 steps & 732 & NA \n Metz et al. (2016) 10 steps & 817 & NA \n Che et al. (2016) & 969 & NA \n Baseline & 526 & 87.15 \n Proposed & 827 & 155.6 \n\nWe use the same architecture as unrolled GAN in the experiment. We find that the proposed approach generates much larger number of modes than unrolled GAN with 5 steps, and similar number of modes compared to unroll GAN with 10 steps. However, the proposed approach is much more computationally efficient than unrolled GAN. We also show that the proposed approach generates more modes and achieves higher inception score than the baseline, which does not use the regularization term in the modified training objective function of Eq. (13). For Che et al. (2016), it only misses 31.6 modes, but it uses a much more complex neural network architecture, which is known to contribute to mode collapse avoidance, as noted in (Metz et al. 2016). \n\n5. Comment: Typo: Should say ``The data distribution is \"$p_d(x) = 1\\{x=1\\}$”.\n\nOur reply: Corrected accordingly. We appreciate that the reviewer points this out.", "\n3. Comment: Relation to existing regularization techniques: Combining Equations 11 and 13, the second terms acts as a regularizer that minimizes $[\\alpha f_1(D(x_i))]^2$. This looks rather similar to some of the recent regularization techniques such as\n\n[a] Improved Training of Wasserstein GANs, https://arxiv.org/pdf/1704.00028.pdf\n[b] Stabilizing Training of Generative Adversarial Networks through Regularization, https://arxiv.org/pdf/1705.09367.pdf\n\nCan the authors comment on this? I think this would also shed some light as to why this approach alleviates the problem of mode collapse.\n\nOur reply: We would like to thank the reviewer for this inisightful question. We would like to point out the differences between our regularization term and other works. 
We also incorporate the discussions in the revised paper.\n\nThe regularization terms proposed in different papers may have different purposes:\n(1). In reference (a), the gradient penalty regularization is calculated as \n$$(\\nabla_{\\hat{x}} D(\\hat{x}) -1)^2,$$ \nwhere $\\hat{x}$ is some point lying in between the data samples and the generated samples. It is used to force the gradient norm to be close to 1, in order to enforce the Lipschitz constraint of the discriminator function. The recent paper ``Spectral Normalization for Generative Adversarial Networks\" also aims to regularize the gradient norm.\n\n(2). The regularization term in reference (b) is calculated as\n$$E_{p_{g}}[f^{c''} \\circ \\psi ||\\nabla \\psi||^2],$$\nwhere $f^c(\\cdot)$ is the Fenchel dual of the f-divergence and $\\psi$ is the discriminator function. It is used to smooth the probability distribution such that the generated distribution is not disjoint from the data distribution. In particular, the regularization term was shown to have the same effect as adding a noise perturbation to the discriminator input, as suggested by Martin Arjovsky and Léon Bottou in ``Towards principled methods for training generative adversarial networks\".\n\n\n(3). The regularization term in ``Mode Regularized Generative Adversarial Networks\" by Che et al. is calculated as\n$$||\\bx_i - G(E(x_i))||^2,$$ \nwhere $E(\\cdot)$ is the autoencoder for the data samples. The regularization term is used to penalize the missing modes by minimizing the Euclidean distance between the data samples and the generated samples.\n\n\nThe purpose of the regularization term in our paper is more aligned with the third one, with the aim of avoiding missing modes. However, as discussed in the introduction of the paper, whether the regularization in (Che et al., 2016) is optimal or whether it can converge lacks a theoretical guarantee. In this paper, we leverage the insights of the primal-dual subgradient methods to force the generated distributions to update according to the optimal direction:\n$$||p'_g(\\bx_i) - \\frac{1}{m_2} \\sum_{j=1}^{m_2} k_{\\sigma} (G(\\bz_j)-\\bx_i) ||.$$ (Eq.(11) in the paper)\n\nNote that the regularization term in our paper is actually not $[\\alpha f_1(D(x_i))]^2$. The reviewer has this confusion probably because Eq. (11) is directly substituted in Eq. (13). We have modified the notation in the draft to avoid such confusion. The proposed regularization is motivated by the following reasoning.\n\nBy the primal-dual subgradient method, we know that the updated generated distribution $p'_g$ is given by Eq. (11). Then we fix the target probability distribution $p'_g$ and optimize the generator such that its generated distribution, approximated by Eq. (12), is pushed to $p'_g$. As discussed in Section 3.3, when a missing mode occurs at some $x_i$, the generated probability at $x_i$ is close to 0 and $D(x_i)$ is close to 1. Then the term $\\alpha f_1(D(x_i)) = \\alpha \\log ( 2 (1-D(x_i)))$ would be very large in magnitude and the regularization term plays an important role in the loss function. Ideally, it should encourage the generator to generate some samples at $x_i$, because the loss function would be very large otherwise. For every data point $x_i$, we are using the information from all the data in a batch in the regularization term, while only the data point $x_i$ itself is used in (Che et al., 2016). 
The formulation also enables us to derive different kinds of regularizers for different GAN variants.\n", "2. Comment: Convex-concave assumption: In practice the GAN objective is optimized over the parameters of the neural network rather than the generative distribution. This typically yields a non-convex non-concave optimization problem. This should be mentioned in the paper and I would like to see a discussion concerning the gap between the theory and the practical algorithm.\n\nOur reply: We agree with the reviewer that the actual optimization is over the network parameters, which is non-convex non-concave in general. We mentioned this important point in the last paragraph of Section 3. \n\nAlthough the local stability property is studied in Nagarajan (2017) and Heusel (2017), the convergence to the global optimum point is not guaranteed. Even for non-adversarial training, the non-convex case is not well understood. In fact, non-convex optimization is NP-hard in general.\n\nTo bridge the gap between theory and the practical algorithm, most of the works, including the first GAN paper by Ian Goodfellow, the f-GAN paper and the recent paper ``Training GANs with Optimism\", apply the following approach. First, they propose methods that have good theoretical properties in the convex setting. Secondly, the methods are applied in the non-convex setting, and the actual learning performances are evaluated by experiments, which hopefully yield promising results. Our paper follows this approach as well.\n\nIn this paper, we build the connection between GAN training and finding the saddle points of the Lagrangian function for a convex optimization problem. Although the theoretical convergence proof assumes functional space updates, the relationship provides important insights into understanding GANs and designing training methods. It inspires an improved training technique to avoid mode collapse by pushing the updates of the generated probabilities along the optimal direction in function space.\n\nFor example, when mode collapse occurs, the generated probability at some data point $x$ is zero or very small and the discriminator output $D(x)$ is close to 1. Using the traditional training, the loss function at point $x$ contributes little since the derivative at that point is almost zero. We know that the ideal update direction in the function space according to the primal-dual update rule is given by Eq. (11), which gives a large gradient to push the generator to produce some samples at $x$. The synthetic example shows that it indeed increases the data sample diversity and effectively avoids mode collapse, from which the conventional GAN and WGAN may never escape.\n", "2. Comment: But the numerical experiments part seems to be a bit weak, because the MNIST or CIFAR-10 dataset is not large enough to test the extensibility for large-scale cases.\n\nOur reply: We appreciate that the reviewer points this out. We have experimented with the augmented MNIST dataset with 1000 classes as proposed in unrolled GAN (Metz et al. 2016). The results are shown in the table below and the details are elaborated in the appendix section.\n\n Method Modes generated Inception Score \n Metz et al. (2016) 5 steps 732 NA \n Metz et al. (2016) 10 steps 817 NA \n Che et al. (2016) 969 NA \n Baseline 526 87.15 \n Proposed 827 155.6\n\nWe use the same architecture as unrolled GAN in the experiment. 
We find that the proposed approach generates a much larger number of modes than unrolled GAN with 5 steps, and a similar number of modes compared to unrolled GAN with 10 steps. However, the proposed approach is much more computationally efficient than unrolled GAN. We also show that the proposed approach generates more modes and achieves a higher inception score than the baseline, which does not use the regularization term in the modified training objective function of Eq. (13). For Che et al. (2016), it only misses 31.6 modes, but it uses a much more complex neural network architecture, which is known to contribute to mode collapse avoidance, as noted in (Metz et al. 2016). " ]
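A minimal NumPy sketch of the kernel density estimate quoted in the responses above — p_g(x_i) ≈ (1/m_2) Σ_j k_σ(f_φ(G(z_j)) − f_φ(x_i)) — assuming the projection f_φ has already been applied to both real and generated batches. The function names, the Gaussian kernel choice, and the bandwidth are illustrative stand-ins rather than the authors' implementation.

```python
import numpy as np

def gaussian_kernel(diff, sigma):
    # Isotropic Gaussian kernel k_sigma evaluated on difference vectors of shape (..., d).
    d = diff.shape[-1]
    norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2)) / norm

def kde_generated_density(x_feats, gen_feats, sigma=1.0):
    """Estimate p_g at each real point x_i from m2 generated samples.

    x_feats:   (n, d)  projected real samples      f_phi(x_i)
    gen_feats: (m2, d) projected generated samples f_phi(G(z_j))
    """
    # Pairwise differences f_phi(G(z_j)) - f_phi(x_i), shape (n, m2, d).
    diff = gen_feats[None, :, :] - x_feats[:, None, :]
    return gaussian_kernel(diff, sigma).mean(axis=1)  # average over j = 1..m2

# Toy usage with random stand-ins for the projected features.
rng = np.random.default_rng(0)
x_feats = rng.normal(size=(8, 16))     # f_phi applied to a batch of real data
gen_feats = rng.normal(size=(64, 16))  # f_phi applied to generated samples
print(kde_generated_density(x_feats, gen_feats, sigma=1.0))
```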
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJNRFNlRW", "iclr_2018_BJNRFNlRW", "iclr_2018_BJNRFNlRW", "H1yaEeqXG", "r1o4ORO7z", "SJ6GI0_XG", "SJHM1GxbG", "H1X2ld_7G", "rybMrduXG", "r1FDmdumM", "SkmK6TUxz", "SkvyRa3lG", "iclr_2018_BJNRFNlRW", "r1FDmdumM", "ry3SXdO7M", "H1X2ld_7G", "HJlcnP_XG" ]
iclr_2018_HyyP33gAZ
Activation Maximization Generative Adversarial Nets
Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class aware gradient and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN's training. Based on that, we propose Activation Maximization Generative Adversarial Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we demonstrate that, with the Inception ImageNet classifier, Inception Score mainly tracks the diversity of the generator, and there is, however, no reliable evidence that it can reflect the true sample quality. We thus propose a new metric, called AM Score, to provide more accurate estimation on the sample quality. Our proposed model also outperforms the baseline methods in the new metric.
accepted-poster-papers
The authors investigate various class-aware GANs and provide extensive analysis of their ability to address mode collapse and sample quality issues. Based on this analysis, they propose an extension called Activation Maximization-GAN which tries to push each generated sample to a specific class indicated by the Discriminator. As experiments show, this leads to better sample quality & helps with the mode collapse issue. The authors also analyze the inception score to measure sample quality and propose a new metric better suited for this task.
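One plausible reading of the "dynamic labeling" idea summarized above — each generated sample is pushed toward whichever class the discriminator currently finds most likely — can be sketched as a cross-entropy against the argmax class. The function below is a hypothetical illustration; the exact AM-GAN generator loss in the paper may differ in its details.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_label_loss(class_logits):
    """Cross-entropy of each generated sample against the class the
    discriminator currently finds most likely (its argmax over real classes)."""
    probs = softmax(class_logits)              # (batch, K)
    targets = probs.argmax(axis=1)             # dynamically assigned labels
    nll = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
    return nll.mean()

# Toy usage: discriminator class logits for a batch of 4 generated samples, K = 10 classes.
logits = np.random.randn(4, 10)
print(dynamic_label_loss(logits))
```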
train
[ "HJZyvxT1G", "HyUOlCNlf", "SJ5g2WcgM", "rJGlj7pmf", "SkIE9XHbG", "HJ7_9mrWG", "r1F_FQBZG", "rknnXVKxz", "B1ZnkAYkz", "Sy7107h0b", "SJC9zbqCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "\nI thank the authors for the thoughtful responses and updated manuscript. Although the manuscript is improved, I still feel it is unfocused and may be substantially improved, thus my review score remains unchanged.\n\n===============\n\nThe authors describe a new version of a generative adversarial network (GAN) for generating images that is heavily related to class-conditional GAN's. The authors highlight several additional results on evaluation metrics and demonstrate some favorable analyses using their new proposed GAN.\n\nMajor comments:\n1) Unfocused presentation. The paper presents a superfluous and extended background section that needs to be cut down substantially. The authors should aim for a concise presentation of their work in 8 pages. Additionally, the authors present several results (e.g. Section 5.1 on dynamic labeling, Section 6.1 on Inception score) that do not appear to improve the results of the paper, but merely provide commentary. The authors should either defend why these sections are useful or central to the arguments in the paper; otherwise, remove them.\n\n2) Quantitative evaluation highlight small gains. The gains in Table 1 seem to be quite small and additionally there are no error bars so it is hard to assess what is statistically meaningful. Table 2 highlights some error bars but again the gains some quite small. Given that the AM-GAN seems like a small change from an AC-GAN model, I am not convinced there is much gained using this model.\n\n3) MS-SSIM. The authors' discussion of MS-SSIM is fairly confusing. MS-SSIM is a measure of image similarity between a pair of images. However, the authors quote an MS-SSIM for various GAN models in Table 3. What does this number mean? I suspect the authors are calculating some cumululative statistics across many images, but I was not able to find a description, nor understand what these statistics mean.\n\n4) 'Inception score as a diversity measurement.' This argument is not clear to me. Inception scores can be quite high for an individual image indicating that the image 'looks' like a given class in a discriminative model. If a generative model always generates a single, good image of a 'dog', then the classification score would be quite high but the generative model would be very poor because the images are not diverse. Hence, I do not see how the inception score captures this property.\n\nIf the authors can address all of these points in a substantive manner, I would consider raising my rating.\n\n", "+ Pros:\n- The paper properly compares and discusses the connection between AM-GAN and class conditional GANs in the literature (AC-GAN, LabelGAN)\n- The experiments are thorough\n- Relation to activation maximization in neural visualization is also properly mentioned\n- The authors publish code and honestly share that they could not reproduce AC-GAN's results and thus using to its best variant AC-GAN* that they come up with. I find this an important practice worth encouraging!\n- The analysis of Inception score is sound.\n+ Cons:\n- A few presentation/clarity issues as below\n- This paper leaves me wonder why AM-GAN rather than simply characterizing D as a 2K-way classifier (1K real vs 1K fake).\n\n+ Clarity: \nThe paper is generally well-written. 
However, there are a few places that can be improved:\n- In 2.2, the authors mentioned \"In fact, the above formulation is a modified version of the original AC-GAN..\", which puts readers confusion whether they were previously just discussed AC-GAN or AC-GAN* (because the previous paragraph says \"AC-GAN are defined as..\".\n- Fig. 2: it's not clear what the authors trying to say if looking at only figures and caption. I'd suggest describe more in the caption and follow the concept figure in Odena et al. 2016.\n- A few typos here and there e.g. \"[a]n diversity measurement\"\n\n+ Originality: AM-GAN is an incremental work by applying AM to GAN. However, I have no problems with this.\n+ Significance: \n- Authors show that in quantitative measures, AM-GAN is better than existing GANs on CIFAR-10 / TinyImageNet. Although I don't find much a real difference by visually comparing of samples of AM-GAN to AC-GAN*.\n\nOverall, this is a good paper with thorough experiments supporting their findings regarding AM-GAN and Inception score!", "This paper is a thorough investigation of various “class aware” GAN architectures. It purposes a variety of modifications on existing approaches and additionally provides extensive analysis of the commonly used Inception Score evaluation metric.\n\nThe paper starts by introducing and analyzing two previous class aware GANs - a variant of the Improved GAN architecture used for semi-supervised results (named Label GAN in this work) and AC-GAN, which augments the standard discriminator with an auxiliary classifier to classify both real and generated samples as specific classes. \n\nThe paper then discusses the differences between these two approaches and analyzes the loss functions and their corresponding gradients. Label GAN’s loss encourages the generator to assign all probability mass cumulatively across the k-different label classes while the discriminator tries to assign all probability mass to the k+1th output corresponding to a “generated” class. The paper views the generators loss as a form of implicit class target loss.\n\nThis analysis motivates the paper’s proposed extension, called Activation Maximization. It corresponds to a variant of Label GAN where the generator is encouraged to maximize the probability of a specific class for every sample instead of just the cumulative probability assigned to label classes. The proposed approach performs strongly according to inception score on CIFAR-10 and includes additional experiments on Tiny Imagenet to further increase confidence in the results.\n\nA discussion throughout the paper involves dealing with the issue of mode collapse - a problem plaguing standard GAN variants. In particular the paper discusses how variants of class conditioning effect this problem. The paper presents a useful experimental finding - dynamic labeling, where targets are assigned based on whatever the discriminator thinks is the most likely label, helps prevent mode collapse compared to the predefined assignment approach used in AC-GAN / standard class conditioning.\n\nI am unclear how exactly predefined vs dynamic labeling is applied in the case of the Label GAN results in Table 1. The definition of dynamic labeling is specific to the generator as I interpreted it. But Label GAN includes no class specific loss for the generator. I assume it refers to the form of generator - whether it is class conditional or not - even though it would have no explicit loss for the class conditional version. 
It would be nice if the authors could clarify the details of this setup.\n\nThe paper additionally performs a thorough investigation of the inception score and proposes a new metric the AM score. Through analysis of the behavior of the inception score has been lacking so this is an important contribution as well.\n\nAs a reader, I found this paper to be thorough, honest, and thoughtful. It is a strong contribution to the “class aware” GAN literature.\n", "We have conducted additional experiments on combining AM-GAN with the class-splitting technique proposed in [1] which is orthogonal to our work and has shown to improve quality of generated samples. Unfortunately, we found that it fails to further improve Inception Score in our setting. This might be due to the fact that the quality of split-classes is not good enough, which largely depends on the features it learns and the clustering algorithm it uses. It requires further investigations to make split-classes more effective and we would leave it as future work.\n\n[1] Guillermo, L. Grinblat, Lucas, C. Uzal, and Pablo, M. Granitto. Class-splitting generative adversarial\nnetworks. arXiv preprint arXiv:1709.07359, 2017.\n", "We sincerely thank you for your comprehensive comments on our paper.\n\n1. What does dynamic/predefined labeling in Table 1 means? How does it apply to LabelGAN? \n\nModels with dynamic labeling and predefined labeling settings require different network structures (G and D’s capacities). The “dynamic” and “predefined” in Tables 1 and 3 represent two experimental settings which differ in network structures. Under the “dynamic” setting, if a model requires specific target class (AC-GAN, AM-GAN), we apply dynamic labeling; under the “predefined” setting, if a model requires specific target class, we apply predefined labeling; for models that do not need target class (GAN, LabelGAN), neither of them are applied and the two (“dynamic” and “predefined”) only differ in network structures. In this way, we compare various models with almost identical network structures. We have revised the caption of Table 1. It should be clear now.\n", "We sincerely thank you for your constructive feedback. We have revised the paper and fixed the confusing statements. More descriptions about the tables and figures have been added in their captions.\n\n1. Why not characterize the discriminator as a 2K-way classifier (K real vs K fake)?\n\nThis is an interesting idea and we have thought about this originally. However, we did not feel strongly that considering K fake logits would help in our case: \n\na) Introducing specific real class logits in the discriminator makes it possible to assign a specific target class for each generated sample, which provides a clearer guidance to the generator. However, a fake class will not be used as the target for the generated sample. In this sense, how and whether we can benefit from using K specific fake class logits are still unknown.\n\nb) Introducing more fake classes does influence the gradient that the generator receives from the discriminator. When optimizing a generated sample, only the target class logit is encouraged while all the others are otherwise discouraged. Thus, replacing a single fake class with K fake classes changes the discouraged recipient from the overall fake class to the K specific fake classes. It requires further investigations to figure out whether this will help or not. We leave it as our future work.\n", "We sincerely thank you for your constructive advice on our paper. 
We have substantially revised our paper according to your comments.\n\n1. Unfocused Presentation:\n\na) Superfluous Preliminary. We shorten the preliminary section and only keep necessary equations that are referred in later sections. \n\nb) Section 6.1 on Inception Score. We have discarded the inessential part of the discussions in Section 6.1 and only kept the definition of Inception Score.\n\nc) Section 5.1 on Dynamic Labeling. Dynamic labeling brings important improvements to AM-GAN, and is applicable to other models that require target class for each generated sample, such as AC-GAN. It is an alternative to predefined labeling, and affects the models largely according to our experiments. As such, we consider it necessary to keep the discussions about dynamic labeling. We have made the point clearer in the revised version.\n\n2. Quantitative Evaluation:\n\na) No Error Bar in Table 1. We have added error bars in Table 1, where AM-GAN still consistently outperforms variants of AC-GAN and LabelGAN by a large margin in terms of both Inception Score and AM Score.\n\nb) Table 2 Shows Small Gains. As shown in Table 2, AM-GAN achieves 8.91±0.11 Inception Score on CIFAR-10, which significantly outperforms all the baseline methods, including Improved GAN (8.09±0.07), AC-GAN (8.25±0.07), WGAN-GP + AC (8.42±0.10) and SGAN (8.59±0.11). When compared to Splitting GAN (8.87±0.09), an orthogonal work to AM-GAN, which enhances the class label information via class splitting, the improvement seems to be less significant. However, since it is orthogonal to AM-GAN, we can combine them to further improve the results. As it takes some time to conduct additional experiments, we would add such results in the later version.\n\n3. MS-SSIM: \n\nSorry for missing the descriptions on MS-SSIM. We actually borrow the usage of MS-SSIM from AC-GAN (Odena et al., 2016) which measures the MS-SSIM scores between a set of randomly sampled pairs of images within a given class and uses the mean of the MS-SSIM scores, where a high mean MS-SSIM indicates intra-class mode collapse or low sample diversity in the class. In our paper, we report the maximum of the mean MS-SSIM over the 10 classes in CIFAR-10, with which we judge whether there exists obvious intra-class mode collapse. We have added corresponding descriptions and citations to MS-SSIM in the new version (the first paragraph in Section 7 and the caption of Table 2).\n\n4. Inception Score as a Diversity Measurement:\n\nA direct answer is that Inception Score, i.e. exp(H(E_x[C(x)])-E_x[H(C(x))]), has two terms and, more importantly, when the generator collapsed to a single point, the first term H(E_x[C(x)]) (the entropy of “the mean classification distribution of all samples”) and the second term E_x[H(C(x))] (the mean classification entropy score for each sample) in Inception Score would be actually equal. Though the second term can be high, the overall Inception Score, in this case, would always be the minimal value 1.0 (exp^0).\n\nIt is worth noting that the first and second terms are highly correlated. As in Section 6.3, we provide an alternative explanation by understanding Inception Score in the KL divergence formulation, i.e. exp(E_x[KL(C(x),E_x[C(x)])]), which involves a single term and can be interpreted as it requires each sample’s distribution to be highly different from the overall distribution of the generator. In this view, it measures the sample diversity. 
We further demonstrate that Inception Score can capture sample diversity well with synthetic experiments: assuming the generator perfectly generates a subset of the training data, with the subset growing to cover the entire dataset, Inception Score is monotonically increasing. Please also refer to Sections 6.2 and 6.3 in our paper for more details.\n\nThe comments have been very useful for us to improve our paper, and we have updated our paper according to your valuable review comments. Please check it.", "Thanks for your reference to Fréchet Inception Distance (FID). \n\nFID measures the distance between two distributions using their means and variances after a fixed mapping, e.g. Inception Network, which works well in practice as illustrated in [1]. As another evaluation metric for generative models, we would add discussions in the revision.\n\nA concern on FID is that the mean and variance of the distribution are not sufficient to represent the whole distribution. That is, for any given distribution, we can always design another distribution which is totally different from the given distribution but has the same mean and variance. We are not sure whether this would cause a problem in practice.\n\nAlso, FID is actually orthogonal to Inception Score and AM Score. FID directly measures the distance between generated distribution and real-data distribution, while Inception Score mainly measures the sample diversity and AM Score mainly measures the sample quality.\n\nAs for the failure of Inception Score on CelebA illustrated in [1], according to our analysis, Inception Score works as a diversity measurement and we might need a more suitable classifier (maybe a classifier trained on a face dataset) to make it work on CelebA. FID seems to have the benefit of being not sensitive to the choice of the mapping function, though it also remains uncertain whether the Inception Network is always the best choice as the mapping function for variant models.", "[1] proposed the Fréchet Inception Distance (FID) to evaluate GANs which is the Fréchet distance aka Wasserstein-2 distance between the real world and generated samples statistics. The statistics consists of the first two moments such that the sample quality (first moment match) and variation (second moment match) are covered.\n\nAs highlighted here in Section 6.2, for datasets not covering all ImageNet classes e.g. celebA, CIFAR-10 etc, the entropy of E_x~G[C(x)] is going down not up as soon as a trained GAN starts producing correctly samples falling only in some of the ImageNet classes. [1] also showed inconistent behaviour of the Inception Score in their experiments (see Appendix A1). Especially interesting here is experiment 6 where a dataset (celebA) is increasingly mixed with ImageNet samples. The Inception Score shows a contradictory behaviour, while the FID captures this contamination, and other disturbance variants, very well. \n\nThe authors should discuss their proposed AM Score compared to the FID, also under consideration that the FID does not\nneed an accordingly pretrained classifier.\n\n[1] https://arxiv.org/abs/1706.08500", "Yeah, it is indeed a mistake. We will correct it in the revision. Thanks. ^_^", "In Table 2, the citation for SGAN should be Huang et al. instead." ]
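The exchange above quotes the Inception Score in its KL form, exp(E_x[KL(C(x) || E_x[C(x)])]). A small NumPy sketch of that formula (with classifier outputs C(x) supplied as a probability matrix) makes it easy to verify the claim that a fully collapsed generator scores the minimal value 1.0.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (n, K) classifier outputs C(x) for n generated samples.
    Returns exp( E_x[ KL(C(x) || E_x[C(x)]) ] ), the formulation quoted above."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal distribution E_x[C(x)]
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# A collapsed generator (every sample gets the same confident prediction) scores ~1.0,
# while diverse, confident predictions score close to the number of classes.
collapsed = np.tile(np.eye(10)[0], (100, 1))
diverse = np.eye(10)[np.random.randint(0, 10, size=100)]
print(inception_score(collapsed), inception_score(diverse))
```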
[ 5, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyyP33gAZ", "iclr_2018_HyyP33gAZ", "iclr_2018_HyyP33gAZ", "r1F_FQBZG", "SJ5g2WcgM", "HyUOlCNlf", "HJZyvxT1G", "B1ZnkAYkz", "iclr_2018_HyyP33gAZ", "SJC9zbqCZ", "iclr_2018_HyyP33gAZ" ]
iclr_2018_SkVqXOxCb
Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields
Generative adversarial networks (GANs) evolved into one of the most successful unsupervised techniques for generating realistic images. Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model the target distribution. We introduce Coulomb GANs, which pose the GAN learning problem as a potential field, where generated samples are attracted to training set samples but repel each other. The discriminator learns a potential field while the generator decreases the energy by moving its samples along the vector (force) field determined by the gradient of the potential field. Through decreasing the energy, the GAN model learns to generate samples according to the whole target distribution and does not only cover some of its modes. We prove that Coulomb GANs possess only one Nash equilibrium which is optimal in the sense that the model distribution equals the target distribution. We show the efficacy of Coulomb GANs on LSUN bedrooms, CelebA faces, CIFAR-10 and the Google Billion Word text generation.
accepted-poster-papers
The paper provides an interesting take on GAN training based on Coulomb dynamics. The proposed formulation is theoretically well motivated and the authors provide guarantees for convergence. Reviewers agree that the theoretical analysis is interesting but are not completely impressed by the results. The method addresses the mode collapse issue but still lags in sample quality. Nevertheless, reviewers agree that this is a good step towards the understanding of GAN training.
test
[ "BkPeeQ5lf", "r1EJEK6lG", "HkdTXw1bM", "ByCZjzt7G", "BkzCcfFQz", "H1bh5MtXz", "Hyi0SB3lM", "ryPND-5gG", "SkQ_1mQ1f", "S1wB0slyf", "BJI5By0RZ", "rJDaKf0Rb", "Sy5L6uoC-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "author", "public", "public" ]
[ "\nIn this paper, the authors interpret the training of GAN by potential field and inspired from which to provide new training procedure for GAN. They claim that under the condition that global optima are achieved for discriminator and generator in each iteration, the Coulomb GAN converges to the global solution. \n\nI think there are several points need to be addressed. \n\n1, I agree that the \"model collapsing\" is due to converging to a local Nash Equilibrium. However, there are more reasons besides the drawback of the loss function, which is emphasized in the paper. Leave the stochastic gradient descent optimization algorithm apart (since most of the neural networks are trained in this way), the parametrization and the richness of discriminator family play a vital role in the model collapsing issue. In fact, even with KL-divergence in which log operation is involved, if one can select reasonable parametrization, e.g., directly handling in function space, the saddle point optimization is convex-concave, which means under the same assumption made in the paper, there is only one global Nash Equilibrium. On the other hand, the richness of the discriminator also important in the training of GAN. I did not get the point about the drawback of III. If indeed as the paper considered in the ideal case, the discriminator is rich enough, III cannot happen. \n\nThe model collapsing is not just because loss function in training GAN. It is caused by the twist of these three issues listed above. Modifying the loss can avoid partially model collapsing, however, it is not appropriate to claim that the proposed algorithm is 'provable'. The assumption in this paper is too restricted, and the discussion is unfair to the existing variants of GAN, e.g., GMMN or Wasserstein GAN, which under some assumptions, there is also only one global Nash Equilibrium. \n\n2, In the training procedure, the discriminator family is important as we discussed. The paper claims that the reason to introduce the extra discriminator is reducing variance. However, such parametrization will introduce bias too. The bias and variance tradeoff should be explicitly discussed here. Ideally, it should contain all the functions formed with Plummer kernel, but not too large (otherwise, it will increase the sample complexity.). Which function family used in the paper is not clear. \n\n\n3, As the authors already realized, the GMMN is one closely related model. It will be more convincing to add the comparison with GMMN. \n\nIn sum, this paper provides an interesting perspective modeling GAN from the potential field, however, there are several issues need to be addressed. I expect to see the reply of the authors regarding the mentioned issues. ", "The authors draw from electrical field dynamics and propose to formulate the GAN learning problem in a way such that generated samples are attracted to training set samples, but repel each other. Optimizing this formulation using gradient descent can be proven to yield only one optimal global Nash equilibrium, which the authors claim allows Coulomb GANs to overcome the \"mode collapse\" issue. Experimental results are reported on image and language modeling tasks, and show that the model can produce very diverse samples, although some samples can consist of somewhat nonsensical interpolations.\n\nThis is a good, well-written paper. It is technically rigorous and empirically convincing. Overall, it presents an interesting approach to overcome the mode collapse problem with GANs. 
\n\nThe image samples presented -- although of high variability -- are not of very high quality, though, and I somewhat disagree with the claim that \"Coulomb GAN was able to efficiently learn the whole distribution\" (Sec 3.1). At best, it seems to me that the new objective does in fact force the generator to concentrate efforts on learning over the full support of the data distribution, but the lower quality samples and sometimes somewhat bad interpolations seem to suggest to me that it is *not* yet doing so very \"efficiently\". \n\nNonetheless, I think this is an important step forward in improving GANs, and should be accepted for publication.\n\nNote: I did not check all the proofs in the appendix.", "The paper takes an interesting approach to solve the existing problems of GAN training, using Coulomb potential for addressing the learning problem. It is also well written with a clear presentation of the motivation of the problems it is trying to address, the background and proves the optimality of the suggested solution. My understanding and validity of the proof is still an educated guess. I have been through section A.2 , but I'm unfamiliar with the earlier literature on the similar topics so I would not be able to comment on it. \n\nOverall, I think this is a good paper that provides a novel way of looking at and solving problems in GANs. I just had a couple of points in the paper that I would like some clarification on : \n\n* In section 2.2.1 : The notion of the generated a_i not disappearing is something I did not follow. What does it mean for a generated sample to \"not disappear\" ? and this directly extends to the continuity equation in (2). \n\n* In section 1 : in the explanation of the 3rd problem that GANs exhibit i.e. the generator not being able to generalize the distribution of the input samples, I was hoping if you could give a bit more motivation as to why this happens. I don't think this needs to be included in the paper, but would like to have it for a personal clarification. ", "We'd like to thank the reviewer for this in-depth and constructive review, it was a great help in improving our manuscript. The major changes in the new version of the text are:\n\n* Clarified notion of Nash Equilibria: reformulated Theorem 2 in function space (your point 1)\n* Clarified bias/variance issues (your point 2)\n* Added comparison to MMD GAN (your point 3)\n\nIn hindsight, we didn't outline the contribution that Coulomb GANs make as clear as we could have. As a result of this review, we have rewritten large portions of Section 2 and strongly clarified some of our main statements. We were able to reformulate Theorem 2 in a much more precise way. We think that the new version of the text is much improved, and we hope it clears up the items you mentioned.\n\nTo your specific points:\n\n1. Thank you, this comment was very helpful, and lead us to reformulate some of our claims in a clearer way. We agree that loss functions are not the only culprit for unsuccessful GAN learning, and that all practical learning approaches - where generator/discriminator are parametrized models and learned by SGD - introduce a whole lot of convergence and local optimality problems in GANs. But more fundamentally the choice of the loss function might introduce bad local Nash equlibria already in the \"theoretical\" function space. 
This fundamental issue is - to the best of our knowledge - not explored in the current literature, neither in the context of Wasserstein GANs nor in that of GMMN losses, and we are not sure if the absence of local Nash equilibria in function space could be proven for those cases. This issue has fundamental implications for all GAN architectures. Therefore our work aims to be more than \"just another cool GAN\", but hopefully furthers the theoretical understanding of GANs in general.\nWe think that the main contribution of Coulomb GANs is to provide a loss function for GAN learning with the mathematically rigorous guarantee that no such local Nash Equilibria *IN FUNCTION SPACE* exist. We think this is a crucial issue that has not received proper attention yet in order to put scientific GAN research on a solid rigorous ground. We are not aware of any other paper that provides such a strong claim as our Theorem 2. Neither WGAN nor MMD-based approaches have made this claim and we are not sure that a corresponding claim for them would be provable at all.\nWe hope you will appreciate our newly written Section 2.1, where we discuss in more depth and mathematical precision what we mean by \"local Nash equilibrium in function space\", and how it differs from looking at things in probability-distribution space or parameter space. \nWith that said, we fully agree that for all practical purposes the choice of rich discriminators (and the parametrization in general) is highly important for good empirical performance. However, that topic is not the main point we are trying to investigate.\n\n2. You are right, thank you for this heads-up! There are two kinds of approximation here: First, we approximate the potential Phi using a mini-batch specific \\hat{Phi}. The newer version of the paper discusses the properties of this approximation. Concretely, we show in the appendix that the estimate is unbiased, and explicitly mention the drawback of its high variance in the main text (Section 2.4).\nSecondly, as you correctly stated, we learn Phi with a neural network (the discriminator) to reduce the high variance of the mini-batch \\hat{Phi}. With this, we run into the usual bias/variance tradeoff of Machine Learning: trading overfitting against underfitting. And we absolutely agree that finding a good discriminator (that is able to learn the potential field correctly) is vital. Thankfully, in GAN learning we can always sample new generator data in each mini-batch, so overfitting on those is not too much of an issue, but we could still overfit on the real-world training data. This could lead to local Nash equilibria in parameter space. Therefore, we tried to be more explicit in the new version of the text that our analysis focuses on the space of functions, and we explicitly mention that neural network learning is vulnerable to issues such as over/underfitting (again in Section 2.4).\n\n3. Thank you for the suggestion, we have added this comparison: The original GMMN approach is computationally very expensive to run on the typical GAN datasets, and was recently improved upon by the MMD-GAN model [Li et al., NIPS 2017]. Most importantly, MMD GAN extends the GMMN approach to a learnable discriminator, which makes the approach better and feasible for larger datasets (& very similar to Coulomb GAN's discriminator, presumably with the same advantage of reducing variance). In their paper, Li et al. show that MMD GAN outperforms GMMN on all tested datasets. 
We thus added a comparison to MMD-GAN to the current revision of the manuscript.\n", "Thank you for your review, and thanks for the „important step forward in improving GANs“. We appreciate your positive feedback. We agree with your assessment that our objective forces the generator to concentrate efforts on learning over the full support at the cost of somewhat bad interpolations, and toned down the statement about learning \"efficiently\" in the new update of the paper.\nTo see how Coulomb GANs perform when contrasted with similar approaches that aim to learn the full support of the distribution, we added a new comparison with MMD approaches. It turns out that Coulomb GANs are more efficient than MMD GANs.", "We thank the reviewer for their encouraging review, appreciate the positive feedback. For your questions:\n\n* Thanks for pointing out that our explanation was not clear enough. An a_i is associated with a particular random variable z_i of the generator which is mapped by the generator to a_i. If the generator changes, then the same random variable z_i is mapped to another a_i'. That is a_i moved to a_i'. We have explained this more clearly in the current revision.\n\n* We have tried to explain this better in the new version of the text (even if you said it wasn't necessary). Informally speaking, we meant the following: in typical GAN learning (e.g. Goodfellow's original formulation) the discriminator is able to say \"in this region of space A, the probability of a sample being fake is x%\". Which provides the generator with the information of how well it does in said region. However, the discriminator has usually no way of telling the generator \"you should move probability mass over to this region B which is far, far away from A, because there is a lack of generated density there\". Thus, the discriminator cannot tell the generator how to globally move its mass (it just gets local gradient information at the points where it currently generates). In particular, the generator cannot move samples across regions where the real world data has no support. As soon as generator samples appear at the border of such regions, they are penalized and move back to regions with real world support where they come from. Moving again means that samples \"a\" are associated with random variables \"z\", and small changes in the generator lead to small changes of \"a\" (\"z\" is mapped to a slightly different \"a\"). Thus, it is impossible to move samples from one island of real world support to another island of real world support.\n", "Hi! Thanks for your thoughtful comments, we also think Coulomb GANs are an interesting new avenue. For your questions:\n\nad 1:The proof that Eq (16) is a minimum is formally correct: Since the derivative in Eq (18) is != 0 everywhere, the only extreme points we need to check are the boundary. Here, the minimum is at r = 0, as the function increases with r (see Eq 18).\n\nad 2: In order to see the general validity of (21), set epsilon to 0. From (19) follows that the expression is continuously increasing with increasing epsilon. Therefore like with r there is a minimum at the boundary point of \\epsilon=0. ", "This paper represents the first attempt to formulate GANs as a minimization problem rather than a standard mini-max. \n\nThe overall idea is interesting, but I have some concerns regarding the proof of convergence to the global Nash equilibrium (proof of the main Theorem 1):\n1. 
The authors study the function \\nabla^2k(a,b) and claim that its minimum corresponds to Eq. 16. The methodology to check the minimality of the function (Eqs. 18-19) is not conventional. One should compute the Hessian matrix and then see if it is positive (semi-)definite.\n2. The last inequality in Eq. 21 is not valid in general. The authors should specify the conditions of validity for that. Note that the bound of \\nabla^2k(a,b) is obtained by setting \\epsilon to zero. One can find non-zero values of \\epsilon for which the equality does not hold.\n\n\n", "Hi! Interesting to hear that you're facing a similar argument! First off: Note that in Coulomb GANs, the generator objective does not include a log; but that's just a minor note.\n\nIf I get you right, you're saying that the Generator might get stuck. This is indeed true for generators that haven't got enough capacity to move points freely along the field generated by the potential (e.g. a super small generator, this is why we have Assumption A1 in Theorem 2). But as long as the generator can still move it's points, Theorem 1 guarantees that there are no local NE.\n\nBut I'm not sure I understood you correctly: another interpretation of your question is that d_G / d_Theta = 0 for all possible z_i, but that's only possible if G is a constant function. Is that what you meant?\n", "Dear authors,\nI think about the argument of the local NEs and still have a couple questions. The generator objective function is to minimize f = - \\sum_j log(D(G(z_j))). We take derivative of this w.r.t. the network parameters theta_g, by chain rule we have \ndf/d theta = - \\sum_j (1/D(G(z_j))) * (d D(x) / dx |x = G(z_j)) *(dG /d theta).\n\nFor traditional GANs, (d D(x) / dx |x = G(z_j)) = 0, which yields the gradients equal to zero. I think Coulomb GAN makes (d D(x) / dx |x = G(z_j)) non-zero whenever pg is not equal to pd. However, the third term (dG /d theta) could still be zero. We recently came up with a new training algorithm, which exactly faces the same problem. \n\nAccording to my understanding, the proposed GAN considerably reduces the local NEs but not remove all local NEs, because this problem is fundamentally a nonconvex problem. Please let me know if my understanding is correct. \n", "Hi, thank you for your comment. You are right, as we discussed within the paper, MMD is indeed the closest existing work, and the two differences you pointed out are exactly the two fundamental key novelties of our approach, which allow us to make theoretical guarantees that go over what MMD claim. To your specific questions:\n\n1. Calculating the true Phi(...) would involve calculating the pairwise interactions between all pairs of training set data points, which is not feasible. Averaging over many \\hat{Phi} (i.e., the potential function spanned by only a single mini-batch), is also very difficult: Even averaging over many mini-batches is infeasible because of the high variance in approximating the whole field when using only a subset of samples. The key idea is to \"store\" the average field in the discriminator which includes also a smoothing effect and tracks the change of the field. Since the field created by the previous generator is close to the actual field, the discriminator can track the current field. We would expect that MMD approaches would also benefit greatly from adapting this approach. To the best of our knowledge, no MMD paper has ever made this connection and proposed this solution to the problem (from which MMDs do clearly suffer).\n\n2. 
Yes, you are right, our kernel is very different from the Gaussian Kernel that is used in MMD. In fact, as we show in Theorem 1, our kernel can guarantee that we learn the correct distribution (if points are freely movable). Moreover, Hochreiter & Obermayer (2001) show that Gaussian Kernels *do not* guarantee convergence to a unique solution. So our choice of kernel is a very crucial improvement over an MMD approach.\n\n3. This is a tricky point, and we might not have explained this well in the paper: Yes, the original GAN by Goodfellow has a unique Nash Equilibrium when the two distributions match perfectly. This is a true Nash Equlibrium according to the original definition: neither D or G can improve on their own strategy. However, due to the way we train GANs (using gradient based methods), there are also many local Nash Equilibria: situations where neither D or G can improve their own strategy within their local environments. They have to follow their gradients and cannot \"jump\" out of this local environment. We describe an example of this in appendix A1 of the paper (Mode collapse is a special case of such a local optimum). These are not Nash Equilibria in a global sense as assumed in the original definition, as better strategies for both players exist; but those strategies are unreachable with gradient based methods.\n\nTo put it another way: What we ultimately want is to match two distributions. The optimization problem created by Goodfellow's GAN is littered with many local Nash Equilibria (where the distributions don't match but gradients vanish) where optimization will get stuck, even though we know that there is a global NE (where the distributions match) somewhere else. The Coulomb GAN's error surface does not contain such local Nash Equilibria. You cannot construct a situation where a Coulomb GAN's gradients vanish unless you are at the unique global optimum.\n", "Thanks a lot! I think it is a very good paper. I hope it will be accepted. ", "This paper is interesting. It relates the GAN learning game to potential field. It seems to me it is fundamentally the same as Moment Matching Networks, in the sense that it aims to minimize the distance between two distributions. In particular, I have the following questions:\n\n1. The discriminator aims to learn the potential function Phi. The optimal D(.) = Phi(.). Since we already know the expression of Phi, why don't we just use Phi for the generator optimization in eq (21)?\n\n2. If the answer to question (1) is affirmative. Then the formulation is exactly the same as \"Generative Moment Matching Networks\", where the generator is updated to minimize some distance between two distributions. The only difference is that in GMM networks, Phi(.) is in the form of kernel, while in this paper Phi(.) is in the form of potential function given by eq. (1). \n\n3. Theorem 2 and the title shows that the proposed GAN converges to the optimal Nash equilibrium. This holds in the condition that the objective functions can reach global minimum. For the original GAN proposed by Ian Goodfellow, if the optimization is with respective to D(.) and pg(.) in the function space, the Nash equilibrium is also unique! \n\nGenerally, one of the main challenges of GAN is that it is not convex-concave in the network parameters. If it is formulated as optimization in terms of the function space D(.) and pg(.), the local optimum is exactly the global one with D(.)=1/2 and pg=pd. 
The global Nash equilibrium of Coulomb GAN is important, but is not a major contribution, considering that all other GANs have a unique Nash equilibrium in D(.) and pg(.). The major contribution of this paper is to provide a new perspective of formulating GAN, which will enable us to borrow ideas from other fields to design systematic training methods or innovative GAN structures." ]
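The Coulomb GAN discussion above revolves around a potential field in which real samples attract and generated samples repel, built from a Plummer kernel. The sketch below uses one common form of that kernel and a mini-batch estimate of the potential; the exact exponent, the smoothing constant ε, and the dimension handling in the paper may differ, so these choices are assumptions for illustration.

```python
import numpy as np

def plummer_kernel(a, b, eps=1.0, d=3):
    """One common form of the Plummer kernel; the exact exponent used in the
    Coulomb GAN paper may differ, so treat this as an illustrative choice."""
    sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return 1.0 / (sq + eps ** 2) ** ((d - 2) / 2.0)

def potential(query, real, fake, eps=1.0, d=3):
    """Mini-batch estimate of the potential: real samples attract (positive
    charge), generated samples repel (negative charge)."""
    attract = plummer_kernel(query, real, eps, d).mean(axis=1)
    repel = plummer_kernel(query, fake, eps, d).mean(axis=1)
    return attract - repel

rng = np.random.default_rng(0)
real = rng.normal(size=(128, 3))
fake = rng.normal(loc=2.0, size=(128, 3))
# Generator samples would move along the gradient of this potential
# (toward real data, away from other generated samples).
print(potential(fake[:5], real, fake))
```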
[ 5, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkVqXOxCb", "iclr_2018_SkVqXOxCb", "iclr_2018_SkVqXOxCb", "BkPeeQ5lf", "r1EJEK6lG", "HkdTXw1bM", "ryPND-5gG", "iclr_2018_SkVqXOxCb", "S1wB0slyf", "rJDaKf0Rb", "Sy5L6uoC-", "BJI5By0RZ", "iclr_2018_SkVqXOxCb" ]
iclr_2018_SJx9GQb0-
Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect
Despite being impactful on a variety of problems and applications, the generative adversarial nets (GANs) are remarkably difficult to train. This issue is formally analyzed by \cite{arjovsky2017towards}, who also propose an alternative direction to avoid the caveats in the minmax two-player training of GANs. The corresponding algorithm, namely, Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminators. In this paper, we propose a novel approach for enforcing the Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning approaches. As a result, it gives rise to not only better photo-realistic samples than the previous methods but also state-of-the-art semi-supervised learning results. In particular, to the best of our knowledge, our approach gives rise to an inception score of more than 5.0 with only 1,000 CIFAR10 images and is the first that exceeds the accuracy of 90\% on the CIFAR10 dataset using only 4,000 labeled images.
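The abstract above enforces Lipschitz continuity through a consistency term evaluated near the real data. A schematic, finite-difference illustration of that idea is sketched below: the critic's output change between two perturbed copies of a real sample is compared against the distance between them, and ratios above 1 are penalized. The perturbation scheme, the margin m_prime, and the toy critics are illustrative assumptions; the paper's actual term perturbs the real samples inside the discriminator and additionally regularizes its second-to-last layer, as the rebuttal later in this record notes.

```python
import numpy as np

def lipschitz_ratio(f, x1, x2):
    """Finite-difference estimate |f(x1) - f(x2)| / ||x1 - x2|| used as a
    1-Lipschitz check; a consistency-style penalty hinges this ratio at 1."""
    num = np.abs(f(x1) - f(x2))
    den = np.linalg.norm(x1 - x2, axis=-1) + 1e-12
    return num / den

def consistency_penalty(f, x_real, noise_std=0.1, m_prime=0.0, rng=None):
    """Penalize the ratio computed between two random perturbations of each
    real sample; m_prime plays the role of a margin like M' in the paper."""
    rng = rng or np.random.default_rng(0)
    x1 = x_real + noise_std * rng.normal(size=x_real.shape)
    x2 = x_real + noise_std * rng.normal(size=x_real.shape)
    ratio = lipschitz_ratio(f, x1, x2)
    return np.maximum(0.0, ratio - 1.0 - m_prime).mean()

# Toy critics: a steep linear function violates the 1-Lipschitz constraint near the
# data (positive penalty), while a gentle one satisfies it (zero penalty).
x = np.random.default_rng(1).normal(size=(256, 8))
print(consistency_penalty(lambda z: 3.0 * z.sum(axis=-1), x))
print(consistency_penalty(lambda z: 0.5 * z[:, 0], x))
```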
accepted-poster-papers
The paper proposes various improvements to Wasserstein distance based GAN training. Reviewers agree that the method produces good quality samples and are impressed by the state of the art results in several semi-supervised learning benchmarks. The paper is well written and the authors have further improved the empirical analysis in the paper based on reviewer comments.
train
[ "SkNSwnmrG", "r17cU27HG", "rJJbI3QHM", "BJNCqPqxM", "B11HQF84M", "HkbRNKwef", "ryT2f8KgM", "SywG98VGM", "r1QGQhVfG", "ryr5SEEfG", "Bkee-E4zG", "SksBJ-fzz", "SkTmQbkbz", "BJhxQVieM", "HkLJt_QxG", "BJopYw5yM", "BynjqwxJf", "SkpgYPxyz", "Hk9rHlS0-", "BJhcHerRb", "BkWTPbHCW", "rJm5I9kJM", "r1mlEqyyz", "BJzbrHC0-", "Syb85wYR-", "r1s7vBrCb", "B1cwHGBA-", "HkMNKONR-", "ry2iCPQRb", "HJ-LeemC-" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "public", "public", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public", "public", "public", "public" ]
[ "AnonReviewer2: \"But still, without the analysis of the temporal ensembling trick [Samuli & Timo, 2017]\" \n\nWe have actually reported the ablation study about this temporal ensemebling technique in the rebuttal. Please read our answer to Q4 in the rebuttal. \n\n=Q4=\n\"... which part of the model works\"\n\nPlease see either Appendix C of the revised paper or the following for our answer to this question.\n\nWe have done some ablation studies about our semi-supervised learning approach on CIFAR-10. \nMethod, Error\nw/o CT, 15.0\nw/o GAN (note 1), 12.0\nw batch norm (note 2), --\nw/o D_, 10.7\nOURS, 10.0\n\nNote 1: This almost reduces to TE (Laine & Aila, 2016). All the settings here are the same as TE except that we use the extra regularization ($D\\_(.,.)$ in CT) over the second-to-last layer.\nNote 2: We use the weight normalization as in (Salimans et al., 2016), which becomes a core constituent of our approach. The batch normalization would actually invalidate the feature matching in (Salimans et al., 2016).\n\nWe can see that both GAN and the temporal ensembling effectively contribute to our final results. The results without our consistency regularization (w/o CT) drop more than those without GAN. We are running the experiments without any data augmentation and will include the corresponding results in the paper.", "Following (Laine & Aila, 2016, Miyato et al., 2017, Tarvainen & Valpola, 2017), we do not apply any augmentation to MNIST and yet augment the CIFAR10 images in the following way. We flip the images horizontally and randomly translate the images within [-2,2] pixels horizontally. \n\nSamuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.\nTakeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.\nAntti Tarvainen and Harri Valpola. Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780, 2017.", "FYI, we have finally finished the experiments on LSUN-Bedroom. The results are comparable to those reported in (Gulrajani et al. 2017) except that our generated images are more diverse in terms of the color theme. \n\n1. https://goo.gl/MvK2x8\n2. https://goo.gl/Cidqgu\n3. https://goo.gl/f6WMeJ\n4. https://goo.gl/N3Jc6M\n5. https://goo.gl/XCpmaK", "Updates: thanks for the authors' hard rebuttal work, which addressed some of my problems/concerns. But still, without the analysis of the temporal ensembling trick [Samuli & Timo, 2017] and data augmentation, it is difficult to figure out the real effectiveness of the proposed GAN. I would insist my previous argument and score. \n\nOriginal review:\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\nThis paper presented an improved approach for training WGANs, by applying some Lipschitz constraint close to the real manifold in the pixel level. The framework can also be integrated to boost the SSL performances. In experiments, the generated data showed very good qualities, measured by inception score. Meanwhile, the SSL-GANs results were impressive on MNIST and CIFAR-10, demonstrating its effectiveness. \n\nHowever, the paper has the following weakness: \n\nMissing citations: the most related work of this one is the DRAGAN work. However, it did not cite it. 
I think the author should cite it, make a clear justification for the comparison and emphasize the main contribution of the method. Also, it suggested that the paper should discuss its relation to other important work, [Arjovsky & Bottou 2017], [Wu et al. 2016].\n\nExperiments: as for the experimental part, it is not solid. Firstly, although the SSL results are very good, it is guaranteed the proposed GAN is good [Dai & Almahairi, et al. 2017]. Secondly, the paper missed several details, such as settings, model configuration, hyper-parameters, making it is difficult to justify which part of the model works. Since the paper using the temporal ensembling trick [Samuli & Timo, 2017], most of the gain might be from there. Data augmentation might also help to improve. Finally, except CIFAR-10, it is better to evaluate it on more datasets. \n\nGiven the above reason, I think this paper is not ready to be published in ICLR. The author can submit it to the workshop and prepare for next conference. ", "Thank you for the rebuttal! I'm glad you took the comments into account and updated the manuscript.\n\nA few big picture comments and conclusions, and why I decided to keep my score based on rebuttal, revision and newer comments:\n- I still think the justification of *why* the regularizer should give an increase in performance is not on the level of a venue like this, and most of the arguments on the theoretical motivation are not strong. I don't find the rebuttal sufficient in this context. If keeping the gradient penalty makes it safe to only check the data manifold, it's still not clear why your method should give an improvement over it (for example, what's a solid theoretical argument for saying enforcing 1-lip in the data manifold more important than elsewhere?).\n- The authors did a good job at showing improvement in their models on the CIFAR dataset. However, this is not sufficient evidence to prove scalability of the regularizer, especially considering that adding hyperparameters and tuning well in a small dataset like CIFAR can give drastically different results. Furthermore, the results in LSUN and Imagenet are very unconvincing.\n- I appreciate Appendices D and E. These are very important sanity checks and I'm glad you added them.\n\nI think the paper is in a better state now, but taking all of these things into account, the given score is accurate in my opinion.", "Summary:\n\nThe paper proposes a new regularizer for wgans, to be combined with the traditional gradient penalty. The theoretical motivation is bleak, and the analysis contains some important mistakes. The results are very good, as noticed by the comments, the fact that the method is also less susceptible to overfitting is also an important result, though this might be purely due to dropout. One of the main problems is that the largest dataset used is CIFAR, which is small. Experiments on something like bedrooms or imagenet would make the paper much stronger. \n\nIf the authors fix the theoretical analysis and add evidence in a larger dataset I will raise the score.\n\nDetailed comments:\n\n- The motivation of 1.2 and the sentence \"Arguably, it is fairly safe to limit our scope to the manifold that supports the real data distribution P_r and its surrounding regions\" are incredibly wrong. First of all, it should be noted that the duality uses 1-Lip in the entire space between Pr and Pg, not in Pr alone. 
If the manifolds are not extremely close (such as in the beginning of training), then the discriminator can be almost exactly 1 in the real data, and 0 on the fake. Thus the discriminator would be almost exactly constant (0-Lip) near the real manifold, but will fail to be 1-lip in the decision boundary, this is where interpolations fix this issue. See Figure 2 of the wgan paper for example, in this simple example an almost perfect discriminator would have almost 0 penalty.\n\n- In the 'Potential caveats' section, the implication that 1-Lip may not be enforced in non-examined samples is checkable by an easy experiment, which is to look for samples that have gradients of the critic wrt the input with norm > 1. I performed the exp in figure 8 and saw that by taking a slightly higher lambda, one reaches gradients that are as close to 1 as with ct-gan. Since ct-gan uses an extra regularizer, I think the authors need some stronger evidence to support the claim that ct-gan better battles this 'potential caveat'.\n\n- It's important to realize that the CT regularizer with M' = 1 (1-Lip constraint) will only be positive for an almost 1-Lip function if x and x' are sampled when x - x' has a very similar direction than the gradient at x. This is very hard in high dimensional spaces, and when I implemented a CT regularizer indeed the ration of eq (4) was quite less than the norm of the gradient. It would be useful to plot the value of the CT regularizer (the eq 4 version) as the training iterations progresses. Thus the CT regularizer works as an overall Lipschitz penalty, as opposed to penalizing having more than 1 for the Lipschitz constant. This difference is non-trivial and should be discussed.\n\n- Line 11 of the algorithm is missing L^(i) inside the sum.\n\n- One shouldn't use MNIST for anything else than deliberately testing an overfitting problem. Figure 4 is thus relevant, but the semi-supervised results of MNIST or the sample quality experiments give hardly any evidence to support the method.\n\n- The overfitting result is very important, but one should disambiguate this from being due to dropout. Comparing with wgangp + dropout is thus important in this experiment.\n\n- The authors should provide experiments in at least one larger dataset like bedrooms or imagenet (not faces, which is known to be very easy). This would strengthen the paper quite a bit.", "This paper continues a trend of incremental improvements to Wasserstein GANs (WGAN), where the latter were proposed in order to alleviate the difficulties encountered in training GANs. Originally, Arjovsky et al. [1] argued that the Wasserstein distance was superior to many others typically used for GANs. An important feature of WGANs is the requirement for the discriminator to be 1-Lipschitz, which [1] achieved simply by clipping the network weights. Recently, Gulrajani et al. [2] proposed a gradient penalty \"encouraging\" the discriminator to be 1-Lipschitz. However, their approach estimated continuity on points between the generated and the real samples, and thus could fail to guarantee Lipschitz-ness at the early training stages. The paper under review overcomes this drawback by estimating the continuity on perturbations of the real samples. Together with various technical improvements, this leads to state-of-the-art practical performance both in terms of generated images and in semi-supervised learning. 
\n\nIn terms of novelty, the paper provides one core conceptual idea followed by several tweaks aimed at improving the practical performance of GANs. The key conceptual idea is to perturb each data point twice and use a Lipschitz constant to bound the difference in the discriminator’s response on the perturbed points. The proposed method is used in eq. (6) together with the gradient penalty from [2]. The authors found that directly perturbing the data with Gaussian noise led to inferior results and therefore propose to perturb the hidden layers using dropout. For supervised learning they demonstrate less overfitting for both MNIST and CIFAR 10. They also extend their framework to the semi-supervised setting of Salismans et al 2016 and report improved image generation. \n\nThe authors do an excellent comparative job in presenting their experiments. They compare numerous techniques (e.g., Gaussian noise, dropout) and demonstrates the applicability of the approach for a wide range of tasks. They use several criteria to evaluate their performance (images, inception score, semi-supervised learning, overfitting, weight histogram) and compare against a wide range of competing papers. \n\nWhere the paper could perhaps be slightly improved is writing clarity. In particular, the discussion of M and M' is vital to the point of the paper, but could be written in a more transparent manner. The same goes for the semi-supervised experiment details and the CIFAR-10 augmentation process. Finally, the title seems uninformative. Almost all progress is incremental, and the authors modestly give credit to both [1] and [2], but the title is neither memorable nor useful in expressing the novel idea. \n[1] Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein gan.\n\n[2] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. \n\n", "We thank the reviewer for the insightful comments and suggestions! The paper has been revised accordingly. Next, we answer the questions in detail.\n\n== Q1: The motivation ==\nWe acknowledge that the duality uses 1-Lipschitz continuity in the entire space between Pr and Pg, and it is impossible to visit everywhere of the space in the experiments. We instead focus on the region around the real data manifold to complement the region checked by GP-WGAN --- the gradient penalty term is kept in our overall approach. We have clarified this point by the following in the revised paper.\n\nArguably, it is fairly safe to limit our scope to the manifold that supports the real data distribution $\\mathbb{P}_r$ and its surrounding regions mainly for two reasons. First, we keep the gradient penalty term and improve it by the proposed consistency term in our overall approach. While the former enforces the continuity over the points sampled between the real and generated points, the latter complement the former by focusing on the region around the real data manifold instead. Second, the distribution of the generative model $\\mathbb{P}_G$ is virtually desired to be as close as possible to $\\mathbb{P}_r$.\n\n\n== Q2: That 1-Lip may not be enforced in non-examined samples is checkable ==\nThe non-examined samples can refer to all the possible samples in the continuous space which cannot be traversed in a discrete manner. Figure 8 plots the norm of the gradients (of the critic with respect to the input) over the real data points only. 
In other words, Figure 8 is only part of the consequence, and certainly not the cause, of the discriminators trained by GP-WGAN and our CT-GAN, respectively. It is not surprising that the norms by CT-GAN are closer to 1 than by GP-WGAN because we explicitly enforce the continuity around the real data. \n\nWe have run more experiments with larger \\lambda values in GP-GAN, and found the gradient norms can indeed reach those of CT-GAN when the \\lambda is four times larger than the original one used in the authors’ code. However, the inception score on CIFAR-10 drops a little, and the overfitting remains. \n\nStronger evidence? In addition to the gradient norm, we have also examined the 1-Lipschitz continuity of the critic using the basic definition. For any two inputs x and x', the difference of the critic's outputs should be no more than M*|x-x'|. This notion is captured by our CT term defined in eq. (4). We plot the CT versus the training iterations as Figure 9 in the revised paper. In particular, for every 100 iterations, we randomly pick up 64 real examples and split them into two subsets of the same size. We compute d(D(x1)-D(x2))/d(x1-x2) for all the (x1,x2) pairs, where x1 is from the first subset and x2 is from the second. The maximum of d(D(x1)-D(x2))/d(x1-x2) is plotted in Figure 9. We can see that the CT-GAN curve converges under a certain value much faster than GP-WGAN.\n\n== Q3: Plot the value of the CT regularizer ==\nPlease see Figures 9 and 10 in the revised paper for the plots. Note that M’ has absorbed the term d(x’,x’’) in the final consistency term (eq. (5)), so we have to tune its value as opposed to fixing it to 1. Also, because of this fact, we agree with the comment that “Thus the CT regularizer works as an overall Lipschitz penalty, as opposed to penalizing having more than 1 for the Lipschitz constant.” We will clarify this part in the final paper, if it is accepted. \n\n== Q4: Line 11 ==\nIt is correct and is another way of denoting the gradient.\n\n== Q5: MNIST ==\nWe understand your concern with the use of MNIST and appreciate that you agree the overfitting experiments (Figure 4) are relevant. The other results (e.g., the generated samples and the test error in semi-supervised learning) can give the readers a concrete understanding about our model, but we agree one should not use MNIST to compare different algorithms.\n\n\n== Q6: GP-WGAN + Dropout ==\nPlease see Appendix E for the experimental results of GP-WGAN+Dropout on CIFAR-10 using 1000 training images. The corresponding inception score is better than GP-WGAN and yet still significantly lower than ours (2.98+-0.11 vs. 4.29+-0.12 vs. 5.13+-0.12). Figure 12, which is about the convergence curves of the discriminator cost over both training and testing sets, shows that dropout is indeed able to reduce the overfitting, but it is not as effective as ours.\n\n\n== Q7: Experiments in larger datasets ==\nIn Appendix F of the revised paper, we present results on the ImageNet and LSUN bedroom datasets following the experiment setup of GP-WGAN. After 200,000 generator iterations on ImageNet, the inception score of CT-GAN is 10.27+-0.15, whereas GP-WGAN's is 9.85+-0.17. Since there is only one class in LSUN bedroom, the inception score is not a proper evaluation metric for the experiments on this dataset. Visually, there is no clear difference between the generated samples of GP-WGAN and CT-GAN up to the 124,000th generator iteration.\n", "Thank you for checking out our code! 
If you are interested, please also test M’=0.2 for CIFAR-10 and you should be able to see a slightly higher inception score than M’=0. We fix M’=0 for all the experiments in the paper for consistency, but as we wrote in the paper, the best results are obtained between M’=0 and M’=0.2. \n\nWe noted that the assumption of d(x_1,x_2) being a constant can be relaxed. Our derivations still hold as long as d(x’,x’’) is bounded by a constant, and we can absorb the constant to M’. \n\nWe have actually reported two sets of experiments for CIFAR-10 in the paper. The first set is done using 1000 images and the second uses the whole CIFAR-10 dataset to train a ResNet. These setups are the same as in [1]. Additionally, we are running experiments on ImageNet and LSUN; we will update the response once the experiments are done. \n\nAbout the overfitting, please see Appendix E of the revised paper for the experimental results of GP-WGAN+Dropout on CIFAR-10 using 1000 training images. The corresponding inception score is better than GP-WGAN and yet still significantly lower than ours (2.98+-0.11 vs. 4.29+-0.12 vs. 5.13+-0.12). Figure 12, which is about the convergence curves of the discriminator cost over both training and testing sets, shows that dropout is indeed able to reduce the overfitting, but it is not as effective as ours.\n\nIn Appendix F of the revised paper, we further present experimental results on the large-scale ImageNet and LSUN bedroom datasets. The experiment setup (e.g., network architecture, learning rates, etc.) is exactly the same as in the GP-WGAN work. After 200,000 generator iterations on ImageNet, the inception score of the proposed CT-GAN is 10.27+-0.15, whereas GP-WGAN's is 9.85+-0.17. Since there is only one class in LSUN bedroom, the inception score is not a proper evaluation metric for the experiments on this dataset. Visually, there is no clear difference between the generated samples of GP-WGAN and CT-GAN.", "We thank the reviewer for the very positive and affirmative comments about our work. \n\nWe also appreciate the suggestions for improving the writing clarify of the paper. The following has been incorporated in the revised paper.\n\n== M vs. M' == \nWe use the notation $M$ in eq. (3) and a different $M'$ in eq. (4) to reflect the fact that the continuity will be checked only sparsely at some data points in practice. ... ... Note that, however, it becomes impossible to compute the distance $d(\\bm{x}',\\bm{x}'')$ between the two virtual data points. In this work, we assume it is bounded by a constant and absorb the constant to $M'$. Accordingly, we tune $M'$ in our experiments to take account of this unknown constant; the best results are obtained between $M'=0$ and $M'=0.2$. \n\n== Semi-supervised experiment details and the CIFAR-10 augmentation process ==\n\nMNIST: There are 60,000 images in total. We randomly choose 10 data points for each digit as the labeled set. No data augmentation is used.\n\nCIFAR-10: There are 50,000 image in total. We randomly choose 400 images for each class as the labeled set. We augment the data by horizontally flipping the images and randomly translating the images within [-2,2] pixels. 
No ZCA whitening is used.\n\nModel Configuration\n\nTable 1: MNIST\n--------------\nClassifier C | Generator G\nInput: Labels y, 28*28 Images x | Input: Noise 100 z\nGaussian noise 0.3, MLP 1000, ReLU | MLP 500, Softplus, Batch norm \nGaussian noise 0.5, MLP 500, ReLU | MLP 500, Softplus, Batch norm \nGaussian noise 0.5, MLP 250, ReLU | MLP 784, Sigmoid, Weight norm \nGaussian noise 0.5, MLP 250, ReLU | \nGaussian noise 0.5, MLP 250, ReLU | \nGaussian noise 0.5, MLP 10, Softmax | \n\nTable 2: CIFAR-10\n-----------------\nInput: Labels y, 32*32*3 Colored Image x, | Input: Noise 50 z\n------------------------------------------------------------------------------\n0.2 Dropout | MLP 8192, ReLU, BN \n3*3 conv. 128, Pad =1, Stride =1, lReLU, Weight norm | Reshape 512*4*4 \n3*3 conv. 128, Pad =1, Stride =1, lReLU, Weight norm | 5*5 deconv. 256*8*8, \n3*3 conv. 128, Pad =1, Stride =2, lReLU, Weight norm | ReLU, Batch norm \n------------------------------------------------------------------------------\n0.5 Dropout |\n3*3 conv. 256, Pad =1, Stride =1, lReLU, Weight norm |\n3*3 conv. 256, Pad =1, Stride =1, lReLU, Weight norm | 5*5 deconv. 128*16*16, \n3*3 conv. 256, Pad =1, Stride =2, lReLU, Weight norm | ReLU, Batch norm \n------------------------------------------------------------------------------\n0.5 Dropout | \n3*3 conv. 512, Pad =0, Stride =1, lReLU, Weight norm |\n3*3 conv. 256, Pad =0, Stride =1, lReLU, Weight norm | 5*5 deconv. 3*32*32, \n3*3 conv. 128, Pad =0, Stride =1, lReLU, Weight norm | Tanh, Weight norm \n-----------------------------------------------------------------------------\nGlobal pool |\nMLP 10, Weight norm, Softmax |\n\n== Hyper-parameters == \nWe set \\lambda = 1.0 in Eq.(7) in all our experiments. For CIFAR-10, the number of training epochs is set to 1,000 with a constant learning rate of 0.0003. For MNIST, the number of training epochs is set to 300 with a constant learning rate of 0.003. The other hyper-parameters are exactly the same as in the improved GAN (Salimans et al., 2016).\n\n== New Title == \nImproving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect\n", "We are pleased to see that the reviewer thinks our \"generated data showed very good qualities\" and \"the SSL-GANs results were impressive\".\n\n=Q1=\n\"Missing citations: the most related work of this one is the DRAGAN\"\n\nWe would consider WGAN and WGAN-GP as the most related works to ours and DRAGAN ranks after them. As a matter of fact, DRAGAN is an unpublished work and has not been peer-reviewed. As another matter of fact, the gradient penalty in DRAGAN is the same as in WGAN-GP except that it is imposed around the real data while WGAN-GP applies it to the points sampled between the real and the generated ones. \n\nNext, we highlight some key differences between DRAGAN and ours. \n\nWe propose to improve Wasserstein GAN, while DRAGAN works with GAN. \n\nDRAGAN aims to reduce the non-optimal saddle points in the minmax two-player training of GANs. In contrast, we propose an approach to enforcing the 1-Lipschitz continuity over the critic of WGANs. \n\nOne of our key observations is that it blurs the generated samples if we add noise directly to the data points, as done in DRAGAN. Instead, we perturb the hidden layers of the discriminator. \n\nDRAGAN perturbs a data point once while we do it twice in each iteration. 
After the perturbations, DRAGAN penalizes the gradients while we enforce the consistency of the outputs.\n\nOne of the most distinct features of our approach is that it seamlessly integrates the semi-supervised learning method by Laine & Aila (2016) with GANs. \n\n=Q2=\n\"the paper should discuss ... [Arjovsky & Bottou 2017], [Wu et al. 2016]\"\n\nWe had included both in our paper. Arjovsky & Bottou 2017 analyzes some distribution divergences and their effects in training GANs. Wu et al. 2016 propose to quantitatively evaluate the decoder-based generative models by annealed importance sampling. In our paper, we focus on a different subject, i.e., to design an algorithmic solution to the difficulty of training GANs.\n\n=Q3=\n\"the paper missed several details\"\n\nPlease see either Appendices A and B of the revised paper or the following for our answer to this question.\n\nGiven the context of the question, we believe it is about SSL. We follow the experiment setups in the prior works so that our results are directly comparable to theirs. Please see below for more details. If you are interested, you may also check out our code: https://github.com/biuyq/CT-GAN/blob/master/CT-GANs/Theano_classifier/CT_CIFAR-10_TE.py.\n\nMNIST: There are 60,000 images in total. We randomly choose 10 data points for each digit as the labeled set. No data augmentation is used.\n\nCIFAR-10: There are 50,000 image in total. We randomly choose 400 images for each class as the labeled set. We augment the data by horizontally flipping the images and randomly translating the images within [-2,2] pixels. No ZCA whitening is used.\n\nModel Configurations: We had included them in the appendix.\n\nHyper-parameters: We set lambda = 1.0 in Eq.(7) in all our experiments. For CIFAR-10, the number of training epochs is set to 1,000 with a constant learning rate of 0.0003. For MNIST, the number of training epochs is set to 300 with a constant learning rate of 0.003. The other hyper-parameters are exactly the same as in the improved GAN (Salimans et al., 2016).\n\n=Q4=\n\"... which part of the model works\"\n\nPlease see either Appendix C of the revised paper or the following for our answer to this question.\n\nWe have done some ablation studies about our semi-supervised learning approach on CIFAR-10. \nMethod, Error\nw/o CT, 15.0\nw/o GAN (note 1), 12.0\nw batch norm (note 2), --\nw/o D_, 10.7\nOURS, 10.0\n\nNote 1: This almost reduces to TE (Laine & Aila, 2016). All the settings here are the same as TE except that we use the extra regularization ($D\\_(.,.)$ in CT) over the second-to-last layer.\nNote 2: We use the weight normalization as in (Salimans et al., 2016), which becomes a core constituent of our approach. The batch normalization would actually invalidate the feature matching in (Salimans et al., 2016).\n\nWe can see that both GAN and the temporal ensembling effectively contribute to our final results. The results without our consistency regularization (w/o CT) drop more than those without GAN. We are running the experiments without any data augmentation and will include the corresponding results in the paper.\n\n=Q5=\n\"...it is better to evaluate it on more datasets\"\n\nWe have run some new experiments on the SVHN dataset. Ours is the best among all the GAN based semi-supervised learning methods, and is on par with the state of the arts.\nMethod, Error\nPI Laine & Aila 2016, 4.8\nTE Laine & Aila 2016, 4.4\nTarvainen & Valpola 2017, 4.0\nMiyato et al. 2017, 3.86\nSalimans et al. 2016, 8.1\nDumoulin et al. 
2016, 7.4\nKumar et al. 2017, 5.9\nOurs, 4.2", "This paper points out a potential caveat for the improved training of WGAN approach. The gradient penalty term takes effects only upon points sampled on the lines connecting pairs of data points sampled from the real distribution and the model distribution. At the beginning of the training the Lipschitz continuity over the manifold supporting the real distribution is not enforced because at the beginning stage the synthetic data points G(z), and hence the sampled points $\\hat{x}$, could be far away from the manifold. The author introduces a natural solution to overcome that problem, that is to additionally impose the Lipschitz continuity condition over the manifold supporting the real data distribution. This paper showed that WGAN with consistency term can generate sharper and more realistic samples than most state-of-art GANs. Moreover, they proposed a framework for semi-supervised training which is able to train a decent GAN model.\n\nGenerally speaking, the author did a pretty good job on improving the training of WGANs, and the results are very impressive. We re-conducted some of the experiments using the code released by author, and here are several comments based on our findings:\n\nMathematical rigorousness: Since throughout the experiments M' has been held at 0, it drops out from the consistency term (4); furthermore, since the denominator d(x_1,x_2) is a constant number, and the numerator is by the definition of metrics a value greater than or equal to 0, the consistency term effectively reduces to \n\n CT|_{x_1,x_2}= E_{x_1,x_2 ~ P_r}[d(D(x_1),D(x_2))].\n\nWe find it hard to infer how the Lipschitz continuity is enforced from merely adding a metric d(D(x_1),D(x_2)) as an additional constraint, and we suspect that the actual training has deviated from the initial motivation which was to enforce the Lipschitz continuity over the manifold supporting the real data distribution.\n\nMore experiments on larger and higher dimensional dataset: The paper has shown a noticeable improvement on the generated CIFAR-10 image quality. Nevertheless, higher dimensional datasets should be considered in the experiments. This quality improvement on higher dimensional images can be more noticeable. Furthermore, in the paper, the authors only used 1000 samples to train the generator; we think that using larger datasets can also verify their claims more persuasively. We also tried to use the original code from [1] for MNIST data. Our results show that using the whole MNIST dataset leads to less vague digits (low contrast between foreground and background) than only using 1000 images.\n\nOverfitting analysis: The paper stated that CT-WGAN is less prone to overfitting. We have verified this claim in our experiment. However, the reason for that is less clear in the paper. We think the dropout perturbation plays an important role in avoiding overfitting. We modified GP-WGAN architecture by adding the dropout layers in the same way as in CT-WGAN. Our results show that GP-WGAN becomes less prone to overfitting after adding dropout. Therefore, we suspect that the non-overfitting property of CT-WGAN is a direct consequence of adding dropout regularization, instead of adding consistency term.\n\n[1] I.Gulrajani, F.Ahmed, M.Arjovsky, V.Dumoulin, and A.Courville, \"Improved training of wasserstein gans,\"arXiv preprintarXiv:1704.00028, 2017.", "That’s a great question! We were actually wondering about a similar one. 
We can certainly interpret the formulation as that it encourages the network to be resilient to the dropout noise --- one of the notions that motivates the temporal ensembling semi-supervised learning method. In addition to that, however, it also enforces the Lipschitz continuity over the discriminator because of equation (3). Thanks to the dual roles of this formulation, we are able to use it to both improve the training of WGANs and connect GAN with the temporal ensembling method. \n\nWe have re-run the experiments using margin $M’=0.2$. To show that the margin, albeit small, plays an active role, we have got some statistics of the $d(D,D)+0.1 d(D_,D_)$ term over the last 10 epochs. We can see that the median values of that term are smaller and the max values are larger than the margin. \n\nMin 0.0162 0.0153 0.0149 0.0171 0.0170 0.0159 0.0140 0.0146 0.0159 0.0144\nMedian 0.1130 0.1133 0.1124 0.1138 0.1114 0.1123 0.1124 0.1122 0.1125 0.1111\nMax 7.1718 6.1229 7.1985 7.3505 4.9636 5.2252 5.2559 5.3058 5.9905 4.8519\n\n\n\n\n\n\n\n\n", "Section 1.2 and Figure 1 outline the general idea behind the approach.\n\nI wonder however, how much of the intuitive explanation is still valid in the actual CT loss.\n\nTo dissect this a little:\nd(x1, x2) being a metric should always be positive. that means all instances of max(0, d(x1,x2)) reduce to d(x1,x2)\n\nLooking at Eq 4 and 5, given M=0, d(x1,x2) is assumed to be constant, it reduces to \nCT_(x1,x2) = E_{x1,x2} d(D(x1),D(x2))\nwith the constant d(x1,x2) absorbed into d.\n\nHowever, since the input is not changed but rather the network, a better notation would probably be\n\nCT = E_(x, \\theta_1, \\theta_2) d(D(x, \\theta_1), D(x, \\theta_2))\n\nwhere \\theta_1, \\theta_2 are the noise vectors used for dropout.\n\nLooking at that formulation, does this still mean it penalizes the gradient in the original input space or would it be more appropriate to say it encourages the resilience to dropout (or is that actually the same thing)?\n", "The table below shows the ablated study results:\n\nMethod | Test Error\nOURS w/o CT | 14.98+-0.43\nOURS w/o GAN * | 11.98+-0.32\nOURS w batch norm ** | --\nOURS w/o D_(.,.) over the second-to-last layer | 10.70+-0.24\nOURS | 9.98+-0.21\n\n* This almost reduces to TE (Laine & Aila, 2016). All the settings are exactly the same as in TE (Laine & Aila, 2016) except that we use the extra regularization (D_(.,.) in CT) over the second-to-last layer.\n** We use the weight normalization as in (Salimans et al., 2016), which becomes a core constituent of our approach. The batch normalization would actually invalidate the feature matching in (Salimans et al., 2016).\n\nSamuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprintarXiv:1610.02242, 2016.\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. In Advances in Neural Information Processing Systems,pp. 2234–2242, 2016.", "Here is the code for this paper: https://github.com/biuyq/CT-GAN\n", "We have done some ablated studies but they are not as thorough as you suggested. We will complete them and then get back to you soon. Thanks! \n\nObservations thus far: Both the consistent regularization and GAN are necessary to arrive at the report results, and the results without the consistency drop more than those without GAN.\n\n", "Hello Hongyi, \n\nWe will add a column or a new table about the results with and without the data augmentations. Thank you for the pointer! 
\n\nBest,\n\n\n", "Thank you for directing us to DRAGAN. Sorry for missing it in our paper. We will include it in the updated version.\n\nGoing back to your question, the short answer is no because we do not actually aim to smooth the discriminator though the approach may have that effect. The long answer below clarifies it further and additionally highlights some differences between ours and DRAGAN.\n\nMotivations: DRAGAN aims to reduce the non-optimal saddle points in the minmax two-player training of GANs by drawing results from the minimax theorem for zero-sum game. In sharp contrast, we propose an alternative way of enforcing the 1-Lipschitz continuity over the “critic” of WGANs thanks to the recent results by Arjovsky & Bottou (2017). \n\nHow to add the perturbations: One of the key observations in our experiments is that it reduces the quality of the generated samples if we add noise directly to the data points, as what is done in DRAGAN. Similar observations are reported by Arjovsky & Bottou (2017) and Wu et al. (2016). After many painstaking trials, we find good results by perturbing the hidden layers of the discriminator instead (as opposed to perturbing the original data). Besides, DRAGAN perturbs a data point once while we do it twice in an iteration. \n\nHow to use the perturbation: Similar to the gradient penalty proposed in (Gulrajani et al., 2017), DRGAN introduces a same regularization whereas for different reasons. In contrast, ours is a consistent regularization derived from the basic definition of Lipschitz continuous functions. \n\nSemi-supervised learning: One of the most notable features of our approach is that it seamlessly integrates the semi-supervised learning method by Laine & Aila (2016) with GANs. \n\nFinally, here is the DRAGAN paper we found on ArXiv: https://arxiv.org/abs/1705.07215 just to confirm it with you. Going back to the DRGAN work, it would be interesting to investigate whether it generates blurry images too, for example by comparing the results of different amount of noise including no noise. It may do not because it constraints the gradient as oppose to the discriminator’s output. \n \n", "Sorry about that and Thank you for noting it! We will correct it in the updated version. ", "Thank you for the interest in our work. Please see below for the answers to your questions. \n\n(1) Dropout ratio: \n\n## The network used to learn from 1000 labeled CIFAR10 images only\nDiscriminator Generator\nInput: 3*32*32 Image x Input: Noise z 128\n5*5 conv. 128, Pad = same, Stride = 2, lReLU MLP 8192, ReLU, Batch norm\n0.5 Dropout Reshape 512*4*4\n5*5 conv. 256, Pad = same, Stride = 2, lReLU 5*5 deconv. 256*8*8,\n0.5 Dropout ReLU, Bach norm\n5*5 conv. 512, Pad = same Stride = 2, lReLU 5*5 deconv. 128*16*16\n0.5 Dropout ReLU, Batch norm\nReshape 512*4*4 (D_) 5*5 deconv. 3*32*32\nMLP 1 (D) Tanh\n\n## ResNet:\nDiscriminator | Generator\nInput: 3*32*32 Image x | Input: Noise z 128\n[3*3]*2 Residual Block, Resample = DOWN | MLP 2048\n128*16*16 | Reshape 128*4*4\n[3*3]*2 Residual Block, Resample = DOWN | [3*3]*2 Residual Block, Resample = UP\n128*8*8 0.2 Dropout | 128*8*8\n[3*3]*2 Residual Block, Resample = None | [3*3]*2 Residual Block, Resample = UP\n128*8*8 0.5 Dropout | 128*16*16\n[3*3]*2 Residual Block, Resample = None | [3*3]*2 Residual Block, Resample = UP\n128*8*8 0.5 Dropout | 128*32*32\nReLU, Global mean pool (D_) | 3*3 conv. 3*32*32\nMLP 1 (D) | Tanh\n\n## The network for MNIST\nDiscriminator Generator\nInput: 1*28*28 Image x Input: Noise z 128\n5*5 conv. 
64, Pad = same, Stride = 2, lReLU MLP 4096, ReLU\n0.5 Dropout Reshape 256*4*4\n5*5 conv. 128, Pad = same, Stride = 2, lReLU 5*5 deconv. 128*8*8\n0.5 Dropout ReLU, Cut 128*7*7\n5*5 conv. 256, Pad = same, Stride = 2, lReLU 5*5 deconv. 64*14*14\n0.5 Dropout ReLU\nReshape 256*4*4 (D_) 5*5 deconv. 1*28*28\nMLP 1 (D) Sigmoid\n\n(2) No. There are only two perturbations, as denoted by x' and x’’, for a data point x in each iteration. They are independently generated by the dropout as shown in my answer to your question (1). In other words, the two terms equation (5) are actually calculated over the same pair of x' and x'' for each draw x ~ P_r.\n\nWe will try to release the code in one or two weeks.", "Thanks for your clarification, it's very helpful.\n\nI think it is good to explicitly compare the data augmentation used by different methods in Table 2, so that the interested readers don't assume they all use the same augmentation, or don't have to look up each paper to figure out what augmentation each method used. For example, AFAIK, the Ladder Networks paper (Table 3, https://arxiv.org/pdf/1507.02672.pdf) reported results without data augmentation.", "I'm impressed by your semi-supervised learning results. However, without an ablation study it's hard to tell why your method works so well. Do you have any ideas about what's causing the improvement? It could be\n(1) You use both a GAN and consistency regularization (prior work uses one or the other).\n(2) Your GAN works better.\n(3) Your consistency regularization is better (either because dropout is better than Gaussian noise or because the second-to-last layer consistency term helps).\n(4) Improvements to the architecture/hyperparameters (e.g., using weight-norm instead of batch-norm as you mention in the appendix).\n\n", "Dear Hongyi,\n\nSorry for the late response. We did not receive or had missed the notification from Openreview about your comment. Following (Laine & Aila, 2016, Miyato et al., 2017, Tarvainen & Valpola, 2017), we do not apply any augmentation to MNIST and yet augment the CIFAR10 images in the following way. We flip the images horizontally and randomly translate the images within [-2,2] pixels horizontally. \n\nSamuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.\nTakeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.\nAntti Tarvainen and Harri Valpola. Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780, 2017.", "In Figure 2, there is an \"augmentation\" process before feeding the input x into the network D. Could you clarify what \"augmentation\" means here? In particular, what kind of data preprocessing did you use in the semi-supervised learning experiments?\n\nThanks!", "Thanks for the clarification. Comparing with DRAGAN in your experiments would have helped to understand where the benefit is coming from. They show improvements over WGAN-GP as well but use only DCGAN architecture. \n\nGood luck!", "Now I can understand the experimental details. Thanks.", "Looks like an interesting paper!\n\nI noticed you accidentally cited Salimans et al. 
on the fourth row of Table 2 when you (probably) meant to cite our work: https://arxiv.org/abs/1703.01780", "Your contribution looks like a relaxed version of DRAGAN's regularization scheme, which you don't cite anywhere. Is that correct? \n\nKodali, N., Abernethy, J., Hays, J. and Kira, Z., 2017. How to Train Your DRAGAN. arXiv preprint arXiv:1705.07215.", "I appreciate your contribution to generative model research. The results seem to be great.\nI was trying to reproduce the paper's result, but I think it is difficult to get some details:\n(1) Dropout ratio for generative models is not specified. It might be better to have Tables like Tables 4 and 5 for generative modeling tasks.\n(2) I could not understand the meaning of \"We find that it slightly improves the performance by further controlling the second-to-last layer D_(.) of the discriminator.\" Are we generating two more perturbed points x''' and x'''' by inserting a dropout layer at second-to-last layer -- as opposed to perturbed points x' and x''?" ]
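Editor's note on the consistency term (CT) debated in the thread above: the authors describe perturbing each real sample twice via dropout inside the discriminator and penalizing max(0, d(D(x'), D(x'')) + 0.1 * d(D_(x'), D_(x'')) - M'), kept alongside the usual WGAN gradient penalty, and they also mention a pairwise diagnostic that tracks |D(x1) - D(x2)| / |x1 - x2| over real pairs. The sketch below is a minimal PyTorch illustration of those two ideas only; it is not the authors' released code (their Theano implementation is linked in the thread), and the toy architecture, the squared-distance choice for d, and all hyperparameters other than the quoted 0.1 weight and M' range are assumptions.

# Illustrative reconstruction of the consistency term (CT) discussed above -- not the authors' code.
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Toy critic whose dropout layers supply the two stochastic perturbations x', x''."""
    def __init__(self, in_dim=784, hidden=256, p_drop=0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2), nn.Dropout(p_drop),
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        feat = self.body(x)       # D_(x): second-to-last layer representation
        score = self.head(feat)   # D(x): scalar critic output
        return score, feat

def consistency_term(critic, x_real, m_prime=0.0, feat_weight=0.1):
    """CT = E[ max(0, d(D(x'), D(x'')) + 0.1 * d(D_(x'), D_(x'')) - M') ].
    The thread reports the best M' between 0 and 0.2; squared distance for d is an assumption."""
    critic.train()                                  # keep dropout active so the two passes differ
    s1, f1 = critic(x_real)
    s2, f2 = critic(x_real)
    d_out = (s1 - s2).pow(2).mean(dim=1)            # distance between the two critic outputs
    d_feat = (f1 - f2).pow(2).mean(dim=1)           # distance on the second-to-last layer
    return torch.clamp(d_out + feat_weight * d_feat - m_prime, min=0.0).mean()

def lipschitz_ratio_check(critic, x_real):
    """Diagnostic from the rebuttal: max over real pairs of |D(x1) - D(x2)| / |x1 - x2|."""
    critic.eval()                                   # deterministic critic for the check
    with torch.no_grad():
        scores, _ = critic(x_real)
    half = x_real.size(0) // 2
    x1, x2 = x_real[:half], x_real[half:2 * half]
    s1, s2 = scores[:half], scores[half:2 * half]
    num = (s1 - s2).abs().squeeze(1)
    den = (x1 - x2).view(half, -1).norm(dim=1).clamp_min(1e-12)
    return (num / den).max().item()

if __name__ == "__main__":
    critic = Critic()
    x = torch.randn(64, 784)                        # stand-in for a batch of real samples
    ct = consistency_term(critic, x, m_prime=0.0)
    print("CT penalty:", ct.item(), "| max pairwise ratio:", lipschitz_ratio_check(critic, x))

In this reading, the key design choice is that the perturbation happens in the hidden layers (via fresh dropout masks on the same real batch) rather than by adding noise to the inputs, which the authors report avoids blurring the generated samples.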
[ -1, -1, -1, 4, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BJNCqPqxM", "BJNCqPqxM", "B11HQF84M", "iclr_2018_SJx9GQb0-", "SywG98VGM", "iclr_2018_SJx9GQb0-", "iclr_2018_SJx9GQb0-", "HkbRNKwef", "SksBJ-fzz", "ryT2f8KgM", "BJNCqPqxM", "iclr_2018_SJx9GQb0-", "BJhxQVieM", "iclr_2018_SJx9GQb0-", "BynjqwxJf", "iclr_2018_SJx9GQb0-", "r1mlEqyyz", "rJm5I9kJM", "ry2iCPQRb", "HkMNKONR-", "HJ-LeemC-", "BJzbrHC0-", "iclr_2018_SJx9GQb0-", "Syb85wYR-", "iclr_2018_SJx9GQb0-", "Hk9rHlS0-", "BkWTPbHCW", "iclr_2018_SJx9GQb0-", "iclr_2018_SJx9GQb0-", "iclr_2018_SJx9GQb0-" ]
iclr_2018_BJIgi_eCZ
FusionNet: Fusing via Fully-aware Attention with Application to Machine Comprehension
This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "History of Word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it identifies an attention scoring function that better utilizes the "history of word" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.
accepted-poster-papers
State-of-the-art results on SQuAD (at least at the time of submission) with a nice model. The authors have since applied the model to additional tasks (SNLI). Good discussion with reviewers, a well-written submission, and all reviewers suggest acceptance.
train
[ "r1HLacEHf", "r1AM2beHM", "r1ApvdPxG", "SJxIVpkZM", "S1UrbZQ-f", "H1NajF9mf", "BJi3jC4-f", "SJeISQYWM", "rJtar37Zf", "HyLoVfF-f", "ryjOHJHbG", "H1qsigSWz", "rJK-1DBZM", "r13F2uHZM", "HJ3IoLB-z", "SJxG58rZM", "S1OQa84Wf", "rJYnDN4-f", "rkqkzzm-G", "HkeZo1MZG", "rJBpWyR1z", "H1sqbJ01G", "SJftaCpyM", "Sy1MCpzJf" ]
[ "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "public", "public", "public", "public", "author", "public", "public", "author", "author", "public", "public" ]
[ "I loved reading your paper. Very well written. Great work!\nI would like to know more about your ensemble model. You have mentioned that your ensemble contains 39 models. Can you please comment on what these models are?\n\nHow long did the training take during your experiments? What is the batch size? And GPU memory requirements? Also, do you plan to open-source your code?", "The performance of the model on SQuAD dataset is impressive. In addition to the performance on the test set, we are also interested in the sample complexity of the proposed model. Currently, the SQuAD dataset splits the collection of passages into a training set, a development set, and a test set in a ratio of 80%:10%:10% where the test set is not released. Given the released training and dev set, we are wondering what would happen if we split the data in a different ratio, for example, 50% for training and the rest 50% for dev. We will really appreciate it if the authors can report the model performance (on training/dev respectively) under this scenario. \n", "The paper first analyzes recent works in machine reading comprehension (largely centered around SQuAD), and mentions their common trait that the attention is not \"fully-aware\" of all levels of abstraction, e.g. word-level, phrase-level, etc. In turn, the paper proposes a model that performs attention at all levels of abstraction, which achieves the state of the art in SQuAD. They also propose an attention mechanism that works better than others (Symmetric + ReLU).\n\nStrengths:\n- The paper is well-written and clear.\n- I really liked Table 1 and Figure 2; it nicely summarizes recent work in the field.\n- The multi-level attention is novel and indeed seems to work, with convincing ablations.\n- Nice engineering achievement, reaching the top of the leaderboard (in early October).\n\n\nWeaknesses:\n- The paper is long (10 pages) but relatively lacks substances. Ideally, I would want to see the visualization of the attention at each level (i.e. how they differ across the levels) and also possibly this model tested on another dataset (e.g. TriviaQA).\n- The authors claim that the symmetric + ReLU is novel, but I think this is basically equivalent to bilinear attention [1] after fully connected layer with activation, which seems quite standard. Still useful to know that this works better, so would recommend to tone down a bit regarding the paper's contribution.\n\n\nMinor:\n- Probably figure 4 can be drawn better. Not easy to understand nor concrete.\n- Section 3.2 GRU citation should be Cho et al. [2].\n\n\nQuestions:\n- Contextualized embedding seems to give a lot of improvement in other works too. Could you perform ablation without contextualized embedding (CoVe)?\n\n\nReference:\n[1] Luong et al. Effective Approaches to Attention-based Neural Machine Translation. EMNLP 2015.\n[2] Cho et al. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. EMNLP 2014.", "The primary intellectual point the authors make is that previous networks for machine comprehension are not fully attentive. That is, they do not provide attention on all possible layers on abstraction such as the word-level and the phrase-level. The network proposed here, FusionHet, fixes problem. Importantly, the model achieves state-of-the-art performance of the SQuAD dataset.\n\nThe paper is very well-written and easy to follow. I found the architecture very intuitively laid out, even though this is not my area of expertise. 
Moreover, I found the figures very helpful -- the authors clearly took a lot of time into clearly depicting their work! What most impressed me, however, was the literature review. Perhaps this is facilitated by the SQuAD leaderboard, which makes it simple to list related work. Nevertheless, I am not used to seeing comparison to as many recent systems as are presented in Table 2. \n\nAll in all, it is difficult not to highly recommend an architecture that achieves state-of-the-art results on such a popular dataset.", "(Score before author revision: 4)\n(Score after author revision: 7)\n\nI think the authors have taken both the feedback of reviewers as well as anonymous commenters thoroughly into account, running several ablations as well as reporting nice results on an entirely new dataset (MultiNLI) where they show how their multi level fusion mechanism improves a baseline significantly. I think this is nice since it shows how their mechanism helps on two different tasks (question answering and natural language inference).\n\nTherefore I would now support accepting this paper.\n\n------------(Original review below) -----------------------\n\nThe authors present an enhancement to the attention mechanism called \"multi-level fusion\" that they then incorporate into a reading comprehension system. It basically takes into account a richer context of the word at different levels in the neural net to compute various attention scores.\n\ni.e. the authors form a vector \"HoW\" (called history of the word), that is defined as a concatenation of several vectors:\n\nHoW_i = [g_i, c_i, h_i^l, h_i^h]\n\nwhere g_i = glove embeddings, c_i = COVE embeddings (McCann et al. 2017), and h_i^l and h_i^h are different LSTM states for that word.\n\nThe attention score is then a function of these concatenated vectors i.e. \\alpha_{ij} = \\exp(S(HoW_i^C, HoW_j^Q))\n\nResults on SQuAD show a small gain in accuracy (75.7->76.0 Exact Match). The gains on the adversarial set are larger but that is because some of the higher performing, more recent baselines don't seem to have adversarial numbers.\n\nThe authors also compare various attention functions (Table 5) showing a particularone (Symmetric + ReLU) works the best. \n\nComments:\n\n-I feel overall the contribution is not very novel. The general neural architecture that the authors propose in Section 3 is generally quite similar to the large number of neural architectures developed for this dataset (e.g. some combination of attention between question/context and LSTMs over question/context). The only novelty is these \"HoW\" inputs to the extra attention mechanism that takes a richer word representation into account.\n\n-I feel the model is seems overly complicated for the small gain (i.e. 75.7->76.0 Exact Match), especially on a relatively exhausted dataset (SQuAD) that is known to have lots of pecularities (see anonymous comment below). It is possible the gains just come from having more parameters.\n\n-The authors (on page 6) claim that that by running attention multiple times with different parameters but different inputs (i.e. \\alpha_{ij}^l, \\alpha_{ij}^h, \\alpha_{ij}^u) it will learn to attend to \"different regions for different level\". However, there is nothing enforcing this and the gains just probably come from having more parameters/complexity.", "We have conducted additional experiments based on Reviewer 3's thoughtful comments and included the results in our paper.\n\n(1) Additional ablation study on input vectors. 
(Appendix C)\n\nThe following is a summary of our additional experimental results. The experiment shows that FusionNet, with or without CoVe, single or ensemble, all yield clear improvement over previous state-of-the-art models on all 3 machine comprehension datasets.\n\n=== SQuAD (Dev EM) ===\n> FusionNet: 75.3 (+3.2)\n> FusionNet (without CoVe): 74.1 (+2.0)\nPrevious SotA [1]: 72.1\n\n=== AddSent (Dev EM) ===\n> FusionNet (Single): 45.6 (+4.9)\n> FusionNet (Single, without CoVe): 47.4 (+6.7)\n> FusionNet (Ensemble): 46.2 (+5.5)\nPrevious SotA [2] (Ensemble): 40.7\n\n=== AddOneSent (Dev EM) ===\n> FusionNet (Single): 54.8 (+6.1)\n> FusionNet (Single, without CoVe): 55.2 (+6.5)\n> FusionNet (Ensemble): 54.7 (+6.0)\nPrevious SotA [2] (Ensemble): 48.7\n\n(2) Application to Natural Language Inference. (Appendix D)\n\nFusionNet is an improved attention mechanism that can be easily applied to attention-based models. Here, we consider a different task on natural language inference (NLI). We focus on MultiNLI [3], a recent NLI corpus. MultiNLI is designed to be more challenging than SNLI [4] since many models already outperform human performance (87.7%) on SNLI. A state-of-the-art model for NLI is ESIM [5], which performs 88.0% on SNLI and 72.3% (in-domain), 72.1% (cross-domain) on MultiNLI [3]. We implemented a version of ESIM in PyTorch and improved ESIM with our proposed fully-aware multi-level attention mechanisms. For fair comparisons, we reduce the hidden size after adding our enhancements, so the parameter size after attention enhancement is less than or similar to ESIM with standard attention. A summary of the result is shown below. Experiments on natural language inference conform with our observed improvements in machine comprehension tasks.\n\n=== MultiNLI (Dev Cross-domain / In-domain) ===\nOur ESIM (d = 300): 73.9 / 73.7\nOur ESIM + fully-aware (d = 250): 77.3 / 76.5\nOur ESIM + fully-aware + multi-level (d = 250): 78.4 / 78.2\n\n(3) Multi-level attention visualization. (Appendix G)\n\nIn Appendix G, we included multi-level attention visualization and a qualitative analysis of the attention variations between low-level and high-level. These visualizations support our original motivation and provide an intuitive explanation for our superior performance, especially on the adversarial datasets.\n\nReferences:\n[1]: Minghao Hu, Yuxing Peng, and Xipeng Qiu. \"Reinforced Mnemonic Reader for Machine Comprehension.\" arXiv preprint arXiv:1705.02798 (2017).\n[2]: Robin Jia, and Percy Liang. \"Adversarial examples for evaluating reading comprehension systems.\" EMNLP (2017).\n[3]: Adina Williams, Nikita Nangia, and Samuel R. Bowman. \"A broad-coverage challenge corpus for sentence understanding through inference.\" arXiv preprint arXiv:1704.05426 (2017).\n[4]: Samuel R. Bowman et al. \"A large annotated corpus for learning natural language inference.\" EMNLP (2015).\n[5]: Qian Chen et al. \"Enhancing and combining sequential and tree lstm for natural language inference.\" ACL (2017).", "Hi!\n\nWe don't think there is any \"simple heuristic\" that could \"solve\" SQuAD nor does any literature mention such a comment. We presume that you might be referring to FastQA [1] (EM / F1 = 68.4 / 77.1) and DrQA [2] (EM / F1 = 70.0 / 79.0). Both are great models that are conceptually simple, yet they contain nontrivial uses of LSTM and attention. They also perform well on machine comprehension tasks other than SQuAD. 
We highly respect these two models since a relatively simple model that performs well is an exceptionally great model. In the design of FusionNet, we have also taken this philosophy into account and try our best to simplify our model.\n\nAdditionally, FusionNet have improved SQuAD performance to EM / F1 = 76.0 / 83.9. The improvement is significant (+6% in EM). We would not regard these model, including our FusionNet, to have solved SQuAD.\n\nThe reason for showing only the ensemble performance on the adversarial datasets is solely due to the space constraint. Furthermore, for most models, the ensemble performs better or on par with the single model on adversarial datasets.\n\nWe understand your concern since most ensemble contains 10~20 models. We will train a 10-model ensemble of FusionNet and evaluate this smaller ensemble on adversarial datasets.\n\nReferences:\n[1] Dirk Weissenborn, Georg Wiese, Laura Seiffe. \"Making Neural QA as Simple as Possible but not Simpler.\" CoNLL (2017).\n[2] Danqi Chen, et al. \"Reading Wikipedia to Answer Open-Domain Questions.\" ACL (2017).", "The authors should include results of evaluating their model without CoVe in the submitted paper to avoid a confusion whether their complex model is good a by feature engineering or not.", "Thank you for your review!\n\n1. Our improvement over best model published is actually ~3% in Exact Match (73.2 -> 76.0). We understand that from our Table 2, it seems FusionNet only improve the best \"published\" model (R-net) by EM 0.3 (single model). We apologize for not writing this part clear and have updated our paper accordingly. If you look into the ACL2017 paper of R-net [1] or the recent technical report (http://aka.ms/rnet), you will find that the best-published version of R-net only achieves EM: 72.3, F1: 80.7. It is lower than our model by near 4% in EM. It is because the authors of R-net have been designing new models without publishing it while using the same model name (R-net) on SQuAD leaderboard. At the time of ICLR2018 submission, the best-published model is Reinforced Mnemonic Reader [2] (https://arxiv.org/pdf/1705.02798.pdf), which achieved EM: 73.2, F1: 81.8, 1% higher than published version of R-net. Reinforced Mnemonic Reader proposed feature-rich encoder, semantic fusion unit, iterative interactive-aligning self-aligning, multihop memory-based answer pointer, and a reinforcement learning technique to achieve their high performance. On the other hand, utilizing our simple \"HoW\" attention mechanism, FusionNet obtained a decent performance (EM: 76.0, F1: 83.9) on original SQuAD with a relatively simplistic model. For example, in Table 6, by changing S(h_i^C, h_j^Q) to S(HoW_i^C, HoW_j^Q) in a vanilla model (encoder + single-level attention), we observed +8% improvement and achieved EM: 73.3, F1: 81.4 on the dev set. (best documented number on dev set: 72.1 / 81.6)\n\n2. In our paper, we only compare with the official results on adversarial dataset shown in this year EMNLP paper [3]. Adversarial evaluation of more recent higher performing models can be found in a website maintained by the author (Robin Jia). And we still significantly outperform these recent state-of-the-art methods.\nhttps://worksheets.codalab.org/worksheets/0x77ca15a1fc684303b6a8292ed2167fa9/\nFor example, Robin Jia has compared with another state-of-the-art model DCN+, which is also submitted to ICLR2018. On the AddSent dataset, DCN+ achieved F1: 44.5, and on the AddOneSent dataset, DCN+ achieved F1: 54.3. 
The results are comparable to the previous state-of-the-art on the adversarial datasets. But FusionNet is +6% higher than DCN+ on both datasets. We attribute this significant gain to the proposed HoW attention, which can very easily incorporate into other models. We are excited to share this simple idea with the community to improve machines in better understanding texts.\n\n3. SQuAD is a very competitive dataset, so it is unlikely that the significant gain we have (+3% over best-documented models, +5% in adversarial datasets) comes from giving the model more parameters. Furthermore, existing models can always incorporate more parameters if it helps. For example, in the high-performing Reinforced Mnemonic Reader, they can increase the number of iterations in iterative aligning or increase the hidden size in LSTM. Additionally, in our Table 6, we have compared FA All-level and FA Multi-level. FA All-level uses the same attention weight for different levels and fuses all level of representation (including input vector). FA All-level has more parameters than FA Multi-level but performs 2% worse. Based on Reviewer3's comment, we will also include visualization to show that multi-level attention will learn to attend to \"different regions for different levels.\"\n\nReferences:\n[1] Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. \"Gated self-matching networks for reading comprehension and question answering.\" ACL (2017).\n[2] Minghao Hu, Yuxing Peng, and Xipeng Qiu. \"Reinforced Mnemonic Reader for Machine Comprehension.\" arXiv preprint arXiv:1705.02798 (2017).\n[3] Robin Jia, and Percy Liang. \"Adversarial examples for evaluating reading comprehension systems.\" EMNLP (2017).", "Thank you so much for your encouraging review!", "Thank you so much for your thoughtful review!\n\nWe acknowledged that our paper is quite long since we hope to best clarify the high-level concepts and the low-level details for fully-aware attention and our model, FusionNet. We are willing to add the visualization of multi-level attention weights in the appendix and will do so during the revision period. To support the generalization of our model, we have also tested on two adversarial datasets in addition to SQuAD and showed significant improvement. We are also working on extending to other datasets such as TriviaQA. While most machine comprehension models operate at paragraph-level (including ours), TriviaQA requires processing document-level input. For example, the current state-of-the-art method on TriviaQA [1] uses a pipelined approach where machine comprehension model is only a part of the pipeline. Hence, we think a more in-depth study is needed to give a solid comparison on TriviaQA.\n\nWe agree that the symmetric formulation is only a slightly modified version of standard multiplicative attention. However, during our research study on various architectures, incorporating history-of-word in attention score calculation only yields marginal improvements when existing attention formulations are used. On the other hand, when the slightly modified symmetric form is used, fully-aware attention becomes substantially better than normal attention. In the paper, we emphasized the importance of the symmetric attention form in the hope of the future researchers to utilize fully-aware attention better. We have rewritten our manuscript to stress the identification of the proper attention function and give less focus on the proposition of novel formulation. 
Thank you for pointing this out.\n\nThank you for the question! CoVe is also helpful in our model. We have conducted some additional ablation study regarding input vectors based on your question.\n\nPerformance on Dev set is shown (EM / F1):\nFusionNet : 75.3 / 83.6\nFusionNet without CoVe (original setting) : 74.1 / 82.5\nFusionNet without CoVe (embedding dropout=0.3) : 73.7 / 82.4\nFusionNet with GloVe fixed: 75.0 / 83.2\n=====\nBest documented number [3]: 72.1 / 81.6\n\nNote that we have optimized the hyperparameters in FusionNet with CoVe included. We have also tried our best to simplify the model when CoVe is available since we believe a simpler model is a better model. Therefore it is not very suitable to directly compare the ablation result with other models without CoVe. For example, we did not include character embedding (giving 2% improvement in BiDAF [4]) or multi-hop reasoning (giving 1% improvement in Reinforced Mnemonic Reader [3]) in our model.\n\nMinor:\n- We will try our best to improve figure 4. It will be very kind of you if you could give us some suggestions.\n- Thank you for pointing out the typo in GRU citation.\n\nReference:\n[1] Clark, Christopher, and Matt Gardner. \"Simple and Effective Multi-Paragraph Reading Comprehension.\" arXiv preprint arXiv:1710.10723 (2017)\n[2] Jia, Robin, and Percy Liang. \"Adversarial examples for evaluating reading comprehension systems.\" EMNLP (2017).\n[3] Minghao Hu, Yuxing Peng, and Xipeng Qiu. \"Reinforced Mnemonic Reader for Machine Comprehension.\" arXiv preprint arXiv:1705.02798 (2017).\n[4] Minjoon Seo, et al. \"Bidirectional attention flow for machine comprehension.\" ICLR (2017).", "Hi!\n\nWe would kindly disagree with your statement. First of all, every model has it's unique input embeddings. BiDAF [1] uses the output of highway network that combines GloVe and Char embedding as the input vector. Reinforced Mnemonic Reader [2] proposes Feature-rich Encoder which includes additional features such as query category. It is not suitable to criticize that BiDAF achieved a previous SotA solely because of the Char embedding. The neural architecture as a whole is what delivers the SotA performance.\n\nNevertheless, we agree that including an ablation study on input vectors can help the community better understand the model. Here is the ablation study for FusionNet.\n\nPerformance on Dev set is shown (EM / F1):\nFusionNet : 75.3 / 83.6\nFusionNet without CoVe (original setting) : 74.1 / 82.5\nFusionNet without CoVe (embedding dropout=0.3) : 73.7 / 82.4\nFusionNet with GloVe fixed: 75.0 / 83.2\n=====\nBest documented number [2]: 72.1 / 81.6\n\nAs you can see, CoVe is indeed helpful in FusionNet. However, the ablated performance is still higher than the best model published (Reinforced Mnemonic Reader) by 2% in EM.\n\nNevertheless, it is not appropriate to directly compare our ablation result with other models without CoVe. We have optimized the hyperparameters in FusionNet when CoVe is present, and we have tried our best to simplify FusionNet when CoVe is included due to our belief that a simple model is a better model. For example, we did not include multi-hop reasoning [2] (giving reinforced mnemonic reader 1% improvement), character embedding (giving BiDAF 2% improvement) ... etc.\n\nThe core of FusionNet is the Fully-Aware Attention. We have found this simple enhancement to be exceptionally advantageous for machine comprehension. 
We sincerely hope the community can benefit from this simple but powerful idea.\n\nReferences:\n[1] Minjoon Seo, et al. \"Bidirectional attention flow for machine comprehension.\" ICLR (2017).\n[2] Minghao Hu, Yuxing Peng, and Xipeng Qiu. \"Reinforced Mnemonic Reader for Machine Comprehension.\" arXiv preprint arXiv:1705.02798 (2017).", "In our opinion, FusionNet is architecturally simpler than previous state-of-the-art methods (such as Reinforced Mnemonic Reader, which utilizes feature-rich encoder, semantic fusion unit, iterative interactive-aligning self-aligning, multihop memory-based answer pointer, and a reinforcement learning technique to achieve their great performance) in many ways while performing better. We are sorry to hear your opinion.\n\nStill, we hope this simple and powerful technique can help in your future application of machine comprehension!", "I agree, it is simpler than that model, but not at all simpler than, for instance, r-net which performs almost as well. Anyway, simplicity is just a personal preference for me until something more complex is able to make a significant improvement on *many* datasets. I believe we create complex networks in abundance, but it turns out that fair comparisons almost always show that they do not give a significant gain (both empirically and wrt. research insights). This, however, is my very subjective opinion.", "The mentioned models do not contain attention at all. They compute a weighted question representation that is all. So these model clearly indicate that much of SQuAD is more or less trivial. However, I am not saying that your model is not a contribution, only that it should be evaluated on another dataset since there are quite a few now. Otherwise, your improvements can stem from overfitting your architecture to SQuAD.\n\nI think having less comparable number of models in the ensemble is necessary to get the clear picture. This evaluation should also be done without CoVe, to be really comparable.", "Thanks for the clarification! It is nice to see that the model is still (slightly) better, but given the complexity this is not surprising. Overall the results are less impressive now. I also do not really buy the term fully aware attention. It is just attention on multiple levels, an engineering trick.", "Well, a lot of dataset papers and papers have shown that SQuAD is overly simple, due to limited context size and a lot of paraphrases. It requires only a rather simple heuristic to solve it. So not the entire community agrees on that.\n\nThe improvements on adversarial datasets are also not convincing by themselve, because (for some reason) you merely evaluate the ensemble against it, which is extremely large (more than 30 models), much larger (please correct me if I am wrong) than other ensembles. Since it is known that ensembling makes systems more robust against such attacks, this is not really comparable. Showing that a single model is more robust might help make this claim stronger though.", "Hi!\n\nWe are sorry to hear about your view on SQuAD. However, in our opinion, SQuAD is one of the best machine comprehension datasets currently available. The community also has a consensus on this. SQuAD has received the best resource paper award at EMNLP, and there has been active participation from many institutions (Salesforce, Google, Stanford, AI2, Microsoft, CMU, FAIR, HIT and iFLYTEK). Furthermore, many recent papers have shown that the performance on other datasets, such as TriviaQA, often supports the improvement on SQuAD. 
For example, in the comparison of [1], the F1 performances of three high-performing models (BiDAF, MEMEN, M-reader+RL) on SQuAD are BIDAF: 81.5, MEMEN: 82.7 (+1.2), M-reader+RL: 84.9 (+3.4). On the TriviaQA Web domain verified setting, the F1 performance becomes BIDAF: 55.8, MEMEN: 57.6 (+1.8), M-reader+RL: 61.5 (+5.7), which highly authenticate the improvement seen in SQuAD.\n\nNonetheless, we agree with your concern that a high performance on SQuAD may not always generalize to a different dataset. It is the exact motivation of recently proposed adversarial datasets [2], which has received this year EMNLP outstanding paper award. These adversarial datasets are very challenging. The accuracy of sixteen published models drops from an average of 75% F1 score on SQuAD to 36% on these datasets. Furthermore, many high-performing models on SQuAD performs particularly bad on these adversarial datasets.\n\nTo verify the generalization of our model in addition to SQuAD, we have also evaluated on two challenging adversarial datasets proposed in [2], AddSent and AddOneSent. From our paper, you can see that FusionNet not only performs well on SQuAD but also achieves a significant improvement (+5%) on these adversarial datasets over state-of-the-art methods.\n\nReferences:\n[1] Minghao Hu, Yuxing Peng, and Xipeng Qiu. \"Reinforced Mnemonic Reader for Machine Comprehension.\" arXiv preprint arXiv:1705.02798 (2017).\n[2] Robin Jia, and Percy Liang. \"Adversarial examples for evaluating reading comprehension systems.\" EMNLP (2017).", "Hi, most models this paper compares to are trained with GloVe embeddings but you only show results with CoVe (if I am not mistaken). I feel like this model is only able to achieve SotA because it uses CoVe and not because of the additional extensions. Is this correct? Do you have an ablation for that?", "I noticed that you only evaluate against SQuAD which is known to be a bad dataset for evaluating machine comprehension. It has only short documents and most of the answers are easily extractable. This is a bit troubling especially given that there are plenty of good and much more complex datasets out there, e.g., TriviaQA, NewsQA, just to mention a few. It feels like we are totally overfitting on a simple dataset. Would it be possible to also provide results on one of those, otherwise it is really hard to judge whether there is indeed any significant improvement. I think this is a big issue.", "Thank you for your compliment!", "Thank you for being interested in reproducing our work!\n\n- Which components are used without dropout?\nDropout is applied before every linear transform, including the input for each layer of LSTM and Attention. For fast implementation, we do not use hidden state dropout in LSTM. Also, an additional dropout is applied after the GloVe and CoVe embedding layer.\nThe dropout is shared across time step (i.e., variational dropout). And different linear layers use different dropout masks.\n\n- Attention dimensions for the various fusions present.\nFor all the fully-aware attention S(HoW_i, HoW_j), we used an attention dimension k = 250 (the same as the output size of BiLSTM).", "I set up a reproduction experiment for which I need a little clarification on the following.\n\n- Which components are used/set up without dropout?\n- Attention dimensions for the various fusions present.", "Nice simple model." ]
[ -1, -1, 7, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "S1OQa84Wf", "HJ3IoLB-z", "S1UrbZQ-f", "SJxIVpkZM", "r1ApvdPxG", "rkqkzzm-G", "SJxG58rZM", "rJK-1DBZM", "BJi3jC4-f", "H1qsigSWz", "rJYnDN4-f", "HkeZo1MZG", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "Sy1MCpzJf", "SJftaCpyM", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ" ]
iclr_2018_rkgOLb-0W
Neural Language Modeling by Jointly Learning Syntax and Lexicon
We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.
accepted-poster-papers
Nice language modeling paper with consistently high scores. The model structure is neat and the results are solid. Good ICLR-type paper with contributions mostly on the ML side and experiments on a (simple) NLP task.
train
[ "ByUN4M9xM", "rkJwIctlf", "HkLBrbaef", "Sk9e3S2fz", "r1NnBSnfz", "SkVObH3Mz", "B1_Q-r2fM", "rkX6IwPzM", "rkemARblM", "SJdy1V01z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "official_reviewer" ]
[ "** UPDATE ** upgraded my score to 7 based on the new version of the paper.\n\nThe main contribution of this paper is to introduce a new recurrent neural network for language modeling, which incorporates a tree structure More precisely, the model learns constituency trees (without any supervision), to capture syntactic information. This information is then used to define skip connections in the language model, to capture longer dependencies between words. The update of the hidden state does not depend only on the previous hidden state, but also on the hidden states corresponding to the following words: all the previous words belonging to the smallest subtree containing the current word, such that the current word is not the left-most one. The authors propose to parametrize trees using \"syntactic distances\" between adjacent words (a scalar value for each pair of adjacent words w_t, w_{t+1}). Given these distances, it is possible to obtain the constituents and the corresponding gating activations for the skip connections. These different operations can be relaxed to differentiable operations, so that stochastic gradient descent can be used to learn the parameters. The model is evaluated on three language modeling benchmarks: character level PTB, word level PTB and word level text8. The induced constituency trees are also evaluated, for sentence of length 10 or less (which is the standard setting for unsupervised parsing).\n\nOverall, I really like the main idea of the paper. The use of \"syntactic distances\" to parametrize the trees is clever, as they can easily be computed using only partial information up to time t. From these distances, it is also relatively straightforward to obtain which constituents (or subtrees) a word belongs to (and thus, the corresponding gating activations). Moreover, the operations can easily be relaxed to obtain a differentiable model, which can easily be trained using stochastic gradient descent.\n\nThe results reported on the language modeling experiments are strong. One minor comment here is that it would be nice to have an ablation analysis, as it is possible to obtain similarly strong results with simpler models (such as plain LSTM).\n\nMy main concern regarding the paper is that it is a bit hard to understand. In particular in section 4, the authors alternates between discrete and relaxed values: end of section 4.1, it is implied that alpha are in [0, 1], but in equation 6, alpha are in {0, 1}, then relaxed in equation 9 to [0, 1] again. I am also wondering whether it would make more sense to start by introducing the syntactic distances, then the alphas and finally the gates? I also found the section 5 to be quite confusing. While I get the\tgeneral idea, I am not sure what is the relation between hidden states h and m (section 5.1). Is there a mixup between h defined in equation 10 and h from section 5.1? I am aware that it is not straightforward to describe the proposed method, but believe it would be a much stronger paper if written more clearly.\n\nTo conclude, I really like the method proposed in this paper, and believe that the experimental results are quite strong.\nMy main concern\tregarding the paper is its clarity: I will gladly increase my score if the authors can improve the writing.", "Summary: the paper proposes a novel method to leverage tree structures in an unsupervised learning manner. The key idea is to make use of “syntactic distance” to identify phrases, thus building up a tree for input sentence. 
The proposed model achieves SOTA on a char-level language modeling task and is demonstrated to yield reasonable tree structures.\n\nComment: I like the paper a lot. The idea is very creative and interesting. The paper is well written.\n\nBesides the official comment that the authors already replied, I have some more:\n- I was still wondering how to compute the left hand side of eq 3 by marginalizing over all possible unfinished structures so far. (Of course, what the authors do is showed to be a fast and good approximation.)\n- Using CNN to compute d has a disadvantage that the range of look-back must be predefined. Looking at fig 3, in order to make sure that d6 is smaller than d2, the look-back should have a wide coverage so that the computation for d6 has some knowledge about d2 (in some cases the local information can help to avoid it, but not always). I therefore think that using an RNN is more suitable than using a CNN.\n- Is it possible to extend this framework to dependency structure?\n- It would be great if the authors show whether the model can leverage given tree structures (like SPINN) (for instance we can do a multitask learning where a task is parsing given a treebank to train)\n", "The paper proposes Parsing-Reading-Predict Networks (PRPN), a new model jointly learns syntax and lexicon. The main idea of this model is to add skip-connections to integrate syntax relationships into the context of predicting the next word (i.e. language modeling task).\n\nTo model this, the authors introduce hidden variable l_t, which break down to the decisions of a soft version of gate variable values in the previous possible positions. These variables are then parameterized using syntactic distance to ensure that the final structure inferred by the model has no overlapping ranges so that it will be a valid syntax tree.\n\nI think the paper is in general clearly written. The model is interesting and the experiment section is quite solid. The model reaches state-of-the-art level performance in language modeling and the performance on unsupervised parsing task (which is a by-product of the model) is also quite promising.\n\nMy main question is that the motivation/intuition of introducing the syntactic distance variable. I understand that they basically make sure the tree is valid, but the paper did not explain too much about what's the intuition behind this or is there a good way to interpret this. What motivates these d variables?", "Thanks for the comments and suggestions. In the new revision, we add the ablation test on PTB dataset. The \"-Parsing Net\" model in ablation test shows what the performance will be like without the syntactic modulation.", "Thanks a lot for your kind review and suggestions. We’d like to address your issues as follows:\n\nRegarding \"marginalizing over all possible unfinished structures so far\"\nMarginalizing over all possible unfinished structure is very difficult due to the fact that our model stacks multiple recurrent layers. One better approximation is that we compute left-hand side of both eq2 and eq3 by marginalizing over all possible local structures at each time step. In other words, we can sampling all possible l_t, then compute the weighted sum of the right-hand side of eq2 and eq3 with respect to different l_t and using p(l_{t}=t'|x_0, ...,x_t) as weights.\n\nRegarding \"using an RNN is more suitable than using a CNN\"\nWe totally agree with that. 
Using an LSTM can provide an unbounded context information for the gates, and that is definitely a good direction to try. We will probably try that in the future iterations of our model.\n\nRegarding \"extend this framework to dependency structure\"\nParsing network can only give boundary information for constituent parsing. However, it's possible to extract dependency information from attention weights, which remains an open question to study.\n\nRegarding \"leverage given tree structures\"\nWe also have thought about this. One possible way is to infer a set of true distances using the given tree structure and train the parsing network to generate a set of distances which align with the true distances. We haven’t done that in this work since we want to focus on unsupervised learning. This will be explored in our next work.\n\nThanks again for the comments and review!\n", "Thanks for the comments and suggestions. We have modified our manuscript accordingly in the updated version of the paper. \n\nFor the ablation studies, we’ve added a set of results in Section 6.2, Table 3. \n\nWe are sorry for the lack of clarity in the paper, and we have largely rewritten Section 4 in the hope of clarifying our explanation. \n\nTo answer the question in the review, \\alpha is expected to be in [0, 1] throughout the paper. In Eq. 6 in the updated paper, the hardtanh() function is a piecewise linear function defined by hardtanh(x) = max(-1, min(1, x)), which has a linear slope near zero, so its output is also in [0, 1]. In section 5.1, the m is the state that we regard as memory. In the case of using an LSTM, which is what we are doing in the experiments, we are modifying both h and c according to the attention weights, so m=(h, c). In Eq. 10, h stands for the hidden states only. We modified Section 5.1 to make these differences between h and m clearer. \n\nThanks again for your precious comments!\n", "Thanks for your review and kind comments. In order to make the motivations and explanations to syntactic distance clearer, Section 4.2 has been rewritten accordingly to include the points we’ve mentioned here. \n\nThe syntactic distance (d value) is motivated by trying to learn a scalar which indicates how semantically close each pair of words is. Our basic hypothesis is that words in the same constituent should have closer syntactic relation within themselves, and the syntactical proximity can be represented by a scalar value. From the tree structure point of view, the distance can be interpreted as positively correlated with the shortest path in the tree (in terms of the number of edges) between the two words. Syntactically the closer the two words are, the shorter this distance will be. Further, with the proof in Appendix C, we proved that by just using this scalar distance, a valid tree can be inferred. \n\nMathematically the syntactic distance can also be naturally introduced from the stick breaking process, as a parametrization of \\alpha in Eq. 6.\n\nFrom the viewpoint of computational linguistics, we did an extensive search and found some related work which tries to identify the beginning and ending words by just using local information, for example, Roark & Hollingshead, (2008). We have cited this work in the updated version.\n\nThanks again for your kind review!\n", "This is really an interesting paper, which models the syntactic structure in a clever way. 
In the implementation (sec 5.1), an LSTMN is used to perform the recurrent update, where the syntactic gates g_i^t are used to modulate the content-based attention. So, I was wondering how much the model actually benefits from the syntactic modulation. Specifically, what the performance will be like without the syntactic modulation, i.e., with a standard LSTMN. \n\nP.S. I've checked the original LSTMN paper, but the experiment setting (network size, hyper-parameters etc.) is different there.", "Thank you for enlightening comments.\n\nRegarding \"marginalizing over g\":\nAs discussed in section 4.1 and Appendix B, we replace discrete g by its expectation. Thus, we can have a computationally less expensive approximation for p(x_t+1|x0...x_t).\n\nRegarding \"linguistic theory for 'syntactic distance'\"\nThe idea of using a \"syntactic distance\" is inspired by the binary parse tree, which is related to linguistic theory. We introduced the \"syntactic distance\" while trying to render the binary parse tree into a learnable, soft tree structure in the framework of language modeling. So it can be deemed as a set of boundaries which defines the binary parse tree.\n\nRegarding \"try other activation functions\"\nThank you for this enlightening comment. We recently tried to replace sigmoid by ReLU, which makes the model achieve more stable performance regardless of different temperature parameter \\tau.\n\nRegarding \"try any word embedding\"\nIn this experiment, we want to prove the model's ability to learn from scratch, but pretrained word embedding can contain syntactic information. We will use word embedding in future work that focuses on obtaining better syntactic distance.\n\nThis article will be further revised and polished according to your suggestions.\n\n", "The paper proposes a very cute idea. My questions/comment: \n\n1. can you compute p(x_t+1|x0...x_t) (eq 3) by marginalising over g?\n\n2. is the idea of using \"syntactic distance\" related to any linguistic theory? \n\n3. I think eq 5 has a typo: is it g_i or g_t'? \n\n4. the last line on page 4: d_{K-1} (capital K). Shouldn't d index start from 1? (you say that there are K-1 variables)\n\n5. Eq 11: I think d doesn't need to be in [0, 1]. Did you try other activation functions (e.g Tanh)?\n\n6. The line right after Eq 12: shouldn't t+1 be superscript? (It's better to be coherent with the notations above)\n\n7. In 6.3, did you try any word embedding? As suggested by [1], word embeddings can be very helpful. \n\n\n1. Le & Zuidema. Unsupervised Dependency Parsing: Let’s Use Supervised Parsers\n\n" ]
[ 7, 8, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkgOLb-0W", "iclr_2018_rkgOLb-0W", "iclr_2018_rkgOLb-0W", "rkX6IwPzM", "rkJwIctlf", "ByUN4M9xM", "HkLBrbaef", "iclr_2018_rkgOLb-0W", "SJdy1V01z", "iclr_2018_rkgOLb-0W" ]
iclr_2018_rk6cfpRjZ
Learning Intrinsic Sparse Structures within Long Short-Term Memory
Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59x speedup without losing any perplexity on a language modeling task over the Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering on the SQuAD dataset. Our approach is successfully extended to non-LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available.
accepted-poster-papers
The reviewers really liked this paper. This paper presents a tweak to the LSTM cell that introduces sparsity, thus reducing the number of parameters in the model. The authors show that their sparse models match the performance of the non-sparse baselines. While the results are not state-of-the-art and come from vanilla implementations of standard models, this is still of interest to the community.
train
[ "SJ15MyGeG", "SySivF84M", "B15mUMdef", "r14eWGtez", "SJsqUdpzG", "Hyt62VvGG", "SyjB0Vvfz", "SkkbFvaGz", "By6GmBPGz", "rkmp_naTZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Quality: \nThe motivation and experimentation is sound.\n\nOriginality:\nThis work is a natural follow up on previous work that used group lasso for CNNs, namely learning sparse RNNs with group-lasso. Not very original, but nevertheless important.\n\nClarity:\nThe fact that the method is using a group-lasso regularization is hidden in the intro section and only fully mentioned in section 3.2 I would mention that clearly in the abstract.\n\nSignificance:\nLeaning small models is important and previous sparse RNN work (Narang, 2017) did not do it in a structured way, which may lead to slower inference step time. So this is an investigation of interest for the community.\n\nMinor comments:\n- One main claim in the paper is that group lasso is better than removing individual weights, yet not experimental evidence is provided for that.\n- The authors found that their method beats \"direct design\". This is somewhat unintuitive, yet no explanation is provided. ", "Increasing my score as this strengthens the work.", "The paper spends lots of (repeated) texts on motivating and explaining ISS. But the algorithm is simple, using group lasso to find components that are can retained to preserve the performance. Thus the novelty is limited.\n\nThe experiments results are good.\n\nSec 3.1 should be made more concise. ", "The authors propose a technique to compress LSTMs in RNNs by using a group Lasso regularizer which results in structured sparsity, by eliminating individual hidden layer inputs at a particular layer. The authors present experiments on unidirectional and bidirectional LSTM models which demonstrate the effectiveness of this method. The proposed techniques are evaluated on two models: a fairly large LSTM with ~66.0M parameters, as well as a more compact LSTM with ~2.7M parameters, which can be sped up significantly through compression.\nOverall this is a clearly written paper that is easy to follow, with experiments that are well motivated. To the best of my knowledge most previous papers in the area of RNN compression focus on pruning or compression of the node outputs/connections, but do not focus as much on reducing the computation/parameters within an RNN cell. I only have a few minor comments/suggestions which are listed below:\n\n1. It is interesting that the model structure where the number of parameters is reduced to the number of ISSs chosen from the proposed procedure does not attain the same performance as when training with a larger number of nodes, with the group lasso regularizer. It would be interesting to conduct experiments for a range of \\lambda values: i.e., to allow for different degrees of compression, and then examine whether the model trained from scratch with the “optimal” structure achieves performance closer to the ISS-based strategy, for example, for smaller amounts of compression, this might be the case?\n\n2. In the experiment, the authors use a weaker dropout when training with ISS. Could the authors also report performance for the baseline model if trained with the same dropout (but without the group LASSO regularizer)?\n\n3. The colors in the figures: especially the blue vs. green contrast is really hard to see. It might be nicer to use lighter colors, which are more distinct.\n\n4. The authors mention that the thresholding operation to zero-out weights based on the hyperparameter \\tau is applied “after each iteration”. What is an iteration in this context? An epoch, a few mini-batch updates, per mini-batch? Could the authors please clarify.\n\n5. 
Clarification about the hyperparameter \\tau used for sparsification: Is \\tau determined purely based on the converged weight values in the model when trained without the group LASSO constraint? It would be interesting to plot a histogram of weight values in the baseline model, and perhaps also after the group LASSO regularized training.\n\n6. Is the same value of \\lambda used for all groups in the model? It would be interesting to consider the effect of using stronger sparsification in the earlier layers, for example.\n\n7. Section 4.2: Please explain what the exact match (EM) and F1 metrics used to measure performance of the BIDAF model are, in the text. \n\nMinor Typographical/Grammatical errors:\n- Sec 1: “... in LSTMs meanwhile maintains the dimension consistency.” → “... in LSTMs while maintaining the dimension consistency.”\n- Sec 1: “... is public available” → “is publically available”\n- Sec 2: Please rephrase: “After learning those structures, compact LSTM units remain original structural schematic but have the sizes reduced.”\n- Sec 4.1: “The exactly same training scheme of the baseline ...” → “The same training scheme as the baseline ...”", "To follow up the concerns on the ptb dataset.\n\nWe have added state-of-the-art model for ptb to show the generalizability of our approach. We select Recurrent Highway Networks. Please refer to Table 1 in the paper (https://arxiv.org/pdf/1607.03474.pdf) for the state-of-the-art models on ptb. \"Variational RHN + WT\" is the one we used as the baseline. \nOur results are covered in Table 2 in our paper. In a nutshell, our approach can reduce the RHN width (the number of units per layer) from 830 to 517 without losing perplexity.", "Thanks for reviewing.\n\n1. ISS approach can learn an “optimal” structure whose accuracy is better than the same “optimal” model but trained without using group Lasso.\nFor example, in Table 1, the first model learned by ISS approach has “optimal” structure with hidden sizes of (373, 315), and its perplexity is better than the same model (in the last row) but trained without using group Lasso regularization.\n\n2. With the same dropout (keep ratio 0.6), the baseline model overfits, and, with early stop, the best validation perplexity is 97.73 which is worse than the original 82.57.\n\n3. Changed green to yellow.\n\n4. Per mini-batch\n\n5. Yes, \\tau is determined purely based on the trained model without group LASSO regularization. No training is needed to select it.\nThanks for sharing this thought. Histogram is added in Appendix C. Instead of plotting the histogram of all weights, we plot the histogram of vector lengths of each “ISS weight groups”. We suppose it is more interesting because group Lasso essentially squeezes the length of each vector. The plot shows that the histogram is shifted to zeros by group Lasso regularization.\n\n6. Yes, to reduce the number of hyper-parameters, an identical \\lambda is used for all groups.\nWe tried to linearly scale the strength of regularization on each group by the vector length of the “ISS group weight” as used by Alvarez et al. 2016, however, it didn’t help to improve sparsity in our experiments.\n\n7. We now add the reference of the definition of EM and F1 (Rajpurkar et al. 2016) into the paper:\n“Exact match. This metric measures the percentage of predictions that match any one of the ground truth answers exactly.”\n“(Macro-averaged) F1 score. This metric measures the average overlap between the prediction and ground truth answer. 
We treat the prediction and ground truth as bags of tokens, and compute their F1. We take the maximum F1 over all of the ground truth answers for a given question, and then average over all of the questions.”\n\n8. Corrected. Thanks for so many useful details.\n\nPaper are revised based on the comments.", "Thanks for reviewing.\n\nWe have made sec 3.1 as concise as possible. We have moved some to the Appendix A. \n\nThe key novelty/contribution of the paper is to identify the structure inside RNNs (including LSTMs and RHNs) that shall be considered as a group (\"ISS\") to most effectively explore sparsity. Once the group is identified, using group lasso becomes intuitive. That is why we describe ISS, the structure of the group, in details and illustrate the intuitions and analysis behind it. We clarified this in the revision.", "Hi Aaron,\n\nWe have added state-of-the-art model for ptb to show the generalizability of our approach. In limited time, we select Recurrent Highway Networks for fast evaluation since it is open source here https://github.com/julian121266/RecurrentHighwayNetworks. You may refer to Table 1 in the paper (https://arxiv.org/pdf/1607.03474.pdf) for the state-of-the-art models on ptb. \"Variational RHN + WT\" is the one we used as the baseline. \nOur results are covered in Table 2 in our paper. In a nutshell, our approach can reduce the RHN width from 830 to 517 without losing perplexity.\n\nThanks.", "Thanks for reviewing.\n\nTo clarity: \nWe have mentioned group Lasso in the abstract. However, please note that, any structured sparsity optimization can be integrated into ISS, like group connection pruning based on the norm of the group (as used by Hao Li et. al. 2017 ).\n\nTo minor comments:\n- The speedup vs sparsity is added in Fig. 1, to quantitatively justify the gain of structured sparsity over non-structured sparsity.\n- In our context, \"direct design\" refers to using the same network architecture but with smaller hidden sizes. The comparison is in Table 1.\n- We are working on a better ptb baseline -- the RHN model (https://arxiv.org/abs/1607.03474), to solve the concerns on ptb dataset. The training takes time, but we will post our results as soon as the experiments are done. However, our results on SQuAD may reflect that the approach works in general.\n\nThanks!", "If you are going to use the PTB dataset for your language modeling experiments, it would help if you use a newer baseline than the 2014 Zaremba paper. It would be better to cite \"On the State of the Art of Evaluation in Neural Language Models\" from July 2017. (https://arxiv.org/pdf/1707.05589.pdf) They report a perplexity of 59.6 using a single-layer LSTM with 10 million parameters." ]
[ 7, -1, 6, 7, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rk6cfpRjZ", "SJsqUdpzG", "iclr_2018_rk6cfpRjZ", "iclr_2018_rk6cfpRjZ", "By6GmBPGz", "r14eWGtez", "B15mUMdef", "rkmp_naTZ", "SJ15MyGeG", "iclr_2018_rk6cfpRjZ" ]
iclr_2018_ry018WZAZ
Deep Active Learning for Named Entity Recognition
Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive since it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and word encoders and a long short-term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than the best performing models. We carry out incremental active learning during the training process and are able to nearly match state-of-the-art performance with just 25\% of the original training data.
accepted-poster-papers
The reviewers liked this paper quite a bit. The novelty seems modest and the results are limited to a fairly simple NER task, but there is nothing wrong with the paper, hence recommending acceptance.
train
[ "HklWpb5lz", "S1G9tRFgG", "ry7bzE8eG", "S1S83QJVz", "B1cNJbz7G", "H1Y_RgzXG", "B1M7plzmz", "rJJs3gG7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper introduces a lightweight neural network that achieves state-of-the-art performance on NER. The network allows efficient active incremental training, which significantly reduces the amount of training data needed to match state-of-the-art performance.\n\nThe paper is well-written. The ideas are simple, but they seem to work well in the experiments. Interestingly, LSTM decoder seems to have slight advantage over CRF decoder although LSTM does not output the best tag sequence. It'd be good to comment on this.\n\n* After rebuttal\nThank you for your response and revision of the paper. I think the empirical study could be useful to the community.", "This paper studies the application of different existing active learning strategies for the deep models for NER.\n\nPros:\n* Active learning may be used for improving the performance of deep models for NER in practice\n* All the proposed approaches are sound and the experimental results showed that active learning is beneficial for the deep models for NER\n\nCons:\n* The novelty of this paper is marginal. The proposed approaches turn out to be a combination of existing active learning strategies for selecting data to query with the existing deep model for NER. \n* No conclusion can be drawn by comparing with the 4 different strategies.\n\n======= After rebuttal ================\n\nThank you for the clarification and revision on this paper. It looks better now.\n\nI understand that the purpose of this paper is to give actionable insights to the practice of deep learning. However, since AL itself is a meta learning framework and neural net as the base learner has been shown to be effective for AL, the novelty and contribution of a general discussion of applying AL for deep neural nets is marginal. What I really expected is a tightly-coupled active learning strategy that is specially designed for the particular deep neural network structure used for NER. Apparently, however, none of the strategies used in this work is designed for this purpose (e.g., the query strategy or model update strategy should at least reflex some properties of deep learning or NER). Thus, it is still below my expectation. \n\nAnyway, since the authors had attempted to improved this paper, and the results may provide some information to practice, I would like to slightly raise my rating to give this attempt a chance.\n\n", "Summary:\nThis paper applies active learning to a deep neural model (CNN-CNN-LSTM) for named-entity recognition, which allows the model to match state-of-the-art performance with about 25% of the full training data.\n\nStrength:\nThe paper is relatively easy to follow. Experiments show significant reduction of training samples needed.\n\nWeaknesses:\nAbout half of the content is used to explain the CNN-CNN-LSTM architecture, which seems orthogonal to the active learning angle, except for the efficiency gain from replacing the CRF with an LSTM.\n\nThe difference in performance among the sampling strategies (as shown in Figure 4) seems very tiny. Therefore, it is difficult to tell what we can really learn from these empirical results.\n\nOther questions and comments:\nIn Table 4: Why is the performance of LSTM-LSTM-LSTM not reported for OntoNotes 5.0, or was the model simply too inefficient? \n\nHow is the variance of the model performance? At the early stage of active learning, the model uses as few as 1% of the training samples, which might cause large variance in terms of dev/test accuracy. 
\n\nThe SUBMOD method is not properly explained in Section 4. As one of the active learning techniques being compared in experiments, it might be better to formally describe the approach instead of putting it in the appendix.\n", "Thank you for such detailed response and for adding the new experiment results. Overall, I believe this kind of empirical results on applying active learning to deep NLP models would be useful to the community.", "We would like to thank Reviewer 2 for taking the time to leave a clear and thoughtful review. We have improved the draft per your comments and would like to reply to each specific point in turn:\n\n1. While the architecture and active learning contributions may appear on a technical level to be orthogonal contributions, in practice they are closely linked. Active learning requires frequent retraining of the model, during which time new annotations cannot be collected; faster training speed makes more frequent training of the model affordable, which in turn allows collection of more informative labels. Whereas the use of recurrent network in word-level encoder and chain CRF in tag decoder have been considered as a standard approach in the literature (from Collobert et al 2011 to more recent papers like Lample et al 2016, Chiu and Nichols 2015, Yang et al 2016), we demonstrate that convolutional word encoder and LSTM decoder provides 4x speedup in training time with very minimal loss of performance in terms of F1 score.\n\nPer your criticism and to reinforce the significance of the speedup, we have implemented standard CNN-LSTM-CRF model and added its training speed to Table 4, so that the magnitude of the speedup could be demonstrated.\n\n2. We have also added LSTM-LSTM-LSTM experiment on OntoNotes 5.0-English in Table 4. We did not run this experiment in our initial draft since we focus on finding computationally efficient architectures, and it was clear from CoNLL-2003 English experiments that LSTM-LSTM-LSTM is much more computationally expensive than other competitors; but we agree that this result is still informative.\n\n3. In response to your question about variance, we have replicated the active learning experiments multiple times, and added a learning curve plot with error bars; please refer to Appendix B and Figure 6 of the updated paper. While learning curves from active learning methods are indeed close to each other, there is a noticeable trend that MNLP (our proposal) and BALD outperforms traditional LC in early rounds of data acquisition. Also note that MNLP we propose is much more computationally efficient because it requires only a single forward pass, whereas BALD requires multiple forward passes.\n\n4. We agree with your point that the SUBMOD method ought to be explained briefly in the main text and not relegated only to the Appendix. Per your criticism, we’ve added a paragraph to section 4 describing the approach. \n", "Thank you for taking the time to review our paper. We are glad that you recognize the soundness of the approaches and the experimental evaluation. \n\nWe would like to address the reviewer’s main concerns:\n\n1. Regarding “No conclusion can be drawn by comparing with the 4 different strategies”:\n\nOur experiments demonstrate that active learning can produce results comparable to the state of the art methods while using only 25% of the data. Moreover, we demonstrate that combining representativeness-based submodular optimization methods with uncertainty-based heuristics conferred no additional advantage. 
\n\nWe believe that these are in and of themselves significant conclusions. We agree that the Bayesian approach (BALD) and least confidence approaches (LC and MNLP) produce similar learning curves, but we do not believe that this renders the result inconclusive. To the contrary, parity in performance strongly favors the least confidence approaches owing to their computational advantages; BALD requires multiple forward passes to produce an uncertainty estimate, whereas LC and MNLP require only a single forward pass. \n\nWe have added error bars to our learning curve (Appendix B, Figure 6) in our updated paper, and it can be seen that there is a noticeable trend which MNLP (our proposal) and BALD outperform traditional LC in early active learning rounds.\n\n2. Regarding the novelty of the paper:\n\nIn this paper, we explore several methods for active learning. These include uncertainty-based heuristics, which we emphasize in the main text and which yield compelling experimental results. We also include a representativeness-based sampling method using submodular optimization that is reported in the main text and described in detail in the appendix. In the revised version we have included some of this detail in the main body of the paper. \n\nIt so happens that the simpler approaches outperform our more technically complex contributions. The tendency not to publish compelling results produced by simple methods creates a strong selection bias that might misrepresent the state of the art and can encourages authors to overcomplicate methodology unnecessarily. While the winning method in our paper doesn’t offer substantial mathematical novelty, we hope that the reviewer will appreciate that our work demonstrates empirical rigor. Our results give actionable insights to the deep learning and NLP communities that are not described in the prior literature.\n", "We’d like to thank Reviewer 3 for a thoughtful and constructive review of our paper. We are glad that the reviewer recognized the importance of using a lightweight network to support the incremental training required for our active learning strategy. \n\nThe reviewer asks an interesting question; “LSTM decoder seems to have slight advantage over CRF decoder although LSTM does not output the best tag sequence. It'd be good to comment on this.”\n\nIndeed, our greedily decoded tag sequences from the LSTM decoder may not be the best tag sequence with respect to the LSTM’s predictions whereas the chain CRF can output the best sequence with respect to its predictions via dynamic programming. There are a few important points to make here:\n\n1. We experimented with multiple beam sizes for LSTM decoders, and found that greedy decoding is very competitive to beam search; beam search with size 16 was only marginally better than greedily decoded tags in terms of F1 score. Producing the best sequence is equivalent to setting an arbitrarily large beam width. Out experiments indicate that empirically, increasing the beam width beyond 2 helps only marginally. This suggests that the intractability of finding best sequence from the LSTM decoder is not a significant practical concern. We added these results, plotting the F1 score vs beam width in Appendix A of the updated paper.\n\n2. The LSTM decoder is a more expressive model than the chain CRF model since it models long-range dependencies (vs a single step for CRF), and thus the LSTM’s best sequence may be considerably better than the best sequence according to the CRF. 
So even if we do not get the very best sequence from the LSTM, it can still outperform the CRF. Indeed, we found in our experiments that when the encoder is fixed, our LSTM decoder outperforms our CRF decoder (Table 3 and 4, compare CNN-CNN-CRF against CNN-CNN-LSTM).\n", "We would like to thank the reviewers for taking the time to review our paper and give thoughtful comments. We are encouraged to see that two reviewers rate the paper as above the acceptance threshold and that all three reviewers recognized the clear demonstration of the empirical benefits of applying active learning when training deep models for named entity recognition. The reviewers also left some insightful critical comments. We have addressed these in the revised draft and responded to each reviewer in the respective threads." ]
[ 6, 6, 7, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry018WZAZ", "iclr_2018_ry018WZAZ", "iclr_2018_ry018WZAZ", "B1cNJbz7G", "ry7bzE8eG", "S1G9tRFgG", "HklWpb5lz", "iclr_2018_ry018WZAZ" ]
iclr_2018_Syg-YfWCW
Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning
Knowledge bases (KB), both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. Since random walks are impractical in a setting with unknown destination and combinatorially many paths from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. On a comprehensive evaluation on seven knowledge base datasets, we found MINERVA to be competitive with many current state-of-the-art methods.
accepted-poster-papers
Good contribution. There was a (heated) debate over this paper but the authors stayed calm and patiently addressed all comments and supplied additional evaluations, etc.
train
[ "rJPLu4kff", "ByeDuWqEG", "SymbMC2xf", "SJdS6W2ef", "H1-nFopxM", "SJcDk8FzG", "rkQzTQTmf", "r1QV2maXG", "SkOUZl6Xf", "Hk6agA5fM", "ByjZWQ9zf", "By1YR_Kzz", "ry7CRLKfM", "Skn0jadzf", "rJtRtNkfM", "HkOqr4kff", "S1frbpDZM", "B1ZpM7FZM", "HkEYGpulf", "H1tRWqdgf", "BJhGlT8JM", "rJ3lxa_1M", "Hkcn5-Dyz", "SJCU15BJf", "H18mNA1kz", "rkFp3JaC-" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "public", "author", "public", "author", "author", "author", "public", "public", "public", "author", "author", "author", "public", "public", "author", "public" ]
[ "Thank you for your helpful reviews!\n\nYou raised an interesting point regarding the performance of MINERVA on KGs with large number of relation types. For a fair comparison, we ran query answering (not fact prediction) experiments on NELL-995 and compared to our implementation of DistMult (which does very well on FB15k-237). DistMult achieves a score of 79.5 whereas MINERVA achieves a score of 82.73. Another important point to note is that MINERVA is much more efficient at inference time. NELL has ~75k entities and algorithms such as DistMult have to rank against all entities to get the final score. However MINERVA just has to walk to the right answer. This can be seen by comparing the wall-clock running times 35 secs wrt 115 secs - (sec 4.1 of the paper Query Answering on NELL-995)\nThis empirically shows that MINERVA works for relations with many relation types. We would additionally like to point out that MINERVA does well on WikiMovies. In WikiMovies the queries are partially structured and are in natural language. Hence the number of query types are actually quite large (and potentially unbounded). This also supports our claim. Thanks for the excellent suggestion again. \n\nWe also have updated the paper with a detailed analysis of the negative results on Fb15k-237 (sec 4.1) and more importantly, how this dataset differs from other KG datasets.\n", "We appreciate your revision of scores and we thank you for your helpful reviews once again!", "The paper present a RL based approach to walk on a knowledge graph to answer queries. The idea is novel, the paper is clear in its exposition, and the authors provide a number of experimental comparisons with prior work on a variety of datasets . \n\nPros:\n1. The approach is simple (no pre-training, no reward shaping, just RL from scratch with terminal reward, uses LSTM for keeping track of past state), computationally efficient (no computation over the full graph), and performs well in most of the experiments reported in the paper. \n2. It scales well to longer path lengths, and also outperforms other methods for partially structured queries.\n\nCons:\n1. You should elaborate more on the negative results on FB15K and why this performance would not transfer to other KB datasets that exist. This seems especially important since it's a large scale dataset, while the datasets a)-c) reported in the paper are small scale. \n2. It would also be good to see if your method also performed well on the Nations dataset where the baselines performed well. That said, if its a small scale dataset, it would be preferable to focus on strengthening the experimental analysis on larger datasets.\n3. In Section 4.2, why have you only compared to NeuralLP and not compared with the other methods? \n\nSuggestions/Questions:\n1. In the datatset statistics, can you also add the average degree of the knowledge graphs, to get a rough sense of the difficulty of each task.\n2. The explanation of the knowledge graph and notation could be made cleaner. It would be easier to introduce the vertices as the entities, and edges as normal edges with a labelled relation on top. A quick example to explain the action space would also help.\n3. Did you try a model where instead of using A_t directly as the weight vector for the softmax, you use it as an extra input? Using it as the weight matrix directly might be over regularizing/constraining your model. \n\nRevision: I appreciate the effort by the authors to update the paper. 
All my concerns were adequately addressed, plus improvements were made to better understand the comparison with other work. I update my review to 7: Good paper, accept.", "The paper proposes an approach for query answering/link prediction in KBs that uses RL to navigate the KB graph between a query entity and a potential answer entity. The main originality is that, unlike random walk models, the proposed approach learns to navigate the graph while being conditioned on the query relation type.\n\nI find the method sound and efficient and the proposed experiments are solid and convincing; for what they test for.\n\nIndeed, for each relation type that one wants to be testing on, this type of approach needs many training examples of pairs of entities (say e_1, e_2) connected both by this relation type (e_1 R e_2) and by alternative paths (e_1 R' R'' R''' e_2). Because the model needs to discover and learn that R <=> R ' R'' R''' .\n\nThe proposed model seems to be able to do that well when the number of relation types remains low (< 50). But things get interesting in KBs when the number of relation types gets pretty large (hundreds / thousands). Learning the kind of patterns described above gets much trickier then. The results on FB15k are a bit worrying in that respect. Maybe this is a matter of the dataset FB15k itself but then having experiments on another dataset with hundreds of relation types could be important. \n\nNELL has indeed 200 relations but if I'm not mistaken, the NELL dataset is used for fact prediction and not query answering. And as noted in the paper, fact prediction is much easier.\n\n", "The paper proposes a new approach (Minerva) to perform query answering on knowledge bases via reinforcement learning. The method is intended to answer queries of the form (e,r,?) on knowledge graphs consisting of dyadic relations. Minerva is evaluated on a number of different datasets such as WN18, NELL-995, and WikiMovies.\n\nThe paper proposes interesting ideas to attack a challenging problem, i.e., how to perform query answering on incomplete knowledge bases. While RL methods for KG completion have been proposed recently (e.g., DeepPath), Minerva improves over these approaches by not requiring the target entity. This property can be indeed be important to perform query answering efficiently. The proposed model seems technically reasonable and the paper is generally written well and good to understand. However, important parts of the paper seem currently unfinished and would benefit from a more detailed discussion and analysis.\n\nMost importantly, I'm currently missing a better motivation and especially a more thorough evaluation on how Minerva improves over non-RL methods. For instance, the authors mention multi-hop methods such as (Neelakantan, 2015; Guu, 2015) in the introduction. Since these methods are closely related, it would be important to compare to them experimentally (unfortunately, DeepPath doesn't do this comparison either). For instance, eliminating the need to pre-compute paths might be irrelevant when it doesn't improve actual performance. Similarly, the paper mentions improved inference time, which indeed is a nice feature. However, I'm wondering, what is the training time and how does it compare to standard methods like ComplEx. Also, how robust is training using REINFORCE?\n\nWith regard to the experimental results: The improvements over DeepPath on NELL and on WikiMovies are indeed promising. 
I found the later results the most convincing, as the setting is closest to the actual task of query answering. However, what is worrying is that Minerva doesn't do well on WN18 and FB15k-237 (for which the results are, unfortunately, only reported in the appendix). On FB15k-237 (which is harder than WN18 and arguably more relevant for real-world scenarios since it is a subset of a real-world knowledge graph), it is actually outperformed by the relatively simple DistMult method. From these results, I find it hard to justify that \"MINERVA obtains state-of-the-art results on seven KB datasets, significantly outperforming prior methods\", as stated in the abstract.\n\nFurther comments:\n- How are non-existing relations handled, i.e., queries (e,r,x) where there is no valid x? Does Minerva assume there is always a valid answer?\n- Comparison to DeepPath: Did you evaluate Minerva with fixed embeddings? Since the experiments in DeepPath used fixed embeddings, it would be important to know how much of the improvements can be attributed to this difference. \n- The experimental section covers quite a lot of different tasks and datasets (Countries, UMLS, Nations, NELL, WN18RR, Gridworld, WikiMovies) all with different combinations of methods. For instance, countries is evaluated against ComplEx,NeuralLP and NTP; NELL against DeepPath; WN18RR against ConvE, ComplEx, and DistMult; WikiMovies against MemoryNetworks, QA and NeuralLP. A more focused evaluation with a consistent set of methods could make the experiments more insightful.", "I do not believe your MR results on WN18 and FB15k-237. There are two main reasons:\n\n+ You compare your model with ConvE, Complex and DistMult. MR of ConvE, Complex and DistMult are 7323, 5261 and 5110 on WN18RR and 330, 248 and 254 on FB15-237 (shown in [1]). You absolutely know this but you hide these results. Other similar results are also reported in [2]. But your MR results of 10 on WN18 and 18 on FB15k-237 is unbelievable because there is a too big difference between results. \n\n+ Lower MR shows better performance. You did not tell the reviewers that WN18RR is a subset of WN18 and FB15k-237 is a subset of FB15k. As mentioned by [1], the test sets of WN18 and FB15k contain mostly reversed triples that are present in the training set. So WN18RR and FB15k-237 are created for the link prediction task much more realistic and challenging. So, it's natural that the MR values on WN18RR and FB15k-237 are higher than those on WN18 and FB15k (see MR results on WN18 and FB15k in [2,3,4]). But your MR values of 10 on WN18RR and 18 on FB15k-237 are significantly lower than state-of-the-art values on WN18 and FB15k.\n\nTwo above reason make me that I also do not believe in your MRR and Hits@10 results. \n\n[1] Convolutional 2D Knowledge Graph Embeddings\n[2] Revisiting Simple Neural Networks for Learning Representations of Knowledge Graphs\n[3] Knowledge Base Completion: Baselines Strike Back\n[4] An overview of embedding models of entities and relationships for knowledge base completion\n", "We have updated the paper with a new experiment which explicitly compares with a neural multi-hop (path) model which is designed to work in the query answering setting (when the target entity is unknown). Please refer to sec 4.2 of the paper. Thanks!", "We have updated the paper with a new experiment which explicitly compares with a neural multi-hop (path) model which is designed to work in the query answering setting (when the target entity is unknown). 
Please refer to sec 4.2 of the paper. Thanks!", "Thank you for your helpful reviews!\n\na) Comparison with multi-hop models: Thanks to your suggestion, we have updated the paper (sec 4.2) with a new experiment which explicitly compares to non-RL neural multihop models which precomputes a set of paths. Starting from the source entity, the model featurizes a set of paths (using a LSTM) and max-pools across them. This feature vector is then concatenated with the query relation and then fed to a feed forward network to score each target entity. This model is similar to that of Neelakantan et al. (2015) except for the fact that it was originally designed to work between a fixed set of source and target entity pair, but in our case the target entity is unknown. The model (baseline) is trained with a multi-class cross entropy objective based on observed triples and during inference we rank target entities according the the score given by the model.\nAs we can see, MINERVA outperforms this model in both freebase and NELL suggesting that RL based approach can effectively reduce the search space and focus on paths relevant to answer the query. Please see sec 4.2 for additional details and results.\n\nb) Training time and robustness of the model:- We actually found MINERVA to be very robust during training. We were able to achieve the reported results without much tuning and they are also very easy to reproduce. However to quantify the results, we report the variance of the results across three independent runs on the freebase and nell datasets. Also we report the learning curve of score on the development set wrt time. (Please see sec 4.5 of the paper)\n\nc) Regarding peformance on WN18RR and FB15k-237: \nMINERVA actually achieves state-of-the-art in the WN18RR dataset. \nOn FB15k-237, MINERVA matches with all baseline model and is outperformed only by DistMult. We have updated the paper with a detailed analysis of the negative results on Fb15k-237 (sec 4.1) and more importantly, how this dataset differs from other KG datasets. (To summarize, FB15k-237 has very (i) low clustering coefficient (ii) the path types don't repeat that often (iii) has a lot of 1-Many query relations.)\n\nd) Query Answering experiment on NELL-995: - We also added a query answering (not fact prediction) experiment on NELL-995 and compared to our implementation of DistMult (which does very well on FB15k-237). DistMult achieves a score of 79.5 whereas MINERVA achieves a score of 82.73. Another important point to note is that MINERVA is much more efficient at inference time. NELL has ~75k entities and algorithms such as DistMult have to rank against all entities to get the final score. However MINERVA just has to walk to the right answer. This can be seen by comparing the wall-clock running times 35 secs wrt 115 secs - (sec 4.1 of the paper Query Answering on NELL-995)\n\nFurther comments:\n\na) How are non-existing relations handled, i.e., queries (e,r,x) where there is no valid x? Does Minerva assume there is always a valid answer? - That is a good point. Currently MINERVA does not support non existing relations and assumes there is always a valid answer. The ability to handle non-existing relations is definitely important and we plan to incorporate this in future work.\n\nb) Comparison to DeepPath: Did you evaluate Minerva with fixed embeddings? 
Since the experiments in DeepPath used fixed embeddings, it would be important to know how much of the improvements can be attributed to this difference\nWe actually tried both cases - train randomly initialized embeddings from scratch and using fixed pretrained embeddings. We achieved similar results in both cases. For fixed embeddings, the model converged faster but to a similar score. However for uniformity across experiments, we reported results where we trained the embeddings. \n\nc) Consistent baselines: We will update the paper to cover as many reported baselines as possible. However we have made sure to the best of our abilities to compare with the models which have current state of the art results on each dataset.\n", "It is incredible that the commenter continues to be so rude and misleading (should OpenReview have a moderating system?), and continues to frame this interaction as an attempt to convince *them* rather than to correct the constant series of willful misinterpretations and falsehoods that they manage to state about our work in every single interaction, in the hope that they do not mislead others. If they are unconvinced, they are free to not use our code or build on the work.\n\nWe use the same model, per dataset, for each evaluation metric. Our previous comment about training a new DistMult (which it could be noted is not our model, but in fact a baseline against which we are competing) WordNet model to report median rank was because we did not have our own model for WordNet DistMult to report median rank, and were using numbers from prior work for the other metrics. While we are touched by your concern that we may be attempting to make our baselines too powerful, we are following the standard procedure.\n\nContrary to the assertion, as we provided copious evidence for in the last response, \"mean rank\" is not a particularly standard metric for the task (with which the authors are extremely familiar), is unreported by many papers, and has problems of its own in over-weighting outliers and being heavily data-size dependent. This is in addition to its lack of meaning for our model. In our last comment, we described the situation in some detail, not for the hostile commenter's benefit, but for any passers by who could be mislead by the litany of false statements they have made at every step of this regrettable interaction.\n\nAt this point we urge the commenter to work on their own research rather than discuss ours, as they seem unable to do the latter without stating untruths about the work, or impugning the personal ethics and/or common sense of the authors, without having so much as properly read our evaluation protocol. \n\nRegardless, no further comments from this person will receive a response, as bad faith has been more than adequately demonstrated.\n", "It's my mistake when not seeing MR (median rank) carefully. I thought it's mean rank. I am so sorry for that you do not follow the standard metric (mean rank MR) when doing the knowledge base completion task.\n\nI agree with the aspect of question-answer. However, when you are trying to adapt your model to a new task, i.e., the knowledge base completion task that you are not familiar, your evaluation protocol is still not convinced.\n\n1.\tYou should report three metrics:Hits@k, MRR and Mean Rank MR. 
Because there are standard metrics for the task, then adding the median rank results if you want.\n\n2.\tA trained model is obtained after tuning the hyper-parameters on the validation set with respect to just one single metric (e.g., Hits@k or MRR). After that, the trained model is used to report Hits@k, MRR and mean rank MR on the test set. For each dataset, Hits@k, MRR and MR results are come from the *same* trained model.\n\n3. \tHowever, you always say \"training a new model\" to report addition results. It means that you tune hyper-parameters on the validation set with respect to each metric, i.e., for each dataset, *your results are come from different trained models*. This is not right to evaluate the knowledge base completion task and not fair when comparing with other previous models.\n\nSo, after all, even you add new results, your results are still not convinced to me.\n", "We thank the commenter for an unintentional suggestion to add the *median* rank results for WN18RR DistMult, which we did not have since it requires training a new model. Since, as reported in the same paper the commenter cites [1], DistMult on FB15-237 has reproducibility issues (which we also mention in a footnote), we trained our own model to report those numbers, and were easily able to evaluate *median* rank for comparison.\n\nAs for the suggestion that we falsified results by hiding numbers that we do indeed “absolutely know,” we refute these categorically, find them to be in extremely poor taste, and generally, think one should very carefully read the results section of a paper to be 100% sure before making such a serious accusation.\n\nBecause our model is for query-answering rather than pure triplet-scoring, we perform a beam search at inference time, which we clearly define at the beginning of 4.1. Accordingly, *mean* rank is not a good fine-grained measurement of model quality --- any answers that fall off the beam would have to pessimistically be put all the way at the bottom of the results list. We do not evaluate mean rank or include mean rank results for previous models, of which we are indeed aware, because the metric provides little insight for our model.\n\nMRR does not suffer from this problem, as low-ranked answers past a certain point contribute essentially 0, and *median* rank does not have this problem unless our model was so bad that it was unable to get the right answer in the beam (40) even half the time. So, we decided to report the median rank, as we very clearly stated, and defined the acronym of, at the beginning of 4.1. Since FB15k-237 is a dataset of broad interest and we had an implementation, we included this number for DistMult as well for comparison.\n\nWe are unaware that “MR” is an industry-standard notation for a “mean rank” evaluation, or indeed that it is even a particularly common evaluation at all outside of a few recent papers that you cite. We do not see it, for example, in this large list of metrics for information retrieval [3], or many books or course notes on IR (e.g. the Stanford IR book [4]). For example, the seminal Trans-E work [2] simply refers to it as “Mean Rank”, and the original paper creating FB15k-237 [5], does not even report it. However, we will change the name to “MedR” to avoid confusion. 
It never crossed our minds that any such confusion would result from using this abbreviation to denote *median*, since as you note, these numbers are 2 decimal orders of magnitude lower than reported in previous literature, while the rest of the metrics are middle-of-the-pack. In fact, we are reporting that DistMult beat us by 8 to 18 on this result. The same metric on which you accuse us of leaving out known, reported numbers for the other models --- to put it in plain English, of falsifying!\n\nRegarding the comment “You did not tell the reviewers that WN18RR is a subset of WN18 and FB15k-237 is a subset of FB15k”: We actually mention specifically in Section 3, Data, that WN18RR is a subset of the original WORDNET18 dataset, and cite relevant papers for the rest, though we don’t see why that has any bearing on anything, let alone meriting a conspiratorial accusation like “you did not tell the reviewers.”\n\nIn final, we will train a model and add *median* rank results for WN18RR DistMult using our implementation, for completeness, change the name to “MedR” to avoid any confusion, and perhaps add a note about the unsuitability of mean rank for evaluating a query-answering model on binary-labeled triplets. \n\nHad those been suggestions made in the comment, it would have been a helpful one. In the future, I think the commenter would be well-served by carefully reading a paper before accusing other scientists of falsification and suggested fabrication.\n\n[1] Convolutional 2D Knowledge Graph Embeddings\n[2] Translating Embeddings for Modeling Multi-relational Data\n[3] https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)\n[4] https://nlp.stanford.edu/IR-book/pdf/08eval.pdf\n[5] Representing Text for Joint Embedding of Text and Knowledge Bases\n", "I am sorry that you do not believe in the results but I don't understand your attitude.\nCan you please explain why you think it is unbelievable??? If hits@10 is larger than 0.5 \nit means the median rank (MR) will upper bounded by 10. Also most KB completion papers \ndon't use MR.", "Thank you for the comment. We have updated the paper with mean reciprocal rank (MRR) and median rank (MR) and compared with baselines whenever they were reported. For Countries and DeepPath, the evaluation metric is AUC-PR and MAP respectively and they are equivalent to MRR when there is one correct answer [1,2]. We outperform baselines on these metrics and hence the results are in agreement with Hits@k and accuracy,\n\n[1]https://stats.stackexchange.com/questions/127041/mean-average-precision-vs-mean-reciprocal-rank\n[2]https://stats.stackexchange.com/questions/157012/area-under-precision-recall-curve-auc-of-pr-curve-and-average-precision-ap\n", "We have moved the results to the experimental section of the paper (Sec 4.1) and also provided detailed analysis regarding the results. Thanks for the suggestion.", "Thank you for your helpful reviews. We have updated the paper with your suggestions.\n- We have updated the paper with a detailed analysis of the negative results on Fb15k-237 (sec 4.1) and more importantly, how this dataset differs from other KG datasets. (To summarize, FB15k-237 has very (i) low clustering coefficient (ii) the path types don't repeat that often (iii) has a lot of 1-Many query relations.)\n\n- In section 4.2 (Grid World), we actually compared to other baselines such as DistMult, but since they are not path based method their performance was very low. 
We decided to not report the results because it was making the plot look disproportionate. However for completion, here are the numbers - (Path length 2-4) 0.2365, (4-6) 0.1144, (6-8) 0.0808, (8-10) 0.0413\n\n- We have added the avg and mean degree of nodes of knowledge graphs in table 1. MINERVA performs well in KGs with both high/low out degree of nodes.\n\n- Yes!, we did consider using A_t as apart of the input but it comes with few complications w.r.t the implementation. First as the number and ordering of outgoing edges from a node varies, feeding A_t into the MLP is not straightforward. Also since the output probabilities should have support only on the outgoing edges (which are not uniquely determined by only the relations, but also the neighboring entity), the masking logic also becomes tricky. Finally, the excessive amount of parameters required this way might lead to overfitting. Since we were getting promising results with the simpler approach, we decided to continue with the first design choice.\n\n \n", "Your model exploits information about relation paths, so you should carefully denote the experimental results which previous models use the relation path information to avoid a confusion. See an example from [1]. It's not fair when you compare your model with other models not using the relation path information.\n\n[1] Knowledge Base Completion: Baselines Strike Back. Rudolf Kadlec and Ondrej Bajgar and Jan Kleindienst", "While I like the idea of exploiting relation path information in the KB to improve link predictions, I am not convinced by the experimental results: why are MR or MRR scores not reported in the paper? It is abnormal with respect to KB completion evaluation.\n\nIt should definitely be straightforward to compute MR and MRR scores while you already have a model optimized on Hits@10. \n\nI suspect the proposed model could not produce competitive MR or MRR results against the baselines? So, experiments cannot prove the proposed model to be useful.\n", "Quoting a paragraph of the author response:\n\n\"Lastly, we would like to emphasize that the community should be more welcoming to negative results and analysis. Comments such as this only discourages young researchers to not report negative results leading to bad science.\"\n\nIt is a great point to be brought up that *every* researcher should not be obstructed by negative results and the community should be more welcome to them. However, I think this is the point of the main comment, not vice versa. The comment made it clear that it is demanding negative results to be included in the main body of a paper and interesting discussion and analysis to be done and presented, not demanding to \"not submit a paper with negative results\". It is advocating for legit presentation of negative results instead of presenting them in supplementary, separated from positive ones.", "We agree that reporting negative results on a particular dataset is important and that's why we chose to include them in our submission. \ni) There has been a high variance of scores reported for FB15k-237 among many recent papers (ranging from low 40 hits@10 to ~53 hits@10). In fact, the results we currently get is comparable to state-of-the-art results reported in some recent papers. However, instead, we chose to compare to a very high score which our in-house implementation achieved.\nii) We politely disagree that FB15k-237 is a great dataset for comparing query answering performance. 
The presence of many 1-to-Many query relations, low clustering coefficient (Holland & Leinhardt, 1971; Watts & Strogatz, 1998) and low occurrence of various path types makes it less amenable for path based models and less interesting for query answering. We will update the next version of the paper with more detailed analysis.\niii) The 7 datasets where MINERVA achieves excellent results are i) Countries ii) UMLS iii) Kinship iv) WN18RR v) NELL-995 vi) WikiMovies vii) Grid World (important experiment since we test how MINERVA works for long paths)\n\nLastly, we would like to emphasize that the community should be more welcoming to negative results and analysis. Comments such as this only discourages young researchers to not report negative results leading to bad science.\n\nPaul W Holland and Samuel Leinhardt. Transitivity in structural models of small groups. Comparative Group Studies, 1971.\nDuncan J Watts and Steven H Strogatz. Collective dynamics of small-worldnetworks. nature, 1998.", "We wanted to address/clarify few minor mistakes which we found in the text of the paper which could possibly confuse the reviewers. We will definitely fix these in the next version of the paper.\n\n1. Introduction: We suggest that PRA (Lao et al., 2011) use the same set of collected path to answer diverse query types. PRA, given a query, keeps paths which are supported by at least a fraction of the training queries and are also bounded by a maximum length. Additionally, they are constrained to end at one of the target entities in the training set. These constraints make them query dependent but are heuristic in nature which is in contrast to MINERVA which learns to use the query relation to make decisions at every step.\n2. Sec 2.1 - Our environment is a finite horizon deterministic Markov decision model -> This should be finite horizon, deterministic and partially observed Markov decision process. \n3. Sec 2.1 - From the KB, a knowledge graph G can be constructed where the entities s,t are represented as the nodes -> the identifiers 's' and 't' should be 'e1' and 'e2’.\n4. Section 5 (Effectiveness of Remembering Path History) - We replaced the LSTM with a simple 2 layer MLP -> ‘replace’ might be confusing. In this ablation study, the policy network (2 layer MLP) makes decision based on only the local information (current entity, query). It does not have access to the entire history of decisions encoded by the LSTM.\n5. Figure 3 - Caption - The figure in the right -> The figure at the bottom. Also for consistency, the top figure (left) should have LSU grayed and figure (right) should have NBA grayed.\n6. Figure 1 - The edge from nodes USA to CA is wrongly labeled as 'has_city'. It should be instead labeled as 'has_state'.\n7. Figure 4 - This plot shows the frequency of occurrence of various unique paths (types) of length 3 which occur more than 'x' times in various datasets. Intuitively, a predictive path which generalizes across queries will occur many number of times in the graph. As we can see, the characteristics of FB15k-237 is quite different from other datasets. Path types do not repeat that often, making it hard for MINERVA to learn paths which generalizes.\n", "Thank you for the comment. Since we are doing query answering, during test time, we do not have the information about target entities. That would mean enumerating all paths starting from a source entity and training a classifier to choose one of the target entities that these paths end on. 
This seemed like a reasonable first step, but the main bottleneck is the huge number of paths that need to be considered. For instance, the avg. number of length 3 paths starting from an entity in the validation set of Fb15k-237 is 1,800,759 (1.8M). That means during inference, we need to gather around 1.8M paths for each query, compute features and choose from one of the end entities. Sub-sampling paths is another approach, but it is difficult to come up with a non-heuristic way of subsampling. Another drawback of this approach that we would like to point out, is that only a few paths are ‘predictive’ of a query relation and subsampling might easily lose them.\n\nYet another way of training using supervised approach would be to sample one path which leads to the target entity and another path which doesn’t and do a gradient update to favor the path which does reach. During inference time, we just sample from the model (RNN for example) a path and return the endpoint as the answer. This approach differs from our RL based approach in that during training we depend on the current model to sample the next path. This has the advantage of utilizing the information that the model has acquired by exploring the graphs till now. For example, it might have learned that a particular kind of path isn’t good for a query relation, even though it might lead to the target entity for this particular query. The RL approach will use that information and not select that path and would instead try to search a path which would generalize more. \n\nIn general, it is a good idea to explore around where the model is, instead of doing uniform sampling and has been well studied in contextual bandit settings (Dudik et al., 2011, Agarwal et al., 2014). While the proposal is not doing uniform exploration since it samples any correct path to the target, even this kind of \"uniform\" sampling can hurt generalization performance since it does not use the current model to help identify which good paths are representable. Thanks again!\n\nEfficient Optimal Learning for Contextual Bandits - Dudik et al., 2011\nTaming the Monster: A Fast and Simple Algorithm for Contextual Bandits - Agarwal et al., 2014", "I liked the idea of the paper but would like to emphasize that negative results on a particular dataset should be included in the main paper instead of the appendix. In this paper, the negative results on FB15K-237 were reported only in the appendix and no experiment results on FB15K-237 were reported in the main paper. FB15K-237 was listed in Table 1, \"Statistics of various datasets used in experiments\", therefore excluding it from the result session makes the results incomplete, and many interesting discussion and analysis that should have been done and presented were omitted as a result. It also makes the last sentence in the abstract \"MINERVA obtains state-of-the-art results on seven KB datasets, significantly outperforming prior methods\" a severe overclaim.\n\nMore importantly, FB15K-237 is a challenging knowledge graph modeling testbed curated by (Toutanova and Chen, 2015), which is more at-scale and more wildly studied than a few of the datasets the paper used (COUNTRIES, KINSHIP, UMLS). Inferior performance on this dataset could indicate serious flaws of the proposed model when applied on open-domain KGs.\n\nMany readers pay attention to the appendix only when necessary. The paper should include pointers to the appendix for contradictory results to this degree. 
Leaving the results in appendix without making additional claims creates a bias for reviewers and other readers, and should be strictly discouraged.\n\n", "A simple solution for this problem is supervised approach, i.e., treating the path from the source node to target node as *sequences* and training a RNN on these *sequences*. I am wondering how would the proposed approaches compare with the supervised version.\n\nIntuitively, the reinforced version over supervised version is its efficiency in getting positive rewards. But for this problem, enumerating the paths from the source nodes to target nodes sounds like a more efficient way. ", "Thank you for the comment. The performance of DistMult on FB15k-237 (56.8 HITS@10) are scores that we got with our own in-house implementation. To the best of our knowledge, it is better than all of the published scores for DistMult on this dataset (closest being 52.93 by Jain et al (2017) https://arxiv.org/pdf/1706.00637.pdf and 52.3 by Toutanova et al. (2015) http://cs.stanford.edu/~danqi/papers/emnlp2015.pdf). The results of ConvE and ComplEx models were taken from Dettmers et al. (2017) https://arxiv.org/pdf/1707.01476.pdf). We are aware of the high variance in scores reported for the DistMult model on FB15k-237 by many other papers, but we decided to report the results that we got for it. We will make this clear in the next version of the paper. Thanks again!", "Kudos for reporting negative results. Quick question: the results for the other methods on FB15k-237 are obtained by running code or copying results from previous work? I’m skeptical of/surprised by the 56.8 hits@10 for DistMult. " ]
[ -1, -1, 7, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SJdS6W2ef", "SymbMC2xf", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "SJCU15BJf", "S1frbpDZM", "H1-nFopxM", "ByjZWQ9zf", "ry7CRLKfM", "SJcDk8FzG", "SJcDk8FzG", "B1ZpM7FZM", "Hkcn5-Dyz", "SymbMC2xf", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "H1tRWqdgf", "Hkcn5-Dyz", "iclr_2018_Syg-YfWCW", "SJCU15BJf", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "rkFp3JaC-", "iclr_2018_Syg-YfWCW" ]
iclr_2018_Sk7KsfW0-
Lifelong Learning with Dynamically Expandable Networks
We propose a novel deep network architecture for lifelong learning which we refer to as the Dynamically Expandable Network (DEN), which can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge-sharing structure among tasks. DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them. We validate DEN in lifelong learning scenarios on multiple public datasets, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as the batch model with a substantially smaller number of parameters.
accepted-poster-papers
PROS:
1. good results; the authors made it work
2. paper is largely well written
CONS:
1. some found the writing to be unclear and sloppy in places
2. the algorithm is complicated -- a chain of sub-algorithms
A few small points:
- I initially found Algorithm 1 to be confusing because it wasn't clear whether it was intended to be invoked for each task (making the training depend on all the datasets). I finally convinced myself that this was not the intention and that the inner loop of the algorithm is what is actually executed incrementally.
train
[ "SJAGR15lz", "SkcwXXqxM", "HJnaoMagf", "ByFTDgp7G", "ryZHNvkmz", "BkOyLPJXG", "HJJ5UP1XG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper was clearly written and pleasant to read. I liked the use of sparsity- and group-sparsity-promoting regularizers to select connections and decide how to expand the network.\n\nA strength of the paper is that the proposed algorithm is interesting and intuitive, even if relatively complex, as it requires chaining a sequence of sub-algorithms. It was good to see the impact of each sub-algorithm studied separately (to some degree) in the experimental section. The results are overall strong.\n\nIt’s hard for me to judge the novelty of the approach though, as I’m not an expert on this topic.\n\nJust a few points below:\n- The experiments focus on a relevant continual learning problem, where each new task corresponds to learning a new class. In this setup, the method consistently outperforms EWC (e.g., Fig. 3), as well as the progressive network baseline.\nDid the authors also check the performance on the permuted MNIST benchmark, as studied by Kirkpatrick et al. and Zenke et al.? It would be important to see how the method fares in this setting, where the tasks are the same, but the inputs have to be remapped, and network expansion is less of an issue.\n\n- Fig. 4 would be clearer if the authors showed also the performance and how much the selected connection subsets would change if instead of using the last layer lasso + BFS, the full L1-penalized problem was solved, while keeping the rest of the pipeline intact.\n\n- Still regarding the proposed selective retraining, the special role played by the last hidden layer seems slightly arbitrary. It may well be that it has the highest task-specificity, though this is not trivial to me. This special role might become problematic when dealing with deeper networks.", "The topic is of great interest to the community, and the ideas explored by the authors are reasonable, but I found the conclusion less-than-clear. Mainly, I was not sure how to interpret the experimental findings, and did not have a clear picture of the various models being investigated (e.g. \"base DNN regularized with l2\"), or even of the criteria being examined. What is \"learning capacity\"? (If it's number of model parameters, the authors should just say, \"number of parameters\"). The relative performance of the different models examined, plotted in the top row of Figure 3, is quite different, and though the authors do devote a paragraph to interpreting the results, I found it slightly hard to follow, and was not sure what the bottom line was.\n\nWhat does the \"batch model\" refer to?\n\nre. \" 11.9%p − 51.8%p\"; remove \"p\"?\n\nReference for CIFAR-100? Explain abbreviation for both CIFAR-100 and AWA-Class?\n\nre. \"... but when the number of tasks is large, STL works better since it has larger learning capacity than MTL\": isn't the number of parameters matched? If so, why is the \"learning capacity\" different? What do the authors mean exactly by \"learning capacity\"?\n\nre. Figure 3, e.g. \"Average per-task performance of the models over number of task t\": this is a general point, but usually the expression \"<f(x)> vs. 
<x>\" is used rather than \"<f(x)> over <x>\" when describing a plot.\n\n\"DNN: dase (sic) DNN\": how is this trained?\n\n\n", "In this paper, the authors propose a method (Dynamically Expandable Network) that addresses issues of training efficiency, how to dynamically grow the network, and how to prevent catastrophic forgetting.\n\nThe paper is well written with a clear problem statement and description of the method for preventing each of the described issues. Interesting points include the use of an L1 regularization term to enforce sparsity in the weights, as well as the method for identifying which neurons have “drifted” too far and should be split. The use of timestamps is a clever addition as well.\n\nOne question would be how sparse training is done, and how this saves computation, especially with the breadth-first search described on page 5. A critique would be that the base networks (a two layer FF net and LeNet) are not very compelling.\n\nExperiments indicate that the method works well, with a clear improvement over progressive networks. Thus, though there isn’t one particular facet of the paper that leaps out, overall the method and results seem solid and worthy of publication.", "We summarize the updates made in the revision below:\n\nMain updates:\n- To address the comment from AnonReviewer 3 that the base networks are too small, we replaced the experimental results on CIFAR-100 dataset with results from a deeper network, that is a slight modification of AlexNet (8 layers: 5 Conv and 3 FC). Our algorithm achieved significant performance gain over the baseline models on this dataset as well.\n\n- Following the suggestion from AnonReviewer 2, we performed new experiments on the Permuted MNIST dataset with feedforward networks, and included the experimental results in the Appendix Section. The results show DEN achieves comparable performance to batch models (STL and MTL), while significantly outperforming both DNN-EWC and DNN-Progressive.\n\nPage 2: Updated the value of %p to reflect the updates in the CIFAR-100 experimental results. \nPage 7: 1) Corrected the typo: 3) DNN. Dase → 3) DNN. Base\n 2) Replaced the description of LeNet with the description of the deeper CNN used in the new experiments.\nPage 8: Updated the CIFAR-100 plot in Figure 3 with the results from the modified AlexNet.\nPages 7-8, 10: Updated the value of %p to reflect the updates in the CIFAR-100 results.\nPage 11: Included the Appendix section, which contains the results and discussions of the Permuted MNIST experiments.\n", "Q1. One question would be how sparse training is done, and how this saves computation, especially with the breadth-first search described on page 5. \n\nA. Sparse training is done at both initial network training (Eq. (2)) and selective retraining step (Eq. (3)), using L1 regularizer. First, the initial network training obtains a network with sparse connectivity between neurons at consecutive layers. \n\nThen, the selective retraining selects the neurons at the layer just before the output neurons of this sparse network, and then using the topmost layer neurons as starting vertices, it selects neurons at each layer that have connections to the selected upper-layer neurons (See the leftmost model illustration of Figure 2). \n\nThis results in obtaining a subnetwork of the original network that has much less number of parameters (Figure 4.(b)) that can be trained with significantly less training time (Figure 4.(a)). 
The selected subnetwork also obtains substantially higher accuracy since it leverages only the relevant parts of the network for the given task (Figure 4.(a)).\n\nQ2. A critique would be that the base networks (a two layer FF net and LeNet) are not very compelling.\n\nA. To show that our algorithm obtains performance improvements on any generic networks, we experimented with a larger network that consists of 8 layers, which is a slight modification of AlexNet. With this larger network, our algorithm achieved similar performance gain over the baseline models as in the original experiments. We included the new results in the revision.\n", "Q1. What does the \"batch model\" refer to?\nA. “Batch model” refers to models that are not trained in an incremental manner; in other words, a batch model is trained with all tasks at hand, such as DNN-MTL or DNN-STL. \n\nQ2. re. \" 11.9%p − 51.8%p\"; remove \"p\"?\nA. %p stands for percent point and is a more accurate way of denoting absolute performance improvements compared to %.\n\nQ3. Reference for CIFAR-100? Explain abbreviation for both CIFAR-100 and AWA-Class?\nA. Thank you for the suggestion. We updated the reference for the CIFAR-100 dataset and included the full dataset name for AWA in the revision. CIFAR is simply a dataset \n \n \nQ4. re. \"... but when the number of tasks is large, STL works better since it has larger learning capacity than MTL\": isn't the number of parameters matched? If so, why is the \"learning capacity\" different? What do the authors mean exactly by \"learning capacity\"?\nA. By “learning capacity”, we are referring to the number of parameters in a network. DNN-MTL learns only a single network for all T tasks whereas DNN-STL learns T networks for T tasks. For the experiments that generated the plots in the top row of Figure 3, we used the same network size for both DNN-STL and DNN-MTL, and therefore, DNN-STL used T times more parameters than DNN-MTL. For accuracy / network capacity experiments in the bottom row, we diversified the base network capacity for both baselines.\n \nQ5. \"DNN: dase (sic) DNN\": how is this trained?\nA. Thank you for pointing out the typo. We have corrected it in the revision.\n", "Q1. Did the authors also check the performance on the permuted MNIST benchmark, as studied by Kirkpatrick et al. and Zenke et al.? It would be important to see how the method fares in this setting, where the tasks are the same, but the inputs have to be remapped, and network expansion is less of an issue.\n\nA. Following your suggestion, we have also experimented on the permuted MNIST dataset using feedforward network with 2 hidden layers and included the results in the Appendix section (See Figure 6). As expected, DEN achieves performance comparable to batch models such as STL or MTL, significantly outperforming both DNN-EWC and DNN-Progressive while obtaining a network that has significantly less number of parameters. \n\nQ2. Fig. 4 would be clearer if the authors showed also the performance and how much the selected connection subsets would change if instead of using the last layer lasso + BFS, the full L1-penalized problem was solved, while keeping the rest of the pipeline intact.\n\nA. The suggested comparative study is already done in Fig. 4. DNN-L1 shows the results using the full L1-penalized regularizer instead of the last layer lasso + BFS. This result shows that selective retraining is indeed useful in reducing time complexity of training and perform selective knowledge transfer to obtain better accuracy. 
\n\n For better understanding of the BFS process, we updated the figure that illustrates the selective retraining process to include arrows (Leftmost figure of Figure 2).\n \nQ3. Still regarding the proposed selective retraining, the special role played by the last hidden layer seems slightly arbitrary. It may well be that it has the highest task-specificity, though this is not trivial to me. This special role might become problematic when dealing with deeper networks.\n\nA. The last hidden layer is not the only layer that is learned to be task-specific, as the BFS process selects units that are useful for the given task at all layers of the network and retrains them. \n\n To show that selective retraining does not become problematic with deeper networks, we performed additional experiments on the CIFAR-100 dataset with a 8-layer network which is a slight modification of AlexNet. The results show that DEN obtains similar performance gain over baselines even with this deeper network. \n" ]
[ 7, 6, 8, -1, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1, -1 ]
[ "iclr_2018_Sk7KsfW0-", "iclr_2018_Sk7KsfW0-", "iclr_2018_Sk7KsfW0-", "iclr_2018_Sk7KsfW0-", "HJnaoMagf", "SkcwXXqxM", "SJAGR15lz" ]
iclr_2018_H1VjBebR-
The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings
We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between the two, which preserves semantics. Due to the lack of paired samples and without any definition of the semantic information, the problem might seem ill-posed. Specifically, in typical cases, it seems possible to build infinitely many alternative mappings from every target mapping. This apparent ambiguity stands in sharp contrast to the recent empirical success in solving this problem. We identify the abstract notion of aligning two domains in a semantic way with concrete terms of minimal relative complexity. A theoretical framework for measuring the complexity of compositions of functions is developed in order to show that it is reasonable to expect the minimal complexity mapping to be unique. The measured complexity used is directly related to the depth of the neural networks being learned and a semantically aligned mapping could then be captured simply by learning using architectures that are not much bigger than the minimal architecture. Various predictions are made based on the hypothesis that semantic alignment can be captured by the minimal mapping. These are verified extensively. In addition, a new mapping algorithm is proposed and shown to lead to better mapping results.
accepted-poster-papers
The reviewers were generally positive about this paper with a few caveats:
PROS:
1. Important and challenging topic to analyze and any progress on unsupervised learning is interesting.
2. the paper is clear, although more formalization would help sometimes
3. The paper presents an analysis for unsupervised learning of mapping between 2 domains that is totally new as far as I know.
4. A large set of experiments
CONS:
1. Some concerns about whether the claims are sufficiently justified in the experiments
2. The paper is very long and quite dense
train
[ "Skz4Z5KlG", "rkXOO8B4f", "SkZzR3E4M", "rJn7jf9xf", "S1wHlhaZM", "B1oTaatGz", "SJhq6pFff", "S1yAtfsbz", "HkHa_foZG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper addresses the problem of learning mappings between different domains without any supervision. It belongs to the recent family of papers based on GANs.\nThe paper states three conjectures (predictions in the paper):\n1. GAN are sufficient to learn « semantic mappings » in an unsupervised way, if the considered networks are small enough\n2. Controlling the complexity of the network, i.e. the number of the layers, is crucial to come up with what is called « semantic » mappings when learning in an unsupervised way. \nMore precisely there is tradeoff to achieve between the complexity of the model and its simplicity. A rich model is required in order to minimize the discrepancy between the distributions of the domains, while a not too complex model is necessary to avoid mappings that are not « meaningful ».\n To this aim, the authors introduce a new notion of function complexity which can be seen as a proxy of Kolmogorov complexity. The introduced notion is very simple and intuitive and is defined as the depth of a network which is necessary to implement the considered function. \nBased on this definition, and assuming identifiability (i.e. uniqueness up to invariants), and for networks with Leaky ReLU activations, the authors prove that if the number of mappings which preserve a degree of discrepancy (density preserving in the text) is small, then the set of « minimal » mappings of complexity C that achieve the same degree of discrepancy is also small. \nThis result is related to the third conjecture of the paper that is :\n3. the number of the number of mappings which preserve a degree of discrepancy is small.\n\nThe authors also prove a byproduct result stating that identifiability holds for Leaky ReLU networks with one hidden layer.\n\nThe paper comes with a series of experiments to empirically « demonstrate » the conjectures. \n\nThe paper is well written. The different ideas are clearly stated and discussed, and hence open interesting questions and debates.\n\nSome of these questions that need to be addressed IMHO:\n\n- A critical general question: if the addressed problem is the alignment between e.g. images and not image generation, why not formalizing the problem as a similarity search one (using e.g. EMD or any other transport metric). The alignment task hence reduces to computing a ranking from this similarity. I have the impression that we use a jackhammer to break a small brick here (no offence). But maybe that I’m missing something here.\n- Several works consider the size and the depth of the network as hyper-parameters to optimize, and this is not new. What is the actual contribution of the paper w.r.t. to this body of work?\n- It is considered that the GAN are trained without any problem, and therefore work in an optimal regime. But the training of the GAN is in itself a problem. How does this affect the paper statements and results?\n- Are the results still valid for another measure of discrepancy based for instance on another measure, e.g. Wasserstein?\n\n\nSome minor remarks :\n- p3: the following sentence is not clear «  Our hypothesis is that the lowest complexity small discrepancy mapping approximates the alignment of the target semantic function. »\n- p6: $C^{\\epsilon_0}_{A,B}$ is used (after Def. 2) before being defined. \n- p7: build->built\n\nSection II :\nA diagram explaining the different mappings (h_A, h_B, h_AB, etc.) 
and their spaces (D_A, D_B, D_Z) would greatly help the understanding.\n\nPapers 's pros :\n- clarity\n- technical results\n\ncons:\n- doubts about the interest and originality\n\n\nThe authors provided detailed and convincing answers to my questions. I thank them for that. My scores were changed accrodingly.\n", "Thank you for your kind reply.\n\nRegarding Prediction 1: Predictions 1-3 are made independently of Alg. 1. Therefore, no reconstruction-type loss term is employed in the experiments presented in Sec. 5.1, which validate these predictions. The experiments testing the performance of Alg. 1, which do employ such a term, are the focus of Sec. 5.2. We will make sure this is clearer in the next version.\n\nThe suggested reference will be added. While our work supports the utility of the optimization function of Alg. 1, this reference is a first step in understanding the convergence of this and many other methods that employ distillation.\n\nFollowing the reviewer’s comment, we conducted an experiment testing whether employing the perceptual loss [1], instead of the L1 loss of CycleGAN, would lead to maintaining the alignment in deeper networks. This was tested for the task of mapping handbags to shoes, using the imagenet trained VGG-16 network. \n\nWhen training a network of depth 10 with this loss, we observe that the solutions are not aligned. We verified that with depth 8 (remember that depths increase by jumps of 2 due to the encoder/decoder structure) an aligned solution is found, same as with the L1 loss. This means that the perceptual loss with the pretrained network cannot eliminate, in the experiment conducted, the inherent ambiguity of deeper networks. With domains that are more closely related and with a more relevant pretrained network, training deeper network this way would probably succeed, as is done with the DTN method of [2].\n\nEdit (12 Jan): Sample results obtained with the perceptual loss can be seen at https://imgur.com/hDSusn4 for the task of mapping handbags to shoes. We also ran the perceptual loss experiment for the mapping of celebA males to females and share the results anonymously at https://imgur.com/a/pGF2V\nIn both cases, the perceptual loss, which employs a pretrained network, results in low discrepancy, i.e., the generated images are in the target class. However, it does not solve the alignment problem for non minimal architectures.\n\n[1] Johnson, Justin, Alexandre Alahi, and Li Fei-Fei. \"Perceptual losses for real-time style transfer and super-resolution.\" European Conference on Computer Vision, 2016.\n\n[2] Y. Taigman, A. Polyak, and L. Wolf. “Unsupervised cross-domain image generation.” ICLR 2017.\n", "Thank you for your reply and clarifications on the role of the 1)architecture (striding etc) 2) fully connected versus CNN 3) depth versus width.\n\nI think the ideas in the paper are nice and would benefit from some clarifications putting in it more in context of style and content losses:\n\n1- The claim stated in prediction 1 that :\"1. GAN are sufficient to learn « semantic mappings » in an unsupervised way \" as understood from the paper, is misleading. Since we still have a reconstruction cost, but where the matching is in the feature space that distills the content. 
Better guarantees for matching in a deep feature space for reconstruction where analyzed in this paper https://arxiv.org/pdf/1705.07576.pdf \n\nI encourage the authors to rephrase those claims and to mention that we still have a style cost function (a la GAN), and content cost function (matching in a feature space rather then in pixels space )\n\n2- on perceptual loss: The matching in the feature space of the low complexity network can be seen as a perceptual loss. \n- Another option for this matching can be to use the feature map of the discriminator to do this content matching at various depths (this was done in some recent papers)\n- If one uses VGG to do the content matching this would be a \"pretrained perceptual loss\" for matching content.\nComparing this to the approach of the paper would be interesting. \n\n", "Quality:\nThe paper appears to be correct\n\nClarity:\nthe paper is clear, although more formalization would help sometimes\n\nOriginality\nThe paper presents an analysis for unsupervised learning of mapping between 2 domains that is totally new as far as I know.\n\nSignificance\nThe points of view defended in this paper can be a basis for founding a general theory for unsupervised learning of mappings between domains.\n\nPros/cons\nPros\n-Adresses an important problem in representation learning\n-The paper proposes interesting assumptions and results for measuring the complexity of semantic mappings\n-A new cross domain mapping is proposed\n-Large set of experiments\nCons\n-Some parts deserve more formalization/justification\n-Too many materials for a conference paper\n-The cost of the algorithm seems high \n\nSummary:\nThis paper studies the problem of unsupervised learning of semantic mappings. It proposes a notion of low complexity networks in this context used for identifying minimal complexity mappings which is assumed to be central for recovering the best cross domain mapping. A theoretical result shows that the number of low-discrepancy (between cross-domains) mappings of low complexity is rather small.\nA large set of experiments are provided to support the claims of the paper.\n\n\nComments:\n\n-The work is interesting, for an important problemin representation learning, while in machine learning in general with the unsupervised aspect.\n\n-In a sense, I find that the approach suggested by algorithm 1 has some connections with structural risk minimization: by increasing k1 and k2 - when looking for the mapping - you increase the complexity of the model searched while trying to optimize the risk which is measured by the discrepancies and loss.\nThe approach seems costly anyway and I wonder if the authors could think of a smoother version of the algorithm to make it more efficient.\n\n-For counting the minimal complexity mappings, I wonder if one can make a connection with Algorithm robustness of Xu&Mannor(COLT,2012) where instead of comparing losses, you work with discrepancies.\n\nTypo:\nSection 5.1 is build of -> is built of\n", "This paper is on ab important topic : unsupervised learning on unaligned data. \n\nThe paper shows that is possible to learn the between domains mapping using GAN only without a reconstruction (cyclic) loss. The paper postulates that learning should happen on shallower networks first, then on a deeper network that uses the GAN cost function and regularizing discrepancy between the deeper and the small network. I did not get the time to go through the proofs, but they handle the fully connected case as far as I understand. 
Please find my comments below.\n\nOverall it is an interesting but long paper; the claims are a bit strong for CNNs and need further theoretical and experimental verification. The number of layers as a complexity measure is not appropriate, as we need to take into account many parameters: the pooling or the striding for the resolution, the presence or the absence of residual connections (for content preservation), and the number of feature maps. More experimentation is needed. \n\n\n\nPros:\n\nImportant and challenging topic to analyze, and any progress on unsupervised learning is interesting.\n\nCons:\n\nI have some questions on the shallow/deep distinction in the context of CNNs, and on to what extent the cyclic cost is not needed, or is just distilled from the shallow training: \n\n- Arguably the shallow to deep distillation can be understood as a reconstruction cost, since the shallow network will keep a lot of the spatial information. If the deep network matches the shallow one, this can be understood as a form of “distilled content” loss, and the discriminator of the deep one will take care of the texture/style content? Is this intuition correct? \n\n- The original cyclic reconstruction constraint is usually in the pixel space using the L1 norm; the regularizer introduced matches in a feature space, which is known to produce better results as a “perceptual loss”. Can the authors comment on this? Is this what is really happening here, moving from a cyclic constraint on pixels to a cyclic constraint in a feature space (shallow network)?\n\n- *Spatial resolution*: 1) The analysis seems to be done with respect to a DNN, not a CNN. Did you study the effect of the architectures in terms of striding and pooling and how it affects the results? I think just counting the number of layers as a complexity measure is not reasonable when we deal with images, with respect to what preserves content and what matches texture or style. \n\n2) Have you tried ResNet generators and discriminators at various depths, with padding so that the spatial resolution is preserved?\n\n- Depth versus width: another measure that is missing is the number of feature maps, i.e., how wide the network is; how does this interplay with the depth?\n\n3) Regularizing deeper networks: in the experiments varying the depth, did you see if the results can be stabilized using dropout with deep networks and small feature maps?\n\n4) Between training g and h: how do you initialize h? Fully at random?\n\n5) It seems the paper is following the implementation by Kim et al. What happens if the discriminator acts on pixels, as in CycleGAN, i.e., a PixelGAN rather than only giving a global score for the whole image? ", "Thank you for your supportive review and for the constructive comments, highlighting the significance of the treatment of unsupervised learning. \n\n[[Overall it is an interesting but long paper, the claims are a bit strong for CNN and need further theoretical and experimental verification.]] The paper is indeed quite long; much of the length can be attributed to the need to hold a clear discussion and to the extensive set of experiments done in order to demonstrate the validity of our hypothesis and the consequences it leads to. The experiments suggested in the review seem to be driven mostly by curiosity regarding alternatives and interest in the boundaries of the claims, and do not seem to point to a major gap in the original set of experiments. We ran most if not all of the requested experiments following the review, see below. 
In all cases, the results support our findings.\n\nOur original experiments employ the DiscoGAN CNN architecture as is. As noted, the theoretical analysis deals with the case of fully connected networks. Fully connected networks are used as an accessible model on which we prove our theorems. This is similar to other contributions with a theoretical component, in which the analysis is done on simplified models.\n\nFor example, in [1], the authors write: “Since a convergence analysis for deep learning is beyond our reach even in the noise free setting, we focus on analyzing properties of our algorithm for linearly separable data, which is corrupted by random label noise, and while using the perceptron as a base algorithm”.\n\nIn [2], the authors prove several theorems regarding the expressivity of fully connected neural networks. The experiments validate their theory on convolutional neural networks. \n\n[1] Eran Malach, Shai Shalev-Shwartz. Decoupling \"when to update\" from \"how to update\". NIPS, 2017.\n[2] Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein. On the Expressive Power of Deep Neural Networks. ICML 2017. \n\nNote that in our case (and in [2]), the theoretical model resembles the employed CNNs, since convolutional layers can be written as fully connected layers with restrictions on the weights that arise from locality and weight sharing. Therefore, we can put convolutions in the linear mappings in Eq. 5. In a network with k different convolution types (each specified by stride, kernel size, and number of channels), the linear matrices W would be of one of k patterns. The theory would then hold without modifications, except that, strictly speaking, the encoder-decoder structure does not guarantee invertibility. However, as discussed in Sec. 2, invertibility occurs in practice, e.g., autoencoders succeed in replicating the input. (A small numeric sketch of the convolution-as-restricted-linear-map point is given below.)\n\n[[- Arguably the shallow to deep distillation can be understood as a reconstruction cost , since the shallow network will keep a lot of the spatial information. If the deep network match the shallow one , this can be understood as a form of “distilled content “ loss? and the disc of the deep one will take care of the texture , style content? is this intuition correct? ]] This intuition is correct, except that the distillation loss is much more restrictive than the cycle (reconstruction) loss. The latter, as we show, is not enough to specify the correct alignment. Indeed, the discrepancy loss of the deeper network makes sure the fine details are correct.\n\n[[- original cyclic reconstruction constraint is in the pixel space using l1 norm usually, the regularizer introduced matches in a feature space , which is known to produce better results as a “perceptual loss”, can the author comment on this?]] The loss between the shallow and the deep network (R_DA[h,g] in Alg. 1) is the L1 loss in our experiments. We do not use the perceptual loss. Since the networks h and g have different architectures, it is not immediately clear how to use this loss.\n", "[[1) did you study the effect of the architectures in terms of striding and pooling how it affects the results? I think just counting number of layers as a complexity is not reasonable when we deal with images, with respect to what preserves contents and what matches texture or style. ]] Our experiments show that the number of layers is directly linked to the success in obtaining the correct alignment. 
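(Here is the small numeric sketch referred to above; it is an illustration only and not code from the paper. The claim that a convolution is a fully connected map with restricted weights can be checked in a few lines of numpy:\n\nimport numpy as np\nx = np.random.randn(6)\nk = np.array([1.0, -2.0, 0.5])   # 1D kernel of size 3\nW = np.zeros((4, 6))             # one row per output position (stride 1, no padding)\nfor i in range(4):\n    W[i, i:i+3] = k              # locality and weight sharing restrict the dense matrix\nassert np.allclose(W @ x, np.convolve(x, k[::-1], mode='valid'))   # kernel flipped: conv layers compute cross-correlation\n\nThe same construction extends to strided, multi-channel 2D convolutions, giving one weight pattern per convolution type, as described above.)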
There is no pooling in the DiscoGAN architecture; the stride is fixed to 2 and the kernel size is 4. We therefore did not manipulate these factors. Following the review, we tried a stride of 3. Prediction 2 still holds, but the image quality is slightly worse.\n\n[[2) - Have you tried resnets generators and discriminators at various depths , with padding so that the spatial resolution is preserved?]] Following the review, we conducted experiments using CycleGAN's architecture, which uses a ResNet generator. We varied the number of layers used in the encoder/decoder part of the architecture, as for DiscoGAN's experiments. Running an experiment on the Aerial images to Maps dataset, we found that 8 layers produce an aligned solution. Using 10 layers produces an unaligned map image with low discrepancy. For fewer than 8 layers, the discrepancy is high and the images are not very detailed. This is exactly in line with our hypothesis.\n\nAdding residual connections to the discriminator in the DiscoGAN experiments seems to leave the results reported for the original DiscoGAN network mostly unchanged. \n\n[[- Depth versus width: Another measure that is missing is also the number of feature maps how wide is the network , how does this interplays with the depth?]] In this paper we focus on networks that have approximately the same number of neurons in each layer. Therefore, in this case, it is more reasonable to treat the depth as a form of complexity and not the width of each layer. In a way, depth multiplies the complexity, while width adds to it [1,2]. Therefore, it is a much better determinant of complexity. \n\nConsider this experiment that we conducted following the review: taking a network that is one layer shallower than what is needed in order to achieve low discrepancy, we double the number of channels (and therefore neurons) in each layer of the architecture and train. The modified network, despite the added complexity, does not achieve a low discrepancy. \n\n[1] Hrushikesh N. Mhaskar and Tomaso Poggio. Deep vs. Shallow Networks: an Approximation Theory Perspective. Analysis and Applications 2016.\n[2] Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Liwei Wang. The Expressive Power of Neural Networks: A View from the Width. NIPS 2017.\n\n[[3) Regularizing deeper networks: in the experiments of varying the length did you see if the results can be stabilized using dropout with deep networks and small feature maps?]] Following the review, we did the following experiments: we took a network architecture that is too deep by one layer and does not deliver the correct alignment (i.e., it returns another low discrepancy solution) and added to the architecture, at each layer, dropout varying from 10% to 95%. For none of these rates was the correct alignment recovered.\n\n[[4) between training g and h ? how do you initialize h? fully at random ?]] \nh is initialized fully at random. \n\n[[5) seems the paper is following implementation by Kim et al. what happens if the discriminator is like in cycle GAN acting on pixels. Pixel GAN rather than only giving a global score for the whole image?]] Following the review, we ran CycleGAN with a varying number of layers (see point 2 above). The results are in full agreement with our hypothesis. In another experiment, on celebA male to female, we changed the discriminator of Kim et al. to a Pixel GAN. 
While the results are of better quality (but also less geometric and more texture like), the alignment is achieved and lost at exactly at the same complexities as with a regular discriminator.", "Thank you for your supportive review, highlighting the technical results, the clarity of the manuscript and the work’s potential in initiating interesting research questions and discussions.\n\n[[if the addressed problem is the alignment between e.g. images and not image generation]] The alignment problem is between images of A and images of B. GAN is effective in ensuring that the generated images are in the target domain. The alignment problem is how to make sure that the input image in A is mapped (via image generation) to an analog image in B, where this analogy is not defined by training pairs or in any other explicit way (see Sec. 2). Please let us know if this does not answer the question.\n\n[[Several works consider the size and the depth of the network as hyper-parameters to optimize, and this is not new. What is the actual contribution of the paper w.r.t. to this body of work?]] \nThe main difference is that when optimizing a supervised loss, as is done in this body of work, the train and validation classification errors, the capacity’s effect on the network’s accuracy, and the network’s generalization capability as a function of the size of the training set are well understood and easy to estimate. In unsupervised learning, changing capacity to reduce the training GAN loss will lead, as we show, to the loss of alignment, and there are no clear guidelines for determining generalization as a function of capacity.\n\nFollowing the reviewer’s remark, we have added the following text to the paper:\n“Since the method depicted in Alg. 1 optimizes, among other things, the architecture of the network, our method is somewhat related to work that learns the network's structure during training, e.g., (Saxena & Verbeek, 2016; Wen et al., 2016; Liu et al., 2015; Feng & Darrell, 2015; Lebedev & Lempitsky, 2016). This body of work, which deals exclusively with supervised learning, optimizes the networks loss by modifying both the parameters and the hyperparameters. For GAN based loss, this would not work, since with more capacity one can reduce the discrepancy but quickly lose the alignment.”\n\n[[- It is considered that the GAN are trained without any problem, and therefore work in an optimal regime. But the training of the GAN is in itself a problem. How does this affect the paper statements and results?]] In a subsequent effort to automatically identify a stopping criteria for training cross-domain mappings, we found out that these methods converge and achieve the best results at the last epochs. Therefore, the general issue of GAN instability is not expected to influence our results.\n\n[[- Are the results still valid for another measure of discrepancy based for instance on another measure, e.g. Wasserstein?]] We added additional results in the appendix, where the GAN used is a Wasserstein GAN. As can be seen, our findings seem to hold for WGAN as well. \n\nThank you for noting a few minor issues with the text. These are corrected in the new version, which also includes the diagram you requested (new Fig. 1).", "Thank you for your supportive review and for the constructive comments, highlighting the significance of the paper. As limitations, the length of the paper and the occasional lack of formalism are mentioned. 
These issues are interleaved and we have made an effort to balance them by moving the most accurate formal statements to the supplementary appendices. Another aspect of the length is the extensive set of experiments done (as noted in the review) in order to demonstrate the validity of our hypothesis and the consequences it leads to.\n\nRegarding the computational cost of the method, we do not necessarily agree that it is costly. The training of the networks G1 and G2 is done in a sequential manner where the first step of the method identifies the complexity of G1 that provides alignment. This is done automatically in our method. We believe that a similar effort is being conducted by others when applying their methods, only there, the selection is being done manually. Therefore, the cost of this step is similar to other methods. \n\nThe second step of training G2 has a similar complexity. Therefore, our method’s computational cost is just twice the computational cost of what is already being practiced. \n\nEven if the assumption behind our analysis is debatable, the computational cost is a small constant times training one network. In addition, multiple architectures of G1 or of G2 can be trained in parallel. \n\nAnonReviewer1 wonders if a smoother method is conceivable. A smoother method can be based, for example, on skip connections, in which depth varies dynamically, depending on whether a skip connection is used. Then, one can use two networks G1 and G2; G1 is restricted to employ all skips, while G2 is optimized to have low discrepancy, small risk w.r.t G1, and not to use skip connections. This is worth pursuing as a future work. \n\nA connection to Structural Risk Minimization is mentioned in Sec. 6, first paragraph of the original submission. Following the review, a stronger linking to Alg. 1 is added to the discussion as follows:\n“A major emphasis in SRM is the dependence on the number of samples: the algorithm selects the hypothesis from one of the nested hypothesis classes depending on the amount of training data. In our case, one can expect higher values of k_2 to be beneficial as the number of training samples grows. However, the exact characterization of this relation is left for future work.“\n\nThe work of Xu and Mannor proposes a measure of complexity that is related to, but different from, algorithmic stability. We cannot find direct links to our method, which is based on a straightforward notion of complexity. One can combine the two methods together and test, for example, the robustness of the discrepancies. However, we are not yet sure what would be the advantage of doing so. \n\nThank you for noting the typo in Sec. 5.1. It is now fixed." ]
[ 7, -1, -1, 7, 6, -1, -1, -1, -1 ]
[ 4, -1, -1, 2, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1VjBebR-", "SkZzR3E4M", "SJhq6pFff", "iclr_2018_H1VjBebR-", "iclr_2018_H1VjBebR-", "S1wHlhaZM", "S1wHlhaZM", "Skz4Z5KlG", "rJn7jf9xf" ]
iclr_2018_BJuWrGW0Z
Dynamic Neural Program Embeddings for Program Repair
Neural program embeddings have shown much promise recently for a variety of program analysis tasks, including program synthesis, program repair, code completion, and fault localization. However, most existing program embeddings are based on syntactic features of programs, such as token sequences or abstract syntax trees. Unlike images and text, a program has well-defined semantics that can be difficult to capture by only considering its syntax (i.e. syntactically similar programs can exhibit vastly different run-time behavior), which makes syntax-based program embeddings fundamentally limited. We propose a novel semantic program embedding that is learned from program execution traces. Our key insight is that program states expressed as sequential tuples of live variable values not only capture program semantics more precisely, but also offer a more natural fit for Recurrent Neural Networks to model. We evaluate different syntactic and semantic program embeddings on the task of classifying the types of errors that students make in their submissions to an introductory programming class and on the CodeHunt education platform. Our evaluation results show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees. In addition, we augment a search-based program repair system with predictions made from our semantic embedding and demonstrate significantly improved search efficiency.
accepted-poster-papers
PROS: 1. Interesting and clearly useful idea 2. The paper is clearly written. 3. This work doesn't seem that original from an algorithmic point of view, since Reed & De Freitas (2015) and Cai et al. (2017), among others, have considered using execution traces. However, the application to program repair is novel (as far as I know). 4. This work can be very useful for an educational platform, though a limitation is the need for adding instrumentation print statements by hand. CONS: 1. The paper has some clarity issues, which the authors have promised to fix. ---
train
[ "rkdmp2J-f", "r1a9wIjEM", "H1Pyl4sxM", "H1JAev9gz", "S1e4KmgXM", "HyPdjuszz", "BJ5rj_jGz", "ry5liuiGf", "ryPug02Zf", "BJk11A3bz", "H1DM0ah-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper considers the task of learning program embeddings with neural networks with the ultimate goal of bug detection program repair in the context of students learning to program. Three NN architectures are explored, which leverage program semantics rather than pure syntax. The approach is validated using programming assignments from an online course, and compared against syntax based approaches as a baseline.\n\nThe problem considered by the paper is interesting, though it's not clear from the paper that the approach is a substantial improvement over previous work. This is in part due to the fact that the paper is relatively short, and would benefit from more detail. I noticed the following issues:\n\n1) The learning task is based on error patterns, but it's not clear to me what exactly that means from a software development standpoint.\n2) Terms used in the paper are not defined/explained. For example, I assume GRU is gated recurrent unit, but this isn't stated.\n3) Treatment of related work is lacking. For example, the Cai et al. paper from ICLR 2017 is not considered\n4) If I understand dependency reinforcement embedding correctly, a RNN is trained for every trace. If so, is this scalable?\n\nI believe the work is very promising, but this manuscript should be improved prior to publication.", "Thank you for the clarifications regarding the difference between semantic and syntactic program traces and the extra experiment. I'm bumping up my score to a 7.", "Summary of paper: The paper proposes an RNN-based neural network architecture for embedding programs, focusing on the semantics of the program rather than the syntax. The application is to predict errors made by students on programming tasks. This is achieved by creating training data based on program traces obtained by instrumenting the program by adding print statements. The neural network is trained using this program traces with an objective for classifying the student error pattern (e.g. list indexing, branching conditions, looping bounds).\n\n---\n\nQuality: The experiments compare the three proposed neural network architectures with two syntax-based architectures. It would be good to see a comparison with some techniques from Reed & De Freitas (2015) as this work also focuses on semantics-based embeddings.\nClarity: The paper is clearly written.\nOriginality: This work doesn't seem that original from an algorithmic point of view since Reed & De Freitas (2015) and Cai et. al (2017) among others have considered using execution traces. However the application to program repair is novel (as far as I know).\nSignificance: This work can be very useful for an educational platform though a limitation is the need for adding instrumentation print statements by hand.\n\n---\n\nSome questions/comments:\n- Do we need to add the print statements for any new programs that the students submit? What if the structure of the submitted program doesn't match the structure of the intended solution and hence adding print statements cannot be automated?\n\n---\n\nReferences \n\nCai, J., Shin, R., & Song, D. (2017). Making Neural Programming Architectures Generalize via Recursion. In International Conference on Learning Representations (ICLR).", "The authors present 3 architectures for learning representations of programs from execution traces. In the variable trace embedding, the input to the model is given by a sequence of variable values. The state trace embedding combines embeddings for variable traces using a second recurrent encoder. 
The dependency enforcement embedding performs element-wise multiplication of embeddings for parent variables to compute the input of the GRU to compute the new hidden state of a variable. The authors evaluate their architectures on the task of predicting error patterns for programming assignments from Microsoft DEV204.1X (an introduction to C# offered on edx) and problems on the Microsoft CodeHunt platform. They additionally use their embeddings to decrease the search time for the Sarfgen program repair system.\n\nThis is a fairly strong paper. The proposed models make sense and the writing is for the most part clear, though there are a few places where ambiguity arises:\n\n- The variable \"Evidence\" in equation (4) is never defined. \n\n- The authors refer to \"predicting the error patterns\", but again don't define what an error pattern is. The appendix seems to suggest that the authors are simply performing multilabel classification based on a predefined set of classes of errors, is this correct? \n\n- It is not immediately clear from Figures 3 and 4 that the architectures employed are in fact recurrent.\n\n- Figure 5 seems to suggest that dependencies are only enforced at points in a program where assignment is performed for a variable, is this correct?\n\nAssuming that the authors can address these clarity issues, I would in principle be happy for the paper to appear. ", "We thank the reviewers for their helpful comments and feedback, and suggestions for additional experiments. We have uploaded a new revision of the paper with the following revisions:\n\n1. We incorporated the requested clarification questions/definitions in the reviews including defining terms/variables, new figures to show the recurrent models for variable and state trace embeddings, defining error patterns and formulating the repair problem as one of classification, automated instrumentation of programs, data-dependency in traces, training details etc.\n\n2. We added more descriptive examples to showcase the difference between the \"semantic program traces\" (in terms of variable valuations) considered in this work compared to previous works (Reed & De Freitas (2015) and Cai et al. (2017)) that consider \"syntactic traces\".\n\n3. We added additional experimental results to compare syntactic program trace based embeddings (Reed & De Freitas (2015) and Cai et al. (2017)) in Section 5 (Table 3). Although syntactic traces result in better accuracy than Token and AST (~26% vs ~20%), they are still significantly worse than semantic trace embeddings introduced in our work.", "Dear reviewer:\n\nWe have uploaded a revision of our paper that incorporates (1) the requested clarifications in the reviews and (2) additional experimental results from comparing our embeddings with syntactic trace based program embeddings (Reed & De Freitas (2015) and Cai et. al (2017)). Please let us know if any further clarifications are needed. \n", "Dear reviewer:\n\nOur earlier reply mistakenly omitted our answer to the question in your review. Our apologies, and we include the answer below. The instrumentation for adding print statements to a program is fully automated, and requires no manual effort or any assumption on the program’s code structure. 
It traverses the program’s abstract syntax tree and inserts the appropriate print statement after each side-effecting program statement, i.e., a statement that changes the values of some program variables.\n\nCan you please inform us whether there are any additional clarifications needed beyond those in our response? We have also uploaded a revision of our paper that incorporates (1) the requested clarifications in the reviews and (2) additional experimental results from comparing our embeddings with syntactic trace based program embeddings (Reed & De Freitas (2015) and Cai et. al (2017)).", "Dear reviewer:\n\nCan you please inform us whether there are any additional clarifications needed beyond those in our response? We have also uploaded a revision of our paper that incorporates (1) the requested clarifications in the reviews and (2) additional experimental results from comparing our embeddings with syntactic trace based program embeddings (Reed & De Freitas (2015) and Cai et. al (2017)).", "We appreciate your point on the differences between our work and Reed & De Freitas (2015). We have given a detailed discussion regarding this point in our response to AnonReviewer2, which we include below for your convenience.\n\nThere are fundamental differences between the syntactic program traces explored in prior work (Reed & De Freitas (2015)) and the “semantic program traces” considered in our work. Consider the example in Figure 1. According to Reed & De Freitas (2015), the two sorting algorithms will have an identical representation with respect to statements that modify the variable A:\n\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\nA[j] = A[j + 1]\t\t\t\t\nA[j + 1] = tmp\nA[j] = A[j + 1]\t\t\t\t\nA[j + 1] = tmp\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\n\nOur representation, on the other hand, can capture their semantic differences in terms of program states by also only considering the variable A: \n\n Bubble Insertion\n[5,5,1,4,3]\t[5,5,1,4,3]\n[5,8,1,4,3]\t[5,8,1,4,3]\n[5,1,1,4,3] \t[5,1,1,4,3]\n[5,1,8,4,3] \t[5,1,8,4,3]\n[1,1,8,4,3] \t[5,1,4,4,3]\n[1,5,8,4,3] \t[5,1,4,8,3]\n[1,5,4,4,3]\t[5,1,4,3,3]\n[1,5,4,8,3] \t[5,1,4,3,8]\n[1,4,4,8,3] \t[1,1,4,3,8]\n[1,4,5,8,3] \t[1,5,4,3,8]\n[1,4,5,3,3] \t[1,4,4,3,8]\n[1,4,5,3,8] \t[1,4,5,3,8]\n[1,4,3,3,8] \t[1,4,3,3,8]\n[1,4,3,5,8] \t[1,4,3,5,8]\n[1,3,3,5,8] \t[1,3,3,5,8]\n[1,3,4,5,8] \t[1,3,4,5,8]\n\nThis example also illustrates concretely the point made in Section 1 that minor syntactic differences can lead to significant semantic differences. Therefore, the approach of Reed & De Freitas is insufficient to capture such semantic differences. As another example, consider the following two programs:\n\nstatic void Main(string[] args)\n{\n string str = String.Empty;\n int x = 0;\n x++;\n}\n\nstatic void Main(string[] args)\n{\n string s = \"\";\n int y = 0;\n y = y+1;\n}\n\nAccording to the representation proposed in Reed & De Freitas (2015), the first program is represented as [string str = String.Empty, int x = 0, x++], while the second represented as [string s = \"\", int y = 0, y = y+1]. Although the two programs share the same semantics, they are represented differently due to syntactic variations. 
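As a toy illustration of the same point in runnable form (a Python sketch for exposition only; the actual instrumentation described above targets C# programs and is automated over the AST):\n\ndef prog_a():\n    trace = []\n    x = 0\n    trace.append([x])   # record live variable values after each side-effecting statement\n    x += 1\n    trace.append([x])\n    return trace\n\ndef prog_b():\n    trace = []\n    y = 0\n    trace.append([y])\n    y = y + 1\n    trace.append([y])\n    return trace\n\nassert prog_a() == prog_b()   # the value sequences coincide: [[0], [1]]\n\nA token- or statement-level (syntactic) trace of the two snippets would differ, since the variable names and the increment statements differ, exactly as with the two C# programs above.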
In contrast, our work captures the same semantic trace for both programs, i.e., [ [“”, NA], [“”,0], [“”,1]].\n\nTo sum up, the embedding proposed in Reed & De Freitas (2015) is a syntactic representation, and cannot precisely capture a program’s semantics and abstract away its syntactic redundancies. Consequently, the encoder will not be able to learn the true feature dimensions in the embeddings. We also performed additional experiments to contrast the two trace-based approaches. We used the same configuration of encoder (cf. Section 5) to embed the syntactic traces on the same datasets for the same classification problem. The results are as follows:\n\nProblems Reed & De Freitas (2015) Token AST Dependency Model\nPrint Chessboard 26.3% 16.8% 16.2% 99.3%\nCount Parentheses 25.5% 19.3% 21.7%\t 98.8%\nGenerate Binary Digits 23.8% 21.2% 20.9%\t 99.2%\n\nAlthough syntactic traces result in better accuracy than Token and AST, they are still significantly worse than semantic embeddings introduced in our work. \n\nOur revision will include the representation proposed in Reed & De Freitas (2015) for the example programs in Figure 1. It will also include the experimental setup (in Section 5) and the new results (in a new column of Table 3).\n\nWe will also add a citation to Cai et al. (2017), which uses the exact same program representation as Reed & De Freitas (2015). The other contributions in Cai et al. (2017) are unrelated to our work. \n\nWe hope that our response helped address your concerns. Please let us know if you have any additional questions. Thank you. \n", "Thank you for the review. We clarify below the four specific points raised. \n\n1. By “error patterns”, we mean different types of errors that students made in their programming submissions. This work focuses on providing quality feedback to students. It may be extended in future work to help software developers, where error patterns can correspond to different classes of errors that developers may make. However, it is not the consideration for the current version of the paper.\n\n2. We will clarify all abbreviations and terms used in the paper. \n\nYes, GRU is Gated Recurrent Unit.\n\n3. The results of our latest experiments clearly indicate that this work substantially improves prior work. We briefly highlight the main reasons below. First, there are fundamental differences between the syntactic program traces explored in prior work (Reed & De Freitas (2015)) and the “semantic program traces” considered in our work. Consider the example in Figure 1. 
According to Reed & De Freitas (2015), the two sorting algorithms will have an identical representation with respect to statements that modify the variable A:\n\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\nA[j] = A[j + 1]\t\t\t\t\nA[j + 1] = tmp\nA[j] = A[j + 1]\t\t\t\t\nA[j + 1] = tmp\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\nA[j] = A[j + 1]\t\t\nA[j + 1] = tmp\t\t\n\nOur representation, on the other hand, can capture their semantic differences in terms of program states by also only considering the variable A: \n\n Bubble Insertion\n[5,5,1,4,3]\t[5,5,1,4,3]\n[5,8,1,4,3]\t[5,8,1,4,3]\n[5,1,1,4,3] \t[5,1,1,4,3]\n[5,1,8,4,3] \t[5,1,8,4,3]\n[1,1,8,4,3] \t[5,1,4,4,3]\n[1,5,8,4,3] \t[5,1,4,8,3]\n[1,5,4,4,3]\t[5,1,4,3,3]\n[1,5,4,8,3] \t[5,1,4,3,8]\n[1,4,4,8,3] \t[1,1,4,3,8]\n[1,4,5,8,3] \t[1,5,4,3,8]\n[1,4,5,3,3] \t[1,4,4,3,8]\n[1,4,5,3,8] \t[1,4,5,3,8]\n[1,4,3,3,8] \t[1,4,3,3,8]\n[1,4,3,5,8] \t[1,4,3,5,8]\n[1,3,3,5,8] \t[1,3,3,5,8]\n[1,3,4,5,8] \t[1,3,4,5,8]\n\nThis example also illustrates concretely the point made in Section 1 that minor syntactic differences can lead to significant semantic differences. Therefore, the approach of Reed & De Freitas is insufficient to capture such semantic differences. As another example, consider the following two programs:\n\nstatic void Main(string[] args)\n{\n string str = String.Empty;\n int x = 0;\n x++;\n}\n\nstatic void Main(string[] args)\n{\n string s = \"\";\n int y = 0;\n y = y+1;\n}\n\nAccording to the representation proposed in Reed & De Freitas (2015), the first program is represented as [string str = String.Empty, int x = 0, x++], while the second represented as [string s = \"\", int y = 0, y = y+1]. Although the two programs share the same semantics, they are represented differently due to syntactic variations. In contrast, our work captures the same semantic trace for both programs, i.e., [ [“”, NA], [“”,0], [“”,1]].\n\nTo sum up, the embedding proposed in Reed & De Freitas (2015) is a syntactic representation, and cannot precisely capture a program’s semantics and abstract away its syntactic redundancies. Consequently, the encoder will not be able to learn the true feature dimensions in the embeddings. We also performed additional experiments to contrast the two trace-based approaches. We used the same configuration of encoder (cf. Section 5) to embed the syntactic traces on the same datasets for the same classification problem. The results are as follows:\n\nProblems Reed & De Freitas (2015) Token AST Dependency Model\nPrint Chessboard 26.3% 16.8% 16.2% 99.3%\nCount Parentheses 25.5% 19.3% 21.7%\t 98.8%\nGenerate Binary Digits 23.8% 21.2% 20.9%\t 99.2%\n\nAlthough syntactic traces result in better accuracy than Token and AST, they are still significantly worse than semantic embeddings introduced in our work. \n\nOur revision will include the representation proposed in Reed & De Freitas (2015) for the example programs in Figure 1. It will also include the experimental setup (in Section 5) and the new results (in a new column of Table 3).\n\nWe will also add a citation to Cai et al. (2017), which uses the exact same program representation as Reed & De Freitas (2015). The other contributions in Cai et al. (2017) are unrelated to our work. \n\n4. The first paragraph of Section 4.3 addresses the scalability of the dependency architecture that you questioned. 
\n“ ...Processing each variable id with a single RNN among all programs in the dataset will not only cause memory issues, but more importantly the loss of precision…”\n\nWe hope that our response helped address your concerns. Please let us know if you have any additional questions. Thank you. \n", "Thank you for the helpful suggestions. Below, we answer the questions that you raised in the review.\n\nOur revision will clarify the definition of the “Evidence” variable, which, in short, denotes the result of multiplying weight on the program embedding vector and then adding the bias.\n\nYes, “predicting the error patterns” means classifying the kinds of errors that students made in their programs. \n\nThe encoders in Figures 3 and 4 are recurrent as they encode variable traces (each variable trace is a sequence of variable values) and states (a state is a set of variable values at a particular program location). The figures in our revision will make these clearer.\n\nDependencies happen primarily in assignment statements. API calls with side effects also introduce dependencies. For example, in the code snippet below, “sb” depends on “s”:\n\nStringBuilder sb = new StringBuilder();\nString s = “str”;\nsb.Append(s);\n" ]
[ 6, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJuWrGW0Z", "BJ5rj_jGz", "iclr_2018_BJuWrGW0Z", "iclr_2018_BJuWrGW0Z", "iclr_2018_BJuWrGW0Z", "H1JAev9gz", "H1Pyl4sxM", "rkdmp2J-f", "H1Pyl4sxM", "rkdmp2J-f", "H1JAev9gz" ]
iclr_2018_S1Euwz-Rb
Compositional Attention Networks for Machine Reasoning
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient, data-efficient, and requires an order of magnitude less time and/or data to achieve good results.
accepted-poster-papers
PROS: 1. Good results on the CLEVR datasets 2. Writing is clear 3. The MAC unit is novel and interesting. 4. Ablation experiments are helpful CONS: The authors overstate the degree to which they are doing "sound" and "transparent" reasoning. In particular, statements such as "Most neural networks are essentially very large correlation engines that will hone in on any statistical, potentially spurious pattern that allows them to model the observed data more accurately. In contrast, we seek to create a model structure that requires combining sound inference steps to solve a problem instance." are, I think, not supported. As far as I can tell, the authors do not show that the steps of these solutions are really doing inference in any sound way. I also found the interpretability section to be a bit unconvincing. The reviewers and I discussed this and there was some attempt to assess what the operations were actually doing, but it is not clear how the language and the image attention are linked. I wonder whether the learned control activations are abstract and re-used across problems the way that the accompanying functional solution's primitives are. Have you looked at how similar the controls are across problems which are identical except for a different choice of attributes? To me, one of the hallmarks of a truly "compositional" solution is one in which the pieces are re-used across problems, not just that there is some sequence of explicit control activations used to solve each individual problem.
train
[ "H1-Icx3Vf", "ByrYKk3VG", "BJoJaojNz", "S1AP0Njxz", "B1Ewp2LVG", "Hyf1B_p7f", "Bkb9w8r4f", "Sk0oVNYlM", "SyKUVctlM", "BJQeEzAWG", "ByjkRDEgz", "SytM0vNlf", "H1gOE_mxM" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "\nThank you very much for your review - we truly appreciate it! \nWe have uploaded a revision (by the rebuttal deadline, jan 5) that addresses all your comments:\n\n1. We have revised the description of the writing unit to make it more clear - we have experimented with several variants for this unit - the \"standard\" one (for which all the results are about), and 3 variants: a. with self-attention, b. with gating, and c. with both self-attention and gating. In the ablations study section we have included results for each of these for the whole dataset, 10% of the dataset and also showed training curves for each variant.\n\n2. We have trained the models without GloVE and added these results along with clarification to the experiments section.\n\n3. We have included ablation studies in order to justify the architecture design choices and elucidate their impact. We have also added visualizations of attention weights for several examples and discussed them.\n\nThanks a lot again for your review!\n- Paper858 Authors", "Thank you very much for your review and for improving the rating! We will fix the typos and indeed are also working on making the writing a bit more concise to shorten the overall paper length!", "Thanks for the statistical significance analysis, ablation studies, qualitative results and other clarifications, I have increased my rating from 6 to 7. Please fix minor things such as 2 (different) plots for Network Length in Figure 7, short explanation for why the attention is significant on the word \"is\" instead of an important word \"shape\" in qualitative examples 1&2 in figure 8, etc. Also, the current paper length is 14 pages (after the addition of about 4 pages in rebuttal) which is almost double the recommended length of 8 pages, so I would suggest reducing the paper length for future. ", "Summary: \nThe paper presents a new model called Compositional Attention Networks (CAN) for visual reasoning. The complete model consists of an input unit, a sequence of the proposed Memory, Attention and Composition (MAC) cell, and an output unit. Experiments on CLEVR dataset shows that the proposed model outperforms previous models.\n\nStrengths: \n— The idea of building a compositional model for visual reasoning and visual question answering makes a lot of sense, and, I think, is the correct direction to go forward in these fields.\n— The proposed model outperforms existing models pushing the state-of-the-art.\n— The proposed model is computationally cheaper and generalizes well with less training data as compared to existing models.\n— The proposed model has been described in detail in the paper.\n\nWeaknesses: \n— Given that the performance of state-on-art on CLEVR dataset is already very high ( <5% error) and the performance numbers of the proposed model are not very far from the previous models, it is very important to report the variance in accuracies along with the mean accuracies to determine if the performance of the proposed model is statistically significantly better than the previous models.\n— It is not clear which part of the proposed model leads to how much improvement in performance. Ablations studies are needed to justify the motivations for each of the components of the proposed model.\n— Analysis of qualitative results (including attention maps, gate values, etc.) is needed to justify if the model is actually doing what the authors think it should do. 
For example, the authors mention an example on page 6 at the end of Section 3.2.2, but do not justify if this is actually what the model is doing.\n— Why is it necessary to use both question and memory information to answer the question even when the question was already used to compute the memory information? I would think that including the question information helps in learning the language priors in the dataset. Have the authors looked at some qualitative examples where the model which only uses memory information gives an incorrect answer but adding the question information results in a correct answer?\n— Details such as using Glove word embeddings are important and can affect the performance of models significantly. Therefore, they should be clearly mentioned in the main paper while comparing with other models which do not use them.\n— The comparisons of number of epochs required for training and the training time need fixed batch sizes and CPU/GPU configurations. Is that true? These should be reported in this section.\n— The authors claim that their model is robust to linguistic variations and diverse vocabulary, by which I am guessing they are referring to experiments on CLEVR-Humans dataset. What is there in the architecture of the proposed model which provides this ability? If it is the Glove vectors, it should be clearly mentioned since any other model using Glove vectors should have this ability.\n— On page 6, second paragraph, the authors mention that there are cases which necessitate the model to ignore current memories. Can the authors show some qualitative examples for such cases?\n— In the intro, the authors claim that their proposed cell encourages transparency. But, the design of their cell doesn’t seem to do so, nor it is justified in the paper.\n\nOverall: The performance reported in the paper is impressive and outperforms previous state-of-the-art, but without proper statistical significance analysis of performance, ablation studies, analysis of various attention maps, memory gates, etc. and qualitative results, I am not sure if this work would be directly useful for the research community.", "Alright. Thank you very much for your detailed review - it was very helpful to us and we truly appreciate it! We are currently working on Cornell nlvr and also text-based datasets as well as more qualitative experiments and while I believe I shouldn't upload revisions anymore we will definitely explore these directions further!", "\nDear reviewers,\n\nThank you very much for your detailed suggestions and comments! \nWe have uploaded a revision that addresses the comments raised in the reviews. In particular:\n- We report the model performance with random initialization for the word vectors rather than GloVE embeddings. Using a uniform initialization, we were able to get equivalent results for CLEVR and 1-2% difference for CLEVR-humans in the new setting, compared to those we have achieved with GloVE.\n- We have performed statistical significance analysis, running each model multiple times (10) and updated the plots to show averages and confidence intervals. These experiments show that the results of our model are indeed statistically significant compared to the alternative models.\n- We have performed ablation studies that cover the control, reading, and writing units as well as additional aspects of our model. We have shown results for the standard CLEVR dataset and 10% subset of it along with training curves. 
The ablation study we have conducted quantifies the contribution of each of the model components to its overall performance and shows its relative significance. Based on the results, we can see the importance of using attention over both the question and the image in a coordinated manner through a dual structure of recurrent control and memory paths. \n- We have looked into attention maps of the model for the question and image and put a few examples of these in the revised version of the paper that demonstrate the model interpretability.\n- For the gating mechanism of the writing unit, we have performed additional experiments showing that untied gate values for each entry of the state vector perform better than having one shared potentially-interpretable gate for the whole state and so have changed the description of that subsection accordingly.\n\nAdditional changes in the paper:\n- We fixed typos that were in the original submission.\n- We have clarified a few missing points that were mentioned in the reviews. \n- This includes in particular clarification about the writing unit mods of operation, and the several variants - with self-attention, with a gate, and with both. Each of these variants is accompanied with training curves and final accuracies as part of the ablation section.\n\nIn response to specific comments by reviewers:\n- The ablations study shows the importance of using both the question and the final memory state. As explained in the revision, since the memory holds information only from the image, it may not contain all required information to answer the question, since potentially crucial aspects of the question are not represented in the image. \nFor instance: given an image with one object, one question can ask about its size and another question about its color, but in both cases the memory will attend to the same one object and thus will not contain enough information to respond to the question correctly.\n- As supported by both the ablations studies and the new experiments that show model's performance on CLEVR-humans without using GloVE, we claim that the model owes its ability to handle diverse language and learn from small amounts of data to the use of attention over question. In particular, the attention allows the model to ignore varied words in the CLEVR-humans dataset that the model hasn't been necessarily trained on but aren't crucial to understand the question, and rather can focus only on the key words that refer to the objects and their properties. \n- As demonstrated by the qualitative results of attention maps over the image and question and explained in the model section, the model is indeed transparent by having access to the attention maps of each computation step. And indeed, examples of such attention maps show interpretable rationales behind the model's predictions.\n\nThank you very much!\n- Paper858 Authors\n", "Thanks for the ablations! My score remains the same.", "This paper describes a new model architecture for machine reasoning. In contrast\nto previous approaches that explicitly predict a question-specific module\nnetwork layout, the current paper introduces a monolithic feedforward network\nwith iterated rounds of attention and memory. On a few variants of the CLEVR\ndataset, it outperforms both discrete modular approaches, existing iterated\nattention models, and the conditional-normalization-based FiLM model. 
\n\nSo many models are close to perfect accuracy on the standard CLEVR dataset that\nI'm not sure how interesting these results are. In this respect I think the\ncurrent paper's results on CLEVR-Humans and smaller fractions of synthetic CLEVR\nare much more exciting.\n\nOn the whole I think this is a strong paper. I have two main concerns. The\nlargest is that this paper offers very little in the way of analysis. The model\nis structurally quite similar to a stacked attention network or a particular\nfixed arrangement of attentive N2NMN modules, and it's not at all clear based on\nthe limited set of experimental results where the improvements are actually\ncoming from. It's also possible that many of the proposed changes are\ncomplementary to NMN- or CBN-type models, and it would be nice to know if this\nis the case.\n\nSecondarily, the paper asserts that \"our architecture can handle\ndatasets more diverse than CLEVR\", but runs no experiments to validate this. It\nseems like once all the pieces are in place it should be very easy to get\nnumbers on VQA or even a more interesting synthetic dataset like NLVR.\n\nBased on a sibling comment, it seems that there may also be some problems with\nthe comparison to FiLM, and I would like to see this addressed.\n\nOn the whole, the results are probably strong enough on their own to justify\nadmitting this paper. But I will become much more enthusiastic about if if the\nauthors can provide results on other datasets (even if they're not\nstate-of-the-art!) as well as evidence for the following:\n\n1. Does the control mechanism attend to reasonable parts of the sentence?\n\nHere it's probably enough to generate a bunch of examples showing sentence\nattentions evolving over time.\n\n2. Do these induce reasonable attentions over regions of the image?\n\nAgain, examples are fine.\n\n3. Do the self-attention and gating mechanisms recover the right structure?\n\nIn addition to examples, here I think there are some useful qualitative\nmeasures. It should be possible to extract reasonable discretized \"reasoning\nmaps\" by running MST or just thesholding on the \"edge weights\" induced by\nattention and gating. Having extracted these from a bunch of examples, you can\ncompare them to the structural properties of the ground-truth CLEVR network\nlayouts by plotting a comparison of sizes, branching factors, etc.\n\n4. More on the left side of the dataset size / accuracy curve. What happens if\n you only give the model 7000 examples? 700? 70?\n\nFussy typographical notes:\n\n- This paper makes use of a lot of multi-letter names in mathmode. These are\n currently written like $KB$, which looks bad, and should instead be\n $\\mathit{KB}$.\n\n- Variables with both superscripts and subscripts have the superscripts pushed\n off to the right; I think you're writing these like $b_5 ^d$ but they should\n just be $b_5^d$ (no space).\n\n- Number equations and then don't bother carrying subscripts like $W_3$, $W_4$\n around across different parts of the model---this isn't helpful.\n\n- The superscripts indicating the dimensions of parameter matrices and vectors\n are quite helpful, but don't seem to be explained anywhere in the text. I\n think the notation $W^{(d \\times d)}$ is more standard than $W^{d, d}$.\n\n- Put the cell diagrams right next to the body text that describes them (maybe even\n inline, rather than in figures). It's annoying to flip back and forth.", "This paper proposes a recurrent neural network for visual question answering. 
The recurrent neural network is equipped with a carefully designed recurrent unit called MAC (Memory, Attention and Control) cell, which encourages sequential reasoning by restraining interaction between inputs and its hidden states. The proposed model shows the state-of-the-art performance on CLEVR and CLEVR-Humans dataset, which are standard benchmarks for visual reasoning problem. Additional experiments with limited training data shows the data efficiency of the model, which supports its strong generalization ability.\n\nThe proposed model in this paper is designed with reasonable motivations and shows strong experimental results in terms of overall accuracy and the data efficiency. However, an issue in the writing, usage of external component and lack of experimental justification of the design choices hinder the clear understanding of the proposed model.\n\nAn issue in the writing\nOverall, the paper is well written and easy to understand, but Section 3.2.3 (The Write Unit) has contradictory statements about their implementation. Specifically, they proposed three different ways to update the memory (simple update, self attention and memory gate), but it is not clear which method is used in the end.\n\nUsage of external component\nThe proposed model uses pretrained word vectors called GloVE, which has boosted the performance on visual question answering. This experimental setting makes fair comparison with the previous works difficult as the pre-trained word vectors are not used for the previous works. To isolate the strength of the proposed reasoning module, I ask to provide experiments without pretrained word vectors.\n\nLack of experimental justification of the design choices\nThe proposed recurrent unit contains various design choices such as separation of three different units (control unit, read unit and memory unit), attention based input processing and different memory updates stem from different motivations. However, these design choices are not justified well because there is neither ablation study nor visualization of internal states. Any analysis or empirical study on these design choices is necessary to understand the characteristics of the model. Here, I suggest to provide few visualizations of attention weights and ablation study that could support indispensability of the design choices.\n", "\nDear reviewers, \n\nThank you very much for your detailed comments and insightful suggestions for further exploration. We completely agree that ablation studies, statistical significance measures and qualitative analysis such as visualizations are necessary to justify the performance of the model and elucidate its behavior. We are actively working on a revised version of the paper that will include these studies and address the other comments raised in the reviews, in particular regarding the use of GloVE, the model’s performance on smaller subsets (<10%) of CLEVR, and the necessity of predicting the answer using both the question and the memory.\n\nSeveral clarifications in response to questions from the reviews:\n\n1. For comparative experiments, all the other systems used the original publicly-available authors’ implementations. All the models were trained with an equal batch size of 64 (as in the original implementations) and on the same machine, using a single Titan X Maxwell GPU per model.\n\n2. 
As mentioned in section 3.2.4, our claims about the model’s robustness to linguistic variability and its ability to handle datasets more diverse than the standard CLEVR indeed refer to its performance on CLEVR-Humans. We believe that attention mechanisms used over the question are a key factor that allows that, as discussed in the supplementary material, section C.4, and supported by Lu et al. (2016) and Yang et al. (2016). We will address this matter further in the revised version of the paper.\n\nThank you!\n- Paper858 Authors", "3) Qualitative Experiments\n\nWhile developing our idea, we have performed a large number of experiments that test ablations, modifications and variations to our architecture, the results of which will be presented in a few days in the revised version of paper (once it becomes possible to submit revisions). These experiments indeed demonstrate the relative importance of each of the model’s different aspects to its overall performance. For instance, they show that while optional components of the Write Unit such as memory gating and self-attention (discussed in section 3.2.2) improve the final performance and accelerate training, the model still achieves strong state-of-the-art results without them. Conversely, they quantitatively show the importance of using attention to decompose the question into a series of control states, in contrast to having only spatial attention layers over the image, as is the case for example in stacked attention networks (Yang et al., 2016; Johnson et al., 2017). In addition, we have examined the performance of the model across different network lengths (i.e. number of MAC cells), hidden state dimensions, and with several types of nonlinearities, all are establishing the robustness of our model to implementation details and model variances.\n\nWe are actively exploring our model behavior in qualitative terms and, in the paper revision, we will show gate-values and attention-map visualizations over the image, question and previous memories, following each reasoning step and for different types of questions. Indeed, soft-attention lies at the heart of our model, conferring several advantages, as discussed in the paper: (1) Robustness against both linguistic and visual variations (corroborated also by Lu et al (2016) and Yang et al (2016)). (2) Capacity for easy-to-train multi-modal translation between attended question words to the corresponding attended regions in the image. (3) Compositionality of reasoning steps by standardizing the content of the dual control and memory states to be selective summaries of the question and image correspondingly, and finally, as you alluded to, (4) Interpretability - Ideally, the model may be able to show a clear step-by-step rationale supporting the predicted answer. \n\nFurthermore, we are working on an error analysis for both CLEVR and CLEVR-Humans datasets to have indeed a better understanding of the nature of mistakes our model makes. Based on the quantitative results in the paper, it is already noticeable that many of the errors are for the counting questions, which is reasonable given the larger output space they have compared to other question types. We are currently looking into potential ways for further improving models counting accuracies, and will add a discussion about that either in a revision of the paper or in future work. \n\n--\n\nThank you very much for the thorough and insightful response and suggestions for further exploration! 
We will upload a revised version with the aforementioned additions once the option becomes available! :)\n- Paper858 Authors", "Thank you very much for the kind words and for the detailed response! We truly appreciate it!\n\nAddressing the questions you have raised:\n\n1) GloVE Word Embeddings\n\nFor the CLEVR dataset, we have observed an improvement of 0.17% in the final validation accuracy when using GloVE compared to word vectors initialized randomly with standard normal distribution, and 0.24% improvement compared to uniform-distribution initialization with range [-1,1]. \n\nNotably, in the early stages of the training process, the models with learned-from-scratch word embeddings actually outperformed the model with pretrained GloVE word embeddings. Only by the end of the training this trend is reversed to a modest advantage for the GloVE-based model. These results, which we will be happy to add to the paper, suggest that randomly-initialized word vectors are in fact easier for the model to distinguish between initially, whereas the small advantage of some additional semantic notions embodied in the GloVE embeddings become noticeable only by the end of training, useful for a low fraction of the questions. In any case, we will be glad to stress the fact that we have used GloVE in the experiments section of the paper. \n\nFor CLEVR-Humans, we so far have indeed used the pre-trained vectors and, as mentioned in the paper, haven’t trained them any further to prevent a drift in their semantic meaning. I agree that it will be both interesting and fair to check the model performance on CLEVR-Humans for randomly-initialized vectors as well. I will be running an experiment for that right now, and so we will report the scores as soon as they arrive and update the paper with these additional results accordingly. \n \n2) Comparison to other models\n\nFor the competitor models, we have used the original publicly available implementations as-is for both FiLM (https://github.com/ethanjperez/film) and PG+EE (https://github.com/facebookresearch/clevr-iep) with their default arguments, after closely following all the training procedures listed on the websites (image features extraction, question preprocessing etc.). \n\nFor figure 4 Right, while indeed there is some difference between the numbers you report and the numbers we have obtained by self-running the mentioned github version, when compared to performance of other approaches, the difference seems to be quite modest in relative terms. It may be a good idea to run FiLM and other models several times and measure both the average and variance in performance across these attempts. I think some variance between two different runs of the same model can be generally expected even given the equal settings, and this also depends on the model stability and robustness. In any case, as shown in figure 4, the gap in performance between CANs and other models is consistently significant throughout the training process, and it remains the case also for the results you mention.\n\nFor figure 4 Left, accuracy as a function of the training-set size, it may be the case that the discrepancy you claim arises from difference in the training time that has been allowed before collecting the results: with the aim of having a fair comparison, we have run all the models the same amount of time: until the validation score of all models has not shown further improvement for several consecutive iterations. 
It may be the case that allowing longer time for the training of FiLM and other models will lead to better scores, and we will be happy to mention them as well in the paper. Indeed, there are trade-offs between training time, dataset size and accuracy that are interesting to explore.\n\nAnother important aspect pertaining to the comparability of different models is their size - the number of parameters being used. We have noted that FiLM has a relatively high dimension of 4096 for the question-processing GRU hidden states, especially when compared to competing approaches that use sizes of 128-512 (our model has hidden dimension of 512, achieving similar results for 256 after slightly longer training time). Since the size of the weight matrix used in the GRU is quadratic as a function of the hidden state size, this leads to O(4096*4096) parameters which is, by and large, an order-of-magnitude higher than O(512*512) or O(256*256). In order to have a more fair comparison, it seems that it may be interesting to test FiLM with a smaller state size, comparably to other models.", "Congratulations on the impressive results. The proposed CAN model is quite interesting. We are also grateful that you represented our work with FiLM respectfully :)\n\nWe have a few questions:\n1) How does CAN perform when learning word embeddings from scratch, as competing models do, rather than using GloVE embeddings? It seems that using external knowledge via pre-trained embeddings would lead to unfair comparisons for:\n --CLEVR-Humans performance: CLEVR-Humans has many new words, which competing methods learn from scratch from a small training set or else use <UNK> tokens.\n --CLEVR training curves: External word knowledge frees a model from having to learn word meanings from scratch early in training, facilitating high performance early on.\n --CLEVR performance (to a lesser extent): GloVE might encode helpful notions, not learnable from CLEVR, on how to reason about various words.\n --It seems that results using GloVE should have an accompanying asterisk denoting so in tables and that GloVE’s use should be noted in the main paper body (not just appendix). Even better, would it be too difficult to re-run these CAN experiments with learned word embeddings?\n2) How did you run competitor models? There are discrepancies which make us hesitant to accept the self-run competitors results that the paper reports and compares against:\n --“Training curve” figure: Here are the FiLM validation accuracies for the reported model for first 11 epochs (plotted in (Perez et al. 2017)), higher than your paper plots: [0.5164, 0.7251, 0.802, 0.8327, 0.8568, 0.8887, 0.9063, 0.9243, 0.9328, 0.9346, 0.942]\n --“Accuracy / Dataset size (out of 700k)” figure: Trained on 100% of CLEVR, FiLM achieves 97.84% validation acc., as reported in our paper (Perez et al. 2017), compared to ~94% in your plot. We have also run incomplete FiLM experiments on 25% and 50% subsets of CLEVR and achieved higher numbers than those you report/plot. We can run these full experiments and comment numbers if that would be helpful.\n --“Accuracy / Dataset size (out of 700k)” figure: You plot that PG+EE achieves ~95.5% validation acc., lower than the reported 96.9% test acc. (a likely lower bound for validation acc.).\n3) Do you have evidence supporting the reason(s) behind CAN’s success? The paper only gives 2 pages of experiments and analysis, all quantitative and related to outperforming other models, rather than qualitative, ablative, or analytical. 
Thus, it’s difficult to tell which of the paper’s several proposed intuitions on why CAN works so well are valid. CAN consists of many components and aspects (control/read/write units, memory gates, compositionality, sequential reasoning, parameter-sharing, spatial attention, self-attention, etc.), and it’s unclear from overall acc. numbers which ones are crucial. In particular:\n --Is there evidence spatial attention allows CAN to reason about space more effectively? Intuitively, spatial attention should help, but (Johnson et al., 2016; Santoro et al., 2017) show that spatial attention models struggle with CLEVR/spatial reasoning; in the extreme, it’s possible that CAN performs well in spite of spatial attention. On the other hand, (Perez et al. 2017) show, with CLEVR acc. and activation visualizations, that an attention-free method (FiLM) can effectively reason about space. Can you run analytical experiments or visualizations to support that spatial attention actually helps CAN reason about space? Where is the model attending after each reasoning step? What is the CLEVR acc. after replacing spatial attention with another form of conditioning? What errors does CAN make and not make, especially to other models? If it would help, we can provide our reported FiLM model via GitHub.\n --It seems possible to build high-level aspects of CAN into simpler reasoning architectures and achieve similar performance. I.e., if spatial attention is key, adding spatial attention layers to a FiLM-based model, as in [1], might recover CAN’s performance. We would be interested to see evidence (such as ablations) showing that the entire CAN model is necessary for successful reasoning, as claimed.\n --How does question-attention vary across reasoning steps? Does CAN iteratively focus on the query’s various parts, as claimed? Or does CAN’s attention not hop around, entirely ignore some question parts, etc.? CAN’s attention lends itself to very neat analysis possibilities.\n --Do simpler questions actually use the memory gate to shorten the number of reasoning steps as expected?\n\nOverall, given the strong numerical results, CAN seems to potentially be a quite promising model, pending the authors’ response to these questions and some further analysis and evidence.\n\nKind Regards,\nEthan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville\n\nReferences:\n[1] Modulating early visual processing by language. Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, Aaron Courville. NIPS 2017." ]
[ -1, -1, -1, 7, -1, -1, -1, 7, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, -1, -1, 3, 4, -1, -1, -1, -1 ]
[ "SyKUVctlM", "BJoJaojNz", "S1AP0Njxz", "iclr_2018_S1Euwz-Rb", "Bkb9w8r4f", "iclr_2018_S1Euwz-Rb", "Sk0oVNYlM", "iclr_2018_S1Euwz-Rb", "iclr_2018_S1Euwz-Rb", "iclr_2018_S1Euwz-Rb", "H1gOE_mxM", "H1gOE_mxM", "iclr_2018_S1Euwz-Rb" ]
iclr_2018_BkXmYfbAZ
Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering
Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.
accepted-poster-papers
PROS: 1. Clear, interesting idea. 2. Largely convincing evaluation. 3. Good writing. CONS: 1. The model used in the evaluation is a Resnet-50 and could have been more convincing with a more SOTA model. 2. There is some concern about whether the comparison of results (fig 6c) is really apples to apples.
val
[ "r1q6Vx9lM", "rJbP7B_eG", "r1O8FN5lG", "B1HKW2pQG", "B1KE72qXG", "H1rSSptmf", "HJpV7ptmz", "SJyY-6FQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: This paper proposes a different approach to deep multi-task learning using “soft ordering.” Multi-task learning encourages the sharing of learned representations across tasks, thus using less parameters and tasks help transfer useful knowledge across. Thus enabling the reuse of universally learned representations and reuse them by assembling them in novel ways for new unseen tasks. The idea of “soft ordering” enforces the idea that there shall not be a rigid structure for all the tasks, but a soft structure would make the models more generalizable and modular. \n\nThe methods reviewed prior work which the authors refer to as “parallel order”, which assumed that subsequences of the feature hierarchy align across tasks and sharing between tasks occurs only at aligned depths whereas in this work the authors argue that this shouldn’t be the case. They authors then extend the approach to “permuted order” and finally present their proposed “soft ordering” approach. The authors argue that their proposed soft ordering approach increase the expressivity of the model while preserving the performance. \n\nThe “soft ordering” approach simply enable task specific selection of layers, scaled with a learned scaling factor, to be combined in which order to result for the best performance for each task. The authors evaluate their approach on MNIST, UCI, Omniglot and CelebA datasets and compare their approach to “parallel ordering” and “permuted ordering” and show the performance gain.\n\nPositives: \n- The paper is clearly written and easy to follow\n- The idea is novel and impactful if its evaluated properly and consistently \n- The authors did a great job summarizing prior work and motivating their approach\n\nNegatives: \n- Multi-class classification problem is one incarnation of Multi-Task Learning, there are other problems where the tasks are different (classification and localization) or auxiliary (depth detection for navigation). CelebA dataset could have been a good platform for testing different tasks, attribute classification and landmark detection. 
\n(TODO) I would recommend that the authors test their approach on such setting.\n- Figure 6 is a bit confusing, the authors do not explain why the “Permuted Order” performs worse than “Parallel Order”. Their assumptions and results as of this section should be consistent that soft order>permuted order>parallel order>single task. \n
(TODO) I would suggest that the authors follow up on this result, which would be beneficial for the reader.\n- Figure 4(a) and 5(b), the results shown on validation loss, how about testing error similar to Figure 6(a)? How about results for CelebA dataset, it could be useful to visualize them as was done for MNIST, Omniglot and UCL.
\n(TODO) I would suggest that the authors make the results consistent across all datasets and use the same metric such that its easy to compare.\n\nNotation and Typos:\n- Figure 2 is a bit confusing, how come the accuracy decreases with increasing number of training samples? Please clarify.\n1- If I assume that the Y-Axis is incorrectly labeled and it is Training Error instead, then the permuted order is doing worse than the parallel order.\n
2- If I assume that the X-Axis is incorrectly labeled and the numbering is reversed (start from max and ending at 0), then I think it would make sense.\n- Figure 4 is very small and not easy to read the text. Does single task mean average performance over the tasks? \n- In eq.(3) Choosing \\sigma_i for a task-specific permutation of the network is a bit confusing, since it could be thought of as a sigmoid function, I suggest using a different symbol.\n
Conclusion: I would suggest that the authors address the concerns mentioned above. Their approach and idea is very interesting and relevant, and addressing these suggestions will make the paper strong for publication.", "This paper proposes a new approach for multi-task learning. While previous approaches assumes the order of shared layers are the same between tasks, this paper assume the order can vary across tasks, and the (soft) order is learned during training. They show improved performance on a number of multi-task learning problems. \n\nMy primary concern about this paper is the lack of interpretation on permuting the layers. For example, in standard vision systems, low level filters \"V1\" learn edge detectors (gabor filters) and higher level filters learn angle detectors [1]. It is confusing why permuting these filters make sense. They accept different inputs (raw pixels vs edges). Moreover, if the network contains pooling layers, different locations of the pooling layer result in different shapes of the feature map, and the soft ordering strategy Eq. (7) does not work. \n\nIt makes sense that the more flexible model proposed by this paper performs better than previous models. The good aspect of this paper is that it has some performance improvements. But I still wonder the effect of permuting the layers. The paper also needs more clarifications in the writing. For example, in Section 3.3, how each s_(i, j, k) is sampled from S? The \"parallel ordering\" terminology also seems to be arbitrary...\n\n[1] Lee, Honglak, Chaitanya Ekanadham, and Andrew Y. Ng. \"Sparse deep belief net model for visual area V2.\" Advances in neural information processing systems. 2008.", "- The paper proposes to learn a soft ordering over a set of layers for multitask learning (MTL) i.e.\n at every step of the forward propagation, each task is free to choose its unique soft (`convex')\n combination of the outputs from all available layers. This idea is novel and interesting.\n- The learning of such soft combination is done jointly while learning the tasks and is not set\n manually cf. setting permutations of a fixed number of layer per task\n- The empirical evaluation is done on intuitively related, superficially unrelated, and a real world\n task. The first three results are on small datasets/tasks, O(10) feature dimensions, and number of\n tasks and O(1000) images; (i) distinguish two MNIST digits, (ii) 10 UCI tasks with feature sizes\n 4--30 and number of classes 2--10, (iii) 50 different character recognition on Omniglot dataset.\n The last task is real world -- 40 attribute classification on the CelebA face dataset of 200K\n images. While the first three tasks are smaller proof of concept, the last task could have been\n more convincing if near state-of-the-art methods were used. The authors use a Resnet-50 which is a\n smaller and lesser performing model, they do mention that benefits are expected to be \n complimentary to say larger model, but in general it becomes harder to improve strong models.\n While this does not significantly dilute the message, it would have made it much more convincing\n if results were given with stronger networks. \n- The results are otherwise convincing and clear improvements are shown with the proposed method.\n- The number of layers over which soft ordering was tested was fixed however. It would be\n interesting to see what would the method learn if the number of layers was explicitly set to be\n large and an identity layer was put as one of the option. 
In that case the soft ordering could\n actually learn the optimal depth as well, repeating identity layer beyond the option number of\n layers. \n \nOverall, the paper presents a novel idea, which is well motivated and clearly presented. The \nempirical validation, while being limited in some aspects, is largely convincing.", "Thanks for the additional experiments. The arguments and results make sense. Maybe soft ordering works for visual tasks because\n1) It is like the inception architecture, where the model can choose different filters by itself on each layer,\n2) It is like an RNN.\nI am convinced that soft ordering should work well.\n\nThe assumption on the same shape across all layers is still limited though.", "A new revision has been uploaded with changes based on reviewer feedback. For more information about the changes, please see the responses to each reviewer's comments.", "Reviewer Comment: \"My primary concern about this paper is the lack of interpretation on permuting the layers. For example, in standard vision systems, low level filters \"V1\" learn edge detectors (gabor filters) and higher level filters learn angle detectors [1]. It is confusing why permuting these filters make sense. They accept different inputs (raw pixels vs edges).\"\n\nResponse:\nWe have added a new analysis to clarify this effect in the Omniglot and CelebA experiments. The main takeaway is that there may indeed be some amount of useful shared feature hierarchy across tasks, but the functions (i.e., layers) used to produce this hierarchy may also be useful in other contexts. Soft ordering allows this hierarchy to be exploited, and these additional uses to be discovered. This hierarchy is especially salient in the case of convolutional layers, which explains why parallel ordering does better than permuted ordering in the Omniglot experiments. For more details, see Section 4.3: paragraph 3 and Figure 6a and b; and Section 4.4: paragraph 4 and Figure 7a and 7b. These sections and figures are new, i.e., they’ve been added to the paper in response to this suggestion.\n\nReviewer Comment: \"Moreover, if the network contains pooling layers, different locations of the pooling layer result in different shapes of the feature map, and the soft ordering strategy Eq. (7) does not work.\"\n\nResponse: In the Omniglot experiments, pooling layers are included, and Eq. (7) does work, because the pool size, kernel size and number of filters is the same for each layer. In this setting, through soft ordering, the same layers are effectively applied at different resolutions or scales. Extending Eq. (7) to the case of layers that produce feature maps of conflicting shapes is left to future work (Section 6).\n\nReviewer Comment: \"The paper also needs more clarifications in the writing. For example, in Section 3.3, how each s_(i, j, k) is sampled from S?\"\n\nReponse: s_(i, j, k) is retrieved from S via tensor indexing. The commas and parentheses are included for disambiguation, e.g., in the case where j = \\rho_i(k).\n\nReviewer Comment: \"The \"parallel ordering\" terminology also seems to be arbitrary...\"\n\nResponse: The term \"parallel ordering\" is intended to capture the commonalities shared by the methods reviewed in Section 2.1: the sets of sharable layers available at two distinct depths do not intersect. 
In that sense it isn’t arbitrary but chosen specifically to establish contrast with the other approaches.", "Reviewer Comment: \"Multi-class classification problem is one incarnation of Multi-Task Learning, there are other problems where the tasks are different (classification and localization) or auxiliary (depth detection for navigation). CelebA dataset could have been a good platform for testing different tasks, attribute classification and landmark detection.\n(TODO) I would recommend that the authors test their approach on such setting.\"\n\nResponse: As suggested, we have added setups that include landmark detection as an additional task in Section 4.4. This additional task yielded a marginal improvement in performance for soft ordering, while it yielded a degradation for parallel ordering, showing that soft ordering can more easily handle such different kinds of tasks.\n\nReviewer Comment: \"Figure 6 is a bit confusing, the authors do not explain why the “Permuted Order” performs worse than “Parallel Order”. Their assumptions and results as of this section should be consistent that soft order>permuted order>parallel order>single task.\n
(TODO) I would suggest that the authors follow up on this result, which would be beneficial for the reader.\"\n\nResponse: We have added a new analysis to clarify this effect in the Omniglot and CelebA experiments. The main takeaway is that there may indeed be some amount of useful shared hierarchy across tasks, but the functions (i.e., layers) used to produce this hierarchy may also be useful in other contexts. Soft ordering allows this hierarchy to be exploited, and these additional uses to be discovered. This hierarchy is especially salient in the case of convolutional layers, which explains why parallel ordering does better than permuted ordering in the Omniglot experiments. For more details, see Section 4.3: paragraph 3 and Figure 6a and b; and Section 4.4: paragraph 4 and Figure 7a and 7b. These sections and figures are new, i.e., they’ve been added to the paper in response to this suggestion.\n\nReviewer Comment: \"Figure 4(a) and 5(b), the results shown on validation loss, how about testing error similar to Figure 6(a)? How about results for CelebA dataset, it could be useful to visualize them as was done for MNIST, Omniglot and UCL.
\n(TODO) I would suggest that the authors make the results consistent across all datasets and use the same metric such that its easy to compare.\"\n\nResponse: As suggested, we have updated the figures to report test error, and have added a figure (Figure 7) to visualize the CelebA results.\n\nReviewer Comment: \"Figure 2 is a bit confusing, how come the accuracy decreases with increasing number of training samples? Please clarify.\"\n\nResponse: Figure 2 reports the abilities of parallel and permuted ordering models to fit tasks of random data. This is a test of model expressivity, so accuracy is reported on the training set, and as the size of the training set increases it becomes more difficult for the models to memorize it.\n\nReviewer Comment: \"Figure 4 is very small and not easy to read the text. Does single task mean average performance over the tasks?\"\n\nResponse: We have updated Figure 4 to make it easier to read.\nYes, single task means average performance over the tasks when trained individually.\n\nReviewer Comment: \"In eq.(3) Choosing \\sigma_i for a task-specific permutation of the network is a bit confusing, since it could be thought of as a sigmoid function, I suggest using a different symbol.\"\n\nResponse: Although \\sigma is the most standard notation for a permutation, we understand the potential confusion in this context. We have replaced \\sigma with \\rho to address this issue.", "Reviewer Comment: \"The authors use a Resnet-50 which is a smaller and lesser performing model, they do mention that benefits are expected to be complimentary to say larger model, but in general it becomes harder to improve strong models. While this does not significantly dilute the message, it would have made it much more convincing if results were given with stronger networks.\"\n\nResponse: The computational requirements of a larger model were prohibitive for the experiments in this paper, but we plan to use a stronger model for more complex applications in the future.\n\nReviewer Comment: \"It would be interesting to see what would the method learn if the number of layers was explicitly set to be large and an identity layer was put as one of the option. In that case the soft ordering could actually learn the optimal depth as well, repeating identity layer beyond the option number of layers.\"\n\nResponse: This is an interesting area of future work. So far, based on this suggestion, we have tested the idea of adding an identity layer in the CelebA domain. Including the identity layer does improve performance somewhat, and the identity layer sees increased usage in most contexts. The identity layer creates more consistency across contexts, which can make it easier for soft ordering layers to handle each context effectively. For more details, the results have been added to Section 4.4 in the newly uploaded version of the paper (revised according to these reviews).\n\nWe are currently working on the larger topic of developing methods that optimize the size and design of soft ordering models. In one such experiment, the set of modules to be used in the soft ordering framework is designed automatically. The modules can be heterogeneous and consist of multiple layers. Interestingly, in the final set of optimized modules, one module is always a single convolutional layer with no nonlinearity, suggesting that including such pass-through structure, similar to including the identity layer, is important to scaling performance. 
The initial results are indeed promising, but it is a big topic and will take more time to study." ]
[ 7, 6, 7, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkXmYfbAZ", "iclr_2018_BkXmYfbAZ", "iclr_2018_BkXmYfbAZ", "H1rSSptmf", "iclr_2018_BkXmYfbAZ", "rJbP7B_eG", "r1q6Vx9lM", "r1O8FN5lG" ]
iclr_2018_BJQRKzbA-
Hierarchical Representations for Efficient Architecture Search
We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.
accepted-poster-papers
PROS: 1. Overall, the paper is well-written, clear in its exposition and technically sound. 2. With some caveats, an independent team concluded that the results were "largely reproducible" 3. The key idea is a smart evolution scheme. It circumvents the traditional tradeoff between search space size and complexity of the found models. 4. The implementation seems technically sound. CONS: 1. The results were a bit over-stated (the authors promise to correct) 2. Could benefit from more comparison with other approaches (e.g. RL)
test
[ "BJNK-sdxz", "SkbQs_cgz", "HkM52nagG", "HJ9nV7amG", "r1mfNm6mG", "H1yO14fzz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "The fundamental contribution of the article is the explicit use of compositionality in the definition of the search space. Instead of merely defining an architecture as a Directed Acyclic Graph (DAG), with nodes corresponding to feature maps and edges to primitive operations, the approach in this paper introduces a hierarchy of architectures of this form. Each level of the hierarchy utilises the existing architectures in the preceding level as candidate operations to be applied in the edges of the DAG. As a result, this would allow the evolutionary search algorithm to design modules which might be then reused in different edges of the DAG corresponding to the final architecture, which is located at the top level in the hierarchy.\n\nManually designing novel neural architectures is a laborious, time-consuming process. Therefore, exploring new approaches to automatise this task is a problem of great relevance for the field. \n\nOverall, the paper is well-written, clear in its exposition and technically sound. While some hyperparameter and design choices could perhaps have been justified in greater detail, the paper is mostly self-contained and provides enough information to be reproducible. \n\nThe fundamental contribution of this article, when put into the context of the many recent publications on the topic of automatic neural architecture search, is the introduction of a hierarchy of architectures as a way to build the search space. Compared to existing work, this approach should emphasise modularity, making it easier for the evolutionary search algorithm to discover architectures that extensively reuse simpler blocks as part of the model. Exploiting compositionality in model design is not novel per se (e.g. [1,2]), but it is to the best of my knowledge the first explicit application of this idea in neural architecture search. \n\nNevertheless, while the idea behind the proposed approach is definitely interesting, I believe that the experimental results do not provide sufficiently compelling evidence that the resulting method substantially outperforms the non-hierarchical, flat representation of architectures used in other publications. In particular, the results highlighted in Figure 3 and Table 1 seem to indicate that the difference in performance between both paradigms is rather small. Moreover, the performance gap between the flat and hierarchical representations of the search space, as reported in Table 1, remains smaller than the performance gap between the best performing of the approaches proposed in this article and NASNet-A (Zoph et al., 2017), as reported in Tables 2 and 3.\n\nAnother concern I have is regarding the definition of the mutation operators in Section 3.1. While not explicitly stated, I assume that all sampling steps are performed uniformly at random (otherwise please clarify it). If that was indeed the case, there is a systematic asymmetry between the probability to add and remove an edge, making the former considerably more likely. This could bias the architectures towards fully-connected DAGs, as indeed seems to occur based on the motifs reported in Appendix A.\n\nFinally, while the main motivation behind neural architecture search is to automatise the design of new models, the approach here presented introduces a non-negligible number of hyperparameters that could potentially have a considerable impact and need to be selected somehow. 
This includes, for instance, the number of levels in the hierarchy (L), the number of motifs at each level in the hierarchy (M_l), the number of nodes in each graph at each level in the hierarchy (| G^{(l)} |), as well as the set of primitive operations. I believe the paper would be substantially strengthened if the authors explored how robust the resulting approach is with respect to perturbations of these hyperparameters, and/or provided users with a principled approach to select reasonable values.\n\nReferences:\n\n[1] Grosse, Roger, et al. \"Exploiting compositionality to explore a large space of model structures.\" UAI (2012).\n[2] Duvenaud, David, et al. \"Structure discovery in nonparametric regression through compositional kernel search.\" ICML (2013).\n", "This work fits well into a growing body of research concerning the encoding of network topologies and training of topology via evolution or RL. The experimentation and basic results are probably sufficient for acceptance, but to this reviewer, the paper spins the actual experiments and results a too strongly.\n\nThe biggest two nitpicks:\n\n> In our work we pursue an alternative approach: instead of restricting the search space directly, we allow the architectures to have flexible network topologies (arbitrary directed acyclic graphs)\n\nThis is a gross overstatement. The architectures considered in this paper are heavily restricted to be a stack of cells of uniform content interspersed with specifically and manually designed convolution, separable convolution, and pooling layers. Only the topology of the cells themselves are designed. The work is still great, but this misleading statement in the beginning of the paper left the rest of the paper with a dishonest aftertaste. As an exercise to the authors, count the hyperparameters used just to set up the learning problem in this paper and compare them to those used in describing the entire VGG-16 network. It seems fewer hyperparameters are needed to describe VGG-16, making this paper hardly an alternative to the \"[common solution] to restrict the search space to reduce complexity and increase efficiency of architecture search.\"\n\n> Table 1\n\nWhy is the second best method on CIFAR (“Hier. repr-n, random search (7000 samples)”) never tested on ImageNet? The omission is conspicuous. Just test it and report.\n\nSmaller nitpicks:\n\n> “New state of the art for evolutionary strategies on this task”\n\n“Evolutionary Strategies”, at least as used in Salimans 2017, has a specific connotation of estimating and then following a gradient using random perturbations which this paper does not do. It may be more clear to change this phrase to “evolutionary methods” or similar.\n\n> Our evolution algorithm is similar but more generic than the binary tournament selection (K = 2) used in a recent large-scale evolutionary method (Real et al., 2017).\n\nA K=5% tournament does not seem more generic than a binary K=2 tournament. They’re just different.", "The authors present a novel evolution scheme applied to neural network architecture search. It relies on defining an expressive search space for conducting optimization, with a constrained search space that leads to a lighter and more efficient algorithm. To balance these constraints, they grow sub-modules in a hierarchical way to form more and more complex cells. Hence, each level is limited to a small search space while the system as a whole converges toward a complex structure. 
To speed up the search, they focus on finding cells instead of an entire network. At evaluation time, they insert these cells between layers of a network comparable in size to known networks. They find complex cells that lead to state-of-the-art performance on the benchmark datasets CIFAR-10 and ImageNet. They also claim that their method reaches a new milestone in the performance of evolutionary search strategies.\n\nThe method proposed for a hierarchical representation for optimizing over neural network designs is well thought out and sound. It could lead to new insight into automating the design of neural networks for given problems. In addition, the authors present results that appear to be on par with the state-of-the-art for architecture search on the CIFAR-10 and ImageNet benchmark datasets. The paper presents good work and is well articulated. However, it could benefit from additional details and a deeper analysis of the results.\n\nThe key idea is a smart evolution scheme. It circumvents the traditional tradeoff between search space size and complexity of the found models. The method is also appealing for its use of some kind of emergence between two levels of hierarchy. In fact, it could be argued that nature tends to exploit the same phenomenon when building more and more complex molecules. That said, the paper could benefit from a more detailed analysis of the architectures found by the algorithm. Do the modules always become more complex as they jump from one level to another, or is there some kind of inter-level redundancy? Are the cells found interpretable? The authors should try to give their opinion about the design obtained.\n\nThe implementation seems technically sound. The experiments and results section shows that the authors are confident and the evaluation seems correct. However, the paragraphs on the architectures could be a bit clearer for the reader. The diagram could be more complete and better reflect the description. During evaluation, what is a step? A batch or an epoch or other?\n\nThe method seems relatively efficient as it took 36 hours to converge in a field traditionally considered heavy in terms of computation, but at the requirement of using 200 GPUs. It raises questions on the usability of the method for small labs. At some point, we will have to use insights from this search to stop early, when no improvement is expected. Also, the authors claim that their method consumes less computation time than reinforcement learning. This should be supported by some quantitative results.\n\nThe paper would greatly benefit from a deeper comparison with other techniques. For instance, it could describe the advantages over reinforcement learning in more detail. An important contribution is to show that a well-defined architecture representation could lead to efficient cells with a simple randomized search. This could have taken more space in the paper.\n\nI am also concerned about the computational efficiency of the results obtained with this method on current processors. Indeed, the randomness of the found cells could be less efficient in terms of computation than what we can get from a well-structured network designed by hand. Exploiting the structure of the GPUs (cache size, sequential accesses, etc.) allows one to get the best possible performance from the hardware at hand. Can the solution obtained with the optimization be run as efficiently? A short analysis of forward-pass time of optimized cells vs. popular models could be an interesting addition to the paper. 
This is a general comment over this kind of approach, but I think it should be addressed. \n", "Thank you for taking the effort to implement our algorithm. Your detailed comments are valuable for us to improve the paper further. We provide the clarifications below, which would hopefully be useful for reproducing our results.\n\n* “it would be impossible for the depthwise convolution operation to actually have a constant number of output filters as described.”\nWe did not require depthwise convolution operations to have a constant number of output filters (we’ll remove “of C channels” in the the 2nd bullet point in Sect. 2.3, which was a typo). This is not an issue because the way that we merge the nodes (depthwise concatenation) does not require the input nodes to have the same number of filters.\n\n* “separable convolution would no longer be valid as a primitive as it could be produced by a stacked depthwise and 1x1 convolution”\nIn our case, each convolutional operation comes with batch normalization (BN) and ReLU, hence the separable convolution\n3x3_depthwise_conv->1x1_conv->BN->ReLU\nis not exactly the same as the stack of \n3x3_depthwise_conv->BN->ReLU and 1x1_conv->BN->ReLU.\nWe also note that in general the algorithm remains valid if one primitive operation can be expressed in terms of the others.\n\n* “the initialization routine seems to imply that the identity operation is available at every level”\nNo, the identity operation is only available at the motif level: a motif is initialised as a chain of identity operations, and a cell is initialised as a chain of motifs (note that a chain of identity chains is also an identity chain).\n\n* “The probability distributions used for random sampling in mutation are not given”\nWe always use uniform distributions in all of our experiments. That being said, no hyperparameters were involved or tuned for mutation operations.\n\n* About the number of mutations during initialization.\nWe would like to point out that a large number of mutations is necessary to produce a diverse initial population of architectures. In our case we used 1000.", "We thank all reviewers for their comments. We will incorporate the suggested revisions into the new version of the paper. Our responses below focus on the major points.\n\n* About comparing computation time with RL-based approaches (reviewer 1)\nOur approach is faster than some published RL-based methods (e.g. 2000 GPU days in Zoph et al. (2017) vs 300 GPU days in our case). Having said that, we do not claim that evolution is more efficient than RL-based approaches in general.\n\n* Efficiency of architectures found using architecture search (reviewer 1)\nIn terms of the number of parameters, our ImageNet model is comparable to Inception-Resnet-v2 but larger than NASNet-A. Although identifying fast/compact architectures was not the primary focus of this work, an interesting future direction is to include FLOPS or wall clock time as a part of the evolution fitness, letting the architecture search algorithm to discover architectures that are both accurate and computationally efficient.\n\n* “During evaluation, what is a step?” (reviewer 1)\nAn evolution step refers to training and evaluation of a single architecture. We will make the definition more explicit in the revised paper.\n\n* “The authors should try to give their opinion about the design obtained” (reviewer 1)\nOur visualisation in appendix A shows that architecture search discovers a number of skip connections. 
For example, the cell contains a direct skip connection between input and output: nodes 1 and 5 are connected by Motif 4, which in turn contains a direct connection between input and output. The cell also contains several internal skip connections, through Motif 5 (which again comes with an input-to-output skip connection similar to Motif 4).\n\n* “the paper spins the actual experiments and results a too strongly.” (reviewers 2 and 3)\nThank you for the suggested improvements. We will revise our writing and soften the claims.\n\n* Missing ImageNet results for certain methods in Table 1 (reviewer 3)\nImageNet experiments under those two settings were still running at the time of the submission deadline. Their results are as follows:\nFlat repr-n, parameter-constrained, evolution (7000 samples): 21.2 / 5.8\nHier. repr-n, random search (7000 samples): 21.0 / 5.5.\nThe latter result is due to the fact that the evolution fitness computed on CIFAR is a proxy for ImageNet performance. Computationally efficient architecture search directly on ImageNet is an interesting direction for future research.\n\n* Mutation is biased towards adding edges (reviewer 2)\nIndeed, in our implementation we don’t ensure an equal probability of adding and deleting edges. We think inferring the mutation bias along with evolution is an interesting direction for future work.\n\n* Regarding a large number of hyperparameters specifying the architecture (reviewers 2 and 3)\nWe note that some hyperparameters can be adaptively tuned by evolution. Namely, M_l and |G^{(l)}| affect only the upper bounds on effective hyperparameters, since the algorithm may learn to not use a particular motif (hence the effective number of motifs becomes smaller than M_l), or to shortcut two nodes using an identity op (hence the effective number of nodes becomes smaller than |G^{(l)}|). Both behaviors have been empirically observed in our visualization (see Figure 5 & Figure 10 in Appendix A).", "We implemented the deep neural network representation described in this paper as a part of the ICLR 2018 Reproducibility challenge and performed small-scale testing of the representation on the CIFAR-10 benchmark utilizing the described search methods.\nOur implementation of the hierarchical encoding of a deep convolutional network was written in Python utilizing Keras with a TensorFlow backend. In the process of writing this implementation, we noticed several key omissions. We presumed that “depthwise” and “separable” convolutions refer to the definition in [Chollet 2017]. In this case, it would be impossible for the depthwise convolution operation to actually have a constant number of output filters as described. Furthermore, separable convolution would no longer be valid as a primitive as it could be produced by a stacked depthwise and 1x1 convolution. As such our implementation replaced depthwise with “standard” convolution. The described merging using depthwise concatenation requires padding the pooling operations, which is both unaddressed and contradicts the traditional use of pooling layers. Additionally, the identity operation is described as a primitive, but the initialization routine seems to imply that the identity operation is available at every level.\nWe also encountered several reproducibility issues in implementing the described evolutionary algorithm. The probability distributions used for random sampling in mutation are not given. We set all of them as uniform, but this biases the mutation method towards increasing complexity. 
This causes the number of random mutations per architecture in initialization to become an important but unknown parameter. We checked the number of parameters produced by generating 300 random hierarchical and flat architectures, first with 50 mutations each, then with 100 mutations each. The networks were assembled into the “small” CIFAR-10 architecture to check parameter numbers. The hierarchical architectures had 46 potential edges to mutate while the flat architecture had 55. The results of this showed that the flat architecture produced networks with 1.033 ± .574 M parameters after 50 mutations and 1.784 ± .889 M parameters after 100. The hierarchical architecture produced networks with .279 ± .168 M parameters after 50 mutations and .478 ± .280 M parameters after 100. These results show that random mutation does create a diverse initial population, but the complexity of that population is proportional to the number of mutations.\nDespite all of the above mentioned issues, we were able to create a working implementation of the described system based solely off of the paper and so this submission must be given due credit as largely reproducible. Our small scale results do, however, indicate a few potential issues. We found a top validation fitness for random search on the hierarchical representation to be .73 with .42 M parameters and the top validation fitness for the flat representation to be .79 with 1.03 M parameters, both drawn from populations of 50. This is roughly in line with what’s shown in the figures at the top of section 4.2 of the submission, although the flat representation has far more parameters than any of the networks shown. The cause for this is likely due to above described omissions in the mutation and initiation routines. \nThe key obstacles to reproducing the results of this submission were the computational costs. The paper did clearly describe the costs of the experiments, but did not provide baseline results that could be replicated cheaply. The cheapest-to-compute reported results were the CIFAR-10 errors for randomly sampled architectures. Replicating these, however, is useless for evaluating the representation schemes in general or the search strategies. We did attempt this reproduction for about half the training time, producing inconclusive results of test accuracies of .79 for the hierarchical representation and .80 for the flat representation. Our recommendation, not just for these authors but for topology learning papers in general, is to augment the normal large scale benchmark-breaking experiments with mass small scale experiments. Ideally, an experiment could be run on a single GPU in one day. For this submission, this could be achieved by limiting training steps, evolution steps, or testing on an easier benchmark, like MNIST. The goal of these mass small scale experiments would be two fold: publishing results which are accessible for replication to a much larger population as well as conducting enough trials to demonstrate the statistical significance of the improvements shown by the paper’s novel methods. This would address a significant weak point of this paper, the indeterminate significance of the difference in performance between the flat and hierarchical representations.\n\nReferences:\n\nFrançois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2017" ]
[ 6, 6, 8, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_BJQRKzbA-", "iclr_2018_BJQRKzbA-", "iclr_2018_BJQRKzbA-", "H1yO14fzz", "iclr_2018_BJQRKzbA-", "iclr_2018_BJQRKzbA-" ]
iclr_2018_ryTp3f-0-
Reinforcement Learning on Web Interfaces using Workflow-Guided Exploration
Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates. This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a single mistake can ruin the entire sequence of actions. A common remedy is to "warm-start" the agent by pre-training it to mimic expert demonstrations, but this is prone to overfitting. Instead, we propose to constrain exploration using demonstrations. From each demonstration, we induce high-level "workflows" which constrain the allowable actions at each time step to be similar to those in the demonstration (e.g., "Step 1: click on a textbox; Step 2: enter some text"). Our exploration policy then learns to identify successful workflows and samples actions that satisfy these workflows. Workflows prune out bad exploration directions and accelerate the agent’s ability to discover rewards. We use our approach to train a novel neural policy designed to handle the semi-structured nature of websites, and evaluate on a suite of web tasks, including the recent World of Bits benchmark. We achieve new state-of-the-art results, and show that workflow-guided exploration improves sample efficiency over behavioral cloning by more than 100x.
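As a concrete illustration of the workflow idea described in this abstract (constraining each step of exploration to actions similar to the corresponding demonstration step), here is a small, hypothetical sketch; it is not the paper's code, and the constraint format and helper names are ours.

import random

demo_workflow = [
    {"action": "click", "target_tag": "input"},   # Step 1: click on a textbox
    {"action": "type"},                           # Step 2: enter some text
    {"action": "click", "target_tag": "button"},  # Step 3: press a button
]

def satisfies(action, step):
    # an action satisfies a workflow step if it matches every listed constraint
    return all(action.get(k) == v for k, v in step.items())

def sample_episode(candidate_actions, workflow):
    episode = []
    for step in workflow:
        allowed = [a for a in candidate_actions if satisfies(a, step)]
        if not allowed:              # the workflow cannot be followed on this page
            return None
        episode.append(random.choice(allowed))
    return episode

candidates = [{"action": "click", "target_tag": "input", "id": 1},
              {"action": "click", "target_tag": "button", "id": 2},
              {"action": "type", "text": "abc"}]
print(sample_episode(candidates, demo_workflow))

Sampling only workflow-satisfying actions prunes most of the exploration space, which is how reward-earning episodes become far more likely than under unconstrained exploration.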
accepted-poster-papers
PROS: 1. well-written and clear 2. added extra comparison to dagger which shows success 3. SOTA results on open ai benchmark problem and comparison to relevant related work (Shi 2017) 4. practical applications 5. created new dataset to test harder aspects of the problem CONS: 1. the algorithmic novelty is somewhat limited 2. some indication of scalability to real-world tasks is provided but it is limited
train
[ "Hy6qBvbGM", "BJ448noJM", "H1asng9lG", "Syye3027M", "ByyXe3hQz", "HJ9ltkOQz", "S1KEPmUXM", "HkelwaS7M", "ryFlSzIMG", "S1WASuMbf", "HyInEOM-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "Summary:\n\nThe authors propose a method to make exploration in really sparse reward tasks more efficient. They propose a method called Workflow Guided Exploration (WGE) which is learnt from demonstrations but is environment agnostic. Episodes are generated by first turning demonstrations to a workflow lattice. This lattice encodes actions which are in some sense similar to those in the demonstration. By rolling out episodes which are randomly sampled from this set of similar actions for each encountered state, it is claimed that other methods like Behavor Cloning + RL (BC-then-RL) can be outperformed in terms of number of sample complexity since high reward episodes can be sampled with much higher probability.\n\nA novel NN architecture (DOMNet) is also presented which can embed structured documents like HTML webpages.\n\nComments:\n\n- The paper is well-written and relevant literature is cited and discussed.\n- My main concern is that while imitation learning and inverse reinforcement learning are mentioned and discussed in related work section as classes of algorithms for incorporating prior information there is no baseline experiment using either of these methods. Note that the work of Ross and Bagnell, 2010, 2011 (cited in the paper) establish theoretically that Behavior Cloning does not work in such situations due to the non-iid data generation process in such sequential decision-making settings (the mistakes grow quadratically in the length of the horizon). Their proposed algorithm DAgger fixes this (the mistakes by the policy are linear in the horizon length) by using an iterative procedure where the learnt policy from the previous iteration is executed and expert demonstrations on the visited states are recorded, the new data thus generated is added to the previous data and a new policy retrained. Dagger and related methods like Aggrevate provide sample-efficient ways of exploring the environment near where the initial demonstrations were given. WGE is aiming to do the same: explore near demonstration states.\n- The problem with putting in the replay buffer only episodes which yield high reward is that extrapolation will inevitably lead the learnt policy towards parts of the state space where there is actually low reward but since no support is present the policy makes such mistakes. \n- Therefore would be good to have Dagger or a similar imitation learning algorithm be used as a baseline in the experiments.\n- Similar concerns with IRL methods not being used as baselines.\n\nUpdate: Review score updated after discussion with authors below. \n", "SUMMARY\n\nThe paper deals with the problem of training RL algorithms from demonstration and applying them to various web interfaces such as booking flights. Specifically, it is applied to the Mini world of Bids benchmark (http://alpha.openai.com/miniwob/).\n\nThe difference from existing work is that rather than training an agent to directly mimic the demonstrations, it uses demonstrations to constrain exploration. By pruning away bad exploration directions. \n\nThe idea is to build a lattice of workflows from demonstration and randomly sample sequence of actions from this lattice that satisfy the current goal. Use the sequences of actions to sample trajectories and use the trajectories to learn the RL policy.\n\n\n\nCOMMENTS\n\n\nIn effect, the workflow sequences provide more generalization than simply mimicking, but It not obvious, why they don’t run into overfitting problems. 
However, experimentally the paper performs better than the previous approach.\n\nThere is a large literature on learning from demonstrations that the authors could compare with, or explain why their work is different. \n\nIn addition, they make general comparisons to the RL literature, such as hierarchical RL, rather than more concrete comparisons with the problem at hand (learning from demonstrations). \n\nThe paper is not self-contained. For example, what does DOM stand for? \n\n\nIn the results of Table 1 and Figure 3, why do more steps mean success?\n\nIn equation 4 there seems to exist an environment model. Why do we need to use this whole approach in the paper then? Couldn’t we just do policy iteration?\n", "This paper introduces a new exploration policy for Reinforcement Learning for agents on the web called \"Workflow Guided Exploration\". Workflows are defined through a DSL unique to the domain.\n\nThe paper is clear, very well written, and well-motivated. Exploration is still a challenging problem for RL. The workflows remind me of options, though in this paper they appear to be hand-crafted. In that sense, I wonder if this has been done before in another domain. The results suggest that WGE sometimes helps but not consistently. While the experiments show that DOMNet improves over Shi et al., that could be explained as not having to train on raw pixels or not enough episodes.", "We have updated our submission to reflect the comments and suggestions by the reviewers. In particular, in the Discussion section, we have expanded the comparison to related work on learning from demonstrations. Additionally, we have highlighted the important results of our experiments and clarified a few confusing terms.", "We have implemented DAgger with oracle expert labels on a total of 24 episodes (more than twice the number of demonstrations for WGE). DAgger successfully solves the click-checkboxes task, which is expected since BC+RL also solves the task. However, on the harder click-checkboxes-large task where WGE got an 84% success rate, DAgger gets a 0% success rate even when we run RL on top afterward. Increasing the limit on the number of expert labels to 100 episodes (10 times the demonstrations for WGE) increases the success rate of DAgger to 50%, which is still lower than WGE.\n\nIn our experiments, behavioral cloning (BC) suffers from two problems: compounding errors and data sparsity (from learning from so few demonstrations). DAgger addresses the compounding errors problem, but does not address the data sparsity problem. We empirically observe that the neural policy typically requires over 1000 reward-earning episodes to successfully learn the harder tasks. This suggests that BC is failing mainly due to data sparsity (because we have few demonstrations available in our setting). This also explains why DAgger fails to learn the harder tasks, even with 100 demonstrations. In contrast, WGE succeeds in most harder tasks using only 10 demonstrations.", "But there is a 'schedule' of invoking the expert in the training process of DAgger and related methods, where the expert invocation decays over iterations, and in practice, across all domains, not more than five iterations are usually needed. \n\nIs there a reason to believe that the total number of expert invocations in DAgger will be less than WGE for similar performance? 
This is exactly where an empirical comparison will be really useful and seal the usefulness of WGE.", "Our system initially receives a fixed set of human demonstrations and afterward receives no human supervision. In contrast, DAgger and related methods repeatedly query the expert policy (the human) at states reached by the learned policy during training.\n\nIn other words, in DAgger and related methods, the expert policy must be available throughout the entire training procedure, whereas in our work, we only roll out a fixed number of episodes from the expert policy at initialization, with no further access.", "\" In our setting, the system gets a small number of demonstrations and can interact with the environment, but does not have access to an expert policy, so these methods cannot be directly applied\" --Aren't the demonstrations coming from a human? Then isn't the human the expert in this case?\n\n\n\n ", "We would like to thank the reviewer for the feedback!\n\nThe reviewer suggested further comparisons with inverse reinforcement learning (IRL) and DAgger-based methods (e.g., DAgger, AggreVaTe). In our paper revision, we will address the critical differences between our setting and the settings of these methods, which are summarized below:\n\n- In IRL, the system does not receive rewards from the environment and instead extracts a reward function from demonstrations. In our setting, the system already observes the true reward from the environment, so applying IRL would be redundant. Furthermore, IRL would struggle to learn a good reward function from such a small number of demonstrations (e.g., 3-10), which we have in our setting.\n\n- DAgger-based methods require access to an expert policy, which is iteratively queried to augment the training data. In our setting, the system gets a small number of demonstrations and can interact with the environment, but does not have access to an expert policy, so these methods cannot be directly applied. In addition, while DAgger-based methods do indeed provide an alternative way to explore around a neighborhood of the demonstrations, their goal is different from ours: DAgger addresses compounding errors, while our work addresses finding sparse reward.\n\nFinally, we want to clarify the concern that only high reward episodes are placed in the buffer. The neural policy updates both off-policy from the buffer and on-policy during roll-outs. If the neural policy begins to make mistakes, it will be penalized by receiving low reward during the on-policy rollouts, which will correct these mistakes.", "We would like to thank the reviewer for their detailed and thoughtful feedback.\n\nIn our revision, we will significantly expand our comparison to related work on learning from demonstrations. The summary is provided below:\n\nPrevious work on learning from demonstration fall into two broad categories:\n 1) Using demonstrations to directly update policy parameters (e.g., behavioral cloning, IRL, etc.).\n 2) Using demonstrations to guide or constrain exploration.\n\nOur method belongs to category (2). The core idea is to explore trajectories that lie in a \"neighborhood\" surrounding an expert demonstration. In our work, the neighborhood is defined by a workflow, which only permits action sequences analogous to the demonstrated actions.\n\nOther methods in category (2) also explore the neighborhood surrounding a demonstration, using shaping rewards (Brys et al. 2015, Hussein et al. 2017) or off-policy sampling (Levine & Koltun, 2013). 
A key difference is that we define our neighborhood in terms of action-similarity, rather than state-similarity. This distinction is particularly important for the web tasks we study: we can easily and intuitively describe how two actions are analogous (e.g., \"they both type a username into a textbox\"), while it is harder to decide if two web page states are analogous (e.g., the email inboxes of two different users will have completely different emails, but they could still be analogous, depending on the task.)\n\nRegarding overfitting: our workflow policy does not overfit to demonstrations because the demonstrations are merely used to induce a workflow lattice -- the actual parameters of the workflow policy are learned through trial-and-error reinforcement learning, for which there is infinite data.\n\nThe workflow policy maintains a distribution over possible workflows. Some workflows define a very small neighborhood of trajectories surrounding an expert demonstration (tight), while others impose almost no constraints (loose). As the workflow policy is trained, it converges to the tightest workflow that can successfully generalize.\n\nOur final neural policy also does not overfit, because it is trained on a replay buffer of successful episodes discovered by the workflow policy, which is much larger than the original set of demonstrations.\n\nWe would like to also quickly address the other questions, which we will be sure to clarify in the paper:\n - In Equation 4, we do not have access to the environment model p(s_t | s_{t-1}, a_{t-1}). We merely state that our \n sampling procedure produces episodes e following the distribution p(e | g) of Equation 4, which we do not \n compute.\n - DOM stands for Document Object Model, the standard tree-based representation of a web page.\n - The \"Steps\" column in Table 1 is the number of steps needed to complete the task under the optimal behavior \n (e.g., by a human expert). It is a rough measure of task difficulty and is not related to model performance.\n", "We would like to thank the reviewer for their helpful feedback!\n\nThe key distinction between our work and most existing hierarchical RL approaches (e.g., options, MAXQ) is that our hierarchical structures (workflows) are inferred from demonstrations, rather than manually crafted or learned from scratch.\n\nWe try to keep the constraint language for describing workflow steps as minimal and general as possible. The main part of the language is just an element selector (elementSet) which selects either (1) things that share a specified property, or (2) things that align spatially, both of which are applicable in many typical RL domains (game playing, robot navigation, etc.)\n\nIn our experiments (Figure 3), WGE (red) consistently performs equally or better than behavioral cloning (green). There are some easy tasks in the benchmark (e.g., click-button), where both WGE and the baselines have perfect performance. But in more difficult tasks (Table 1), WGE greatly improves over baselines by an average of 42% absolute success rate.\n\nRegarding the comparison with Shi17:\n - Shi17 used ~200 demonstrations per task, whereas we achieve superior performance with only 3-10.\n - In addition to pixel-level data, the model of Shi17 actually also uses the DOM tree to compute text alignment \n features. Our DOMNet uses the DOM structure more explicitly, which indeed produces better performance.\n - Our DOMNet+BC+RL baseline separates the contribution of DOMNet from Workflow-Guided Exploration. 
Table 1 \n and Figure 3 illustrate that both are important.\n" ]
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryTp3f-0-", "iclr_2018_ryTp3f-0-", "iclr_2018_ryTp3f-0-", "iclr_2018_ryTp3f-0-", "HJ9ltkOQz", "S1KEPmUXM", "HkelwaS7M", "ryFlSzIMG", "Hy6qBvbGM", "BJ448noJM", "H1asng9lG" ]
iclr_2018_Hksj2WWAW
Combining Symbolic Expressions and Black-box Function Evaluations in Neural Programs
Neural programming involves training neural networks to learn programs, mathematics, or logic from data. Previous works have failed to achieve good generalization performance, especially on problems and programs with high complexity or on large domains. This is because they mostly rely either on black-box function evaluations that do not capture the structure of the program, or on detailed execution traces that are expensive to obtain, and hence the training data has poor coverage of the domain under consideration. We present a novel framework that utilizes black-box function evaluations, in conjunction with symbolic expressions that define relationships between the given functions. We employ tree LSTMs to incorporate the structure of the symbolic expression trees. We use tree encoding for numbers present in function evaluation data, based on their decimal representation. We present an evaluation benchmark for this task to demonstrate our proposed model combines symbolic reasoning and function evaluation in a fruitful manner, obtaining high accuracies in our experiments. Our framework generalizes significantly better to expressions of higher depth and is able to fill partial equations with valid completions.
accepted-poster-papers
Learn to complete an equation by filling the blank with a missing function or numeral, and also to evaluate an expression. Along the way learn to determine if an identity holds (e.g. sin^2(x) + cos^2(x) = 1). They use a TreeNN with a separate node for each expression in the grammar. PROS: 1. They've put together a new dataset of equational expressions for learning to complete an equation by filling in the blank of a missing function (or value) and function evaluation. They've done this in a nice way with a generator and will release it. 2. They've got two interesting ideas here and they seem to work. First, they train the network to jointly learn to manipulate symbols and to evaluate them. This helps ground the symbolic manipulations in the validity of their evaluations. They do this by using a common tree net for both processes with both a symbol node and a number node. They train on identities (sin^2(x) + cos^2(x) = 1) and also on ground expressions (+(1,2) = 3). The second idea is to help the system learn the interpretation map for the numerals, like the "2" in "cos^2(x)", with the actual number 2. They do this by including equations which relate decimals with their base 10 expansion. For example 2.5 = 2*10^0 + 5*10^-1. The "2.5" is (I think) treated as a number and handled by the number node in the network. The RHS leaves are treated as symbols and handled by the symbol node of the network. This lets them learn to represent decimals using just the 10 digits in their grammar and ties the interpretation of the symbols to what is required for a correct evaluation (in terms of their model this means "aligning" the node for symbol with the node for number). 3. Results are good over what seem to us reasonable baselines. CONS: 1. The architecture isn't new and the idea of representing expression trees in a hierarchical network isn't new either. 2. The writing, to me, is a bit unclear in places and I think they still have some work to do to follow the reviewers' advice in this area. I really wrestled with this one, and I appreciate the arguments that say it's not novel enough, but I feel that there is something interesting in here and if the authors do a clean-up before final submission it will be ok.
train
[ "SJg6Lqp7f", "HkU17bFrM", "rk_xMk8ef", "SyFw0BPgz", "BywjthjgG", "H1fOMFa7z", "r1gRHOpmz", "BklhE_TXf", "HkG84_p7G" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "\n**Reviewer 1:**\n\nPage 8, Table 3: fixed typo for the number of equations in the test set for depth 1 column\n\nPage 7, parag 1: we have included the explanation about the Sympy baseline as suggested by both reviewers 1 and 2. \n\nReferences: Added Piech et al 2015 \n\n**Reviewer 2:**\n\npage 6, paragraph 2 described that the input parse of the equation is inherent within each equation.\n\nPage 6, last subsection: We have modified the model to correct Figure 4(c) as explained in this sections. This also indicated how our model handles both function evaluation and symbolic expressions, which was a question raised by reviewer 3. In this modification we have replaced the logistic loss, with the MSE loss for function evaluation expressions. We have also added an autoencoder design for embedding numbers in order to have a better understanding of the learned representations as suggested by reviewer 2.\n\nPage 7, parag 1: we have included the explanation about the Sympy baseline as suggested by both reviewers 1 and 2. \n\nPage 7, implementation details: We have included the search space for tuning as suggested by reviewers 2 and 3\n\n**Reviewer 3:**\n\nPage 2, parag 1: In order to address reviewer 3’s comment we have added citations for Theano and tensorFlow.\n\nPage 2, Parag 3: emphasized the importance of the decimal expansion tree as suggested by reviewer 3.\n\nPage 3, equation 5: added the precision to address reviewer 3’s comment\n\nPage 3, last parag: fixed typos mentioned by reviewer 3\n\nPage 5, Parag 2: Included the details for data generation as suggested by reviewer 3\n\nPage 5, parag 6: removed the high probability claim as suggested by reviewer 3\n\nPage 5, parag 6: Added text that explains Table 2,\n\nPage 6: added Table 2, including examples of generated equations as suggested by reviewer 3\n\nPage 6, last subsection: We have modified the model to correct Figure 4(c) as explained in this sections. This also indicated how our model handles both function evaluation and symbolic expressions, which was a question raised by reviewer 3. In this modification we have replaced the logistic loss, with the MSE loss for function evaluation expressions. We have also added an autoencoder design for embedding numbers in order to have a better understanding of the learned representations as suggested by reviewer 2.\n\nPage 7, paragraph 3: We have included reviewer 3’s comment about vanishing and exploding gradients. \n\nPage 7, implementation details: We have included the search space for tuning as suggested by reviewers 2 and 3\n\nPage 9, Figure 4(c): We have updated this curve with the MSE training results to address reviewer 3’s comments about the plateau in the curve in the original submission\n\nPage 10, Conclusions: We have removed the claim about the state-of-the-art neural models and added addressable differentiable memory as suggested by reviewer 3.\n\nReferences: Added references for Theano, TensorFlow and Adam optimizer.\n\n**Other modifications:**\n\nPage 5: modified Figure 2(b) to incorporate the autoencoder (W_num and W_num^{-1}) and indicate that we are performing MSE training for function evaluation expressions. 
This change resulted in a correction to Figure 4(c) such that the values do not plateau.\n\n**update after the rebuttal:**\nPage 1: title changed\nPage 3, Paragraph 1: Added clarification about NPI, pointed out by reviewer 3.\nPage 9, Paragraph 2: Added description about Figure 4(b).\nPage 10, Figure 4(b) is updated with the MSE training results.\n", "Thank you for taking the time to go through our response. We are pleased to know that we were able to address your concerns and make clarifications. We have uploaded a revision in which we have included the clarifications regarding NPI. Page 7, Paragraph 3 includes a clarification about the comparison with EqNets. Thank you for pointing this out. We have updated Figure 4(b) with the results of MSE training. The output probabilities are now replaced with the prediction squared error as shown in the revised version. We have written the complete list of revisions in the **update after rebuttal** section of the *List of Revisions* post.", "Summary\n\nThis paper presents a dataset of mathematical equations and applies TreeLSTMs to two tasks: verifying and completing mathematical equations. For these tasks, TreeLSTMs outperform TreeNNs and RNNs. In my opinion, the main contribution of this paper is this potentially useful dataset, as well as an interesting way of representing fixed-precision floats. However, the application of TreeNNs and TreeLSTMs is rather straight-forward, so in my (subjective) view there are only a few insights salvageable for the ICLR community and compared to Allamanis et al. (2017) this paper is a rather incremental extension.\n\nStrengths\n\nThe authors present a new dataset for mathematical identities. The method for generating additional correct identities could be useful for future research in this area.\nI find the representation of fixed-precision floats presented in this paper intriguing. I believe this contribution should be emphasized more as it allows the model to generalize to unseen numbers and I am wondering whether the authors see some wider application of this representation for neural programming models.\nI liked the categorization of the related work.\n\nWeaknesses\n\np2: It is mentioned that the framework is the first to combine symbolic expressions with black-box function evaluations, but I would argue that Neural Programmer-Interpreters (NPI; Reed & De Freitas) are already doing that (see Fig 1 in that paper where the execution trace is a symbolic expression and some expressions \"Act(LEFT)\" are black-box function applications directly changing the image).\nThe differences to Allamanis et al. (2017) are not worked out well. For instance, the authors use the TreeNN model from that paper as a baseline but the EqNet model is not mentioned at all. The obvious question is whether EqNets can be applied to the two tasks (verifying and completing mathematical equations) and if so why this has not been done.\nThe contribution regarding black box function application is unclear to me. On page 6, it is unclear to me what \"handles […] function evaluation expressions\". As far as I understand, the TreeLSTM learns the return value of function evaluation expressions in order to predict equality of equations, but this should be clarified.\nI find the connection of the proposed model and task to \"neural programming\" weak. For instance, as far as I understand there is no support for stateful programs. Furthermore, it would be interesting to hear how this work can be applied to existing programming languages such as Haskell. 
What are the limitations of the architecture? Could it learn to identify equality of two lists in Haskell?\np6: The paragraph on baseline models is rather uninformative. TreeLSTMs have been shown to outperform Tree NN's in various prior work. The statement that \"LSTM cell […] helps the model to have a better understanding of the underlying functions in the domain\" is vague. LSTM cells compared to fully-connected layers in Tree NNs ameliorate vanishing and exploding gradients along paths in the tree. Furthermore, I would like to see a qualitative analysis of the reasoning capabilities that are mentioned here. Did you observe any systematic differences in the ~4% of equations where the TreeLSTM fails to generalize (Table 3; first column).\n\nMinor Comments\n\nAbstract: \"Our framework generalizes significantly better\" I think it would be good to already mention in comparison to what this statement is.\np1: \"aim to solve tasks such as learn mathematical\" -> \"aim to solve tasks such as learning mathematical\"\np2: You could add a citation for Theano, Tensorflow and Mxnet.\np2: Could you elaborate how equation completion is used in Mathematical Q&A?\np3: Could you expand on \"mathematical equation verification and completion […] has broader applicability\" by maybe giving some concrete examples.\np3 Eq. 5: What precision do you consider? Two digits?\np3: \"division because that they can\" -> \"division because they can\"\np4 Fig. 1: Is there a reason 1 is represented as 10^0 here? Do you need the distinction between 1 (the integer) and 1.0 (the float)?\np5: \"we include set of changes\" -> \"we include the set of changes\"\np5: In my view there is enough space to move appendix A to section 2. In addition, it would be great to see more examples of generated identities at this stage (including negative ones).\np5: \"We generate all possible equations (with high probability)\" – what is probabilistic about this?\np5: I don't understand why function evaluation results in identities of depth 2 and 3. Is it both or one of them?\np6: The modules \"symbol\" and \"number\" are not shown in the figure. I assume they refer to projections using Wsymb and Wnum?\np6: \"tree structures neural networks\" -> \"tree structured neural networks\"\np6: A reference for the ADAM optimizer should be added.\np6: Which method was used for optimizing these hyperparameters? If a grid search was used, what intervals were used?\np7: \"the superiority of Tree LSTM to Tree NN shows that is important to incorporate cells that have memory\" is not a novel insight.\np8: When you mention \"you give this set of equations to the models look at the top k predictions\" I assume you ranked the substituted equations by the probability that the respective model assigns to it?\np8: Do you have an intuition why prediction function evaluations for \"cos\" seem to plateau certain points? Furthermore, it would be interesting to see what effect the choice of non-linearity on the output of the TreeLSTM has on how accurately it can learn to evaluate functions. 
For instance, one could replace the tanh with cos and might expect that the model has now an easy time to learn to evaluate cos(x).\np8 Fig 4b; p9: Relating to the question regarding plateaus in the function evaluation: \"in Figure 4b […] the top prediction (0.28) is the correct value for tan with precision 2, but even other predictions are quite close\" – they are all the same and this bad, right?\np9: \"of the state-of-the-art neural reasoning systems\" is very broad and in my opinion misleading too. First, there are other reasoning tasks (machine reading/Q&A, Visual Q&A, knowledge base inference etc.) too and it is not obvious how ideas from this paper translate to these domains. Second, for other tasks TreeLSTMs are likely not state-of-the-art (see for example models on the SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/) .\np9: \"exploring recent neural models that explicitly use memory cells\" – I think what you mean is models with addressable differentiable memory.\n\n# Update after the rebuttal\nThank you for the in-depth response and clarifications. I am increasing my score by one point. I have looked at the revised paper and I strongly suggest that you add the clarifications and in particular comments regarding comparison to related work (NPI, EqNet etc) to the paper. Regarding Fig. 4b, I am still not sure why all scores are the same (0.9977) -- I assume this is not the desired behavior?", "SUMMARY \n\nThe model evaluates symbolic algebraic/trigonometric equalities for validity, with an output unit for validity level at the root of a tree of LSTM nodes feeding up to the root; the structure of the tree matches the parse tree of the input equation and the type of LSTM cell at each node matches the symbol at that node in the equation: there is a different cell type for each symbol. It is these cell types that are learned. The training data includes labeled true and false algebraic/trigonometric identities (stated over symbols for variables) as well as function-evaluation equalities such as \"tan(0.28) = 0.29\" and decimal-expansion equations like \"0.29 = 2*10^(-1) + 9*10^(-2)\". I believe continuous values like \"0.29\" in the preceding expressions are encoded as the literal value of a single unit (feeding into an embedding unit of type W_{num}), whereas the symbols proper (including digit numerals) are encoded as 1-hot vectors (feeding into an embedding unit of type W_{symb}).\nPerformance is at least 97% when testing on unseen expressions of the same depth (up to 4) as the training data. Performance when trained on 3 levels (among 1 - 4) and testing on generalization to the held-out level is at least 96% when level 2 is held out, at least 92% when level 4 is withheld. Performance degrades (even on symbolic identities) when the function-evaluation equalities are omitted, and degrades when LSTM cells are replaced by plain RNN cells. The largest degradation is when the tree structure is replaced (presumably) by a sequence structure.\nPerformance was also tested on a fill-in-the-blank test, where a symbol from a correct equation was removed and all possible replacements for that symbol with expressions of depth up to 2 were tested, then ranked by the resulting validity score from the model. 
From the graph it looks like an accuracy of about 95% was achieved for the 1-best substituted expression (accuracy was about 32% for a sequential LSTM).\n\nWEAKNESSES\n\n* The title is misleading; \"blackbox function evaluation\" does not suggest what is intended, which is training on function-evaluation equations. The actual work is more interesting than what the title suggests.\n* The biggest performance boost (roughly 15%) arises from use of the tree structure, which is given by an oracle (implemented in a symbolic expression parser, presumably): the network does not design its own example-dependent structure.\n* What does the sympy baseline mean in Table 2? We are only told that sympy is a \"symbolic solver\". Yet the sympy performance scores are in the 70-80% range. If the solver’s performance is that weak, why is it used during generation of training data to determine the validity of possible equations?\n* Given that this is a conference on \"learning representations\" it would have been nice to see at least a *little* examination of the learned representations. It would be easy to do some interesting tests. How well does the vector embedding for \"2*10^(-1) + 9*10^(-2)\" match the vector for the real value 0.29? W_{num} embeds a continuum of real values in R^d: what is this 1-dimensional embedding manifold like? How do the embeddings of different integers provided by W_{sym} relate to one another? My rating would have been higher had there been some analysis of the learned representations.\n* We are told only that the \"hidden dimension … varies\"; it would be nice if the text or results tables gave at least some idea of what magnitude of embedding dimension we’re talking about.\n\nSTRENGTHS\n\nThe weaknesses above notwithstanding, this is a very interesting piece of work with impressive results. \n* The number of functions learned, 28, is a quantum jump from previous studies using 8 or fewer functions.\n* It is good to see the power of training the same system to learn the semantics of functions from the equations they satisfy AND from the values they produce. \n* The inclusion of decimal-expansion equations for relating numeral embeddings to number embeddings is clever. \n* The general method used for randomly generating a non-negligible proportion of true equations is useful.\n* The evaluation of the model is thorough and clear.\n* In fact the exposition in the paper as a whole is very clear.", "This paper proposes a model that predicts the validity of a mathematical expression (containing trigonometric or elementary algebraic expressions) using a recursive neural network (TreeLSTM). The idea is to take the parse tree of the expression, which is converted to the recursive neural network architecture, where weights are tied to the function or symbol used at that node. Evaluation is performed on a dataset generated specifically for this paper.\n\nThe overall approach described in this paper is technically sound and there are probably some applications (for example in online education). However the novelty factor of this paper is fairly low — recursive neural nets have been applied to code/equations before in similar models. See, for example, “Learning program embeddings to propagate feedback on student code” by Piech et al, which propose a somewhat more complex model applied to abstract syntax trees of student written code.\n\nI’m also not completely sure what to make of the experimental results. 
One weird thing is that the performance does not seem to drop off for the models as depth grows. Another strange thing is that the accuracies reported do not seem to divide the reported test set sizes (see, e.g., the depth 1 row in Table 2). It would also be good to discuss the Sympy baseline a bit — being symbolic, my original impression was that it would be perfect all the time (if slow), but that doesn’t seem to be the case, so some explanation about what exactly was done here would help. For the extrapolation evaluation — evaluating on deeper expressions than were in the training set — I would have liked the authors to be more ambitious and see how deep they could go (given, say, up to depth 3 training equations). \n\n", "Typos: fixed\n\nTheano, … Citations: added\n\nUse of equation completion in Q/A: For example in a math Q/A system if someone wants to know the value of sin^2(\\theta)+cos^2(\\theta) we are able to answer that the result is 1. This is not possible for EqNets. \n\nConcrete example of the broader applicability of equation completion and equation verification. By this we mean that our network can always identify equivalence classes, since it validates the equality of two sides of the equations. Moreover, it enables equation completion which is not possible for the frameworks proposed by (Allamanis et al 2016) and (Zaremba et al 2014). For example we can verify that 2*x^2 and x^2+x^2 are equivalent which is the task studied by both Allamanis et al and Zaremba et al. Moreover we can validate correctness of classes that have never been seen in the training dataset since we are proposing a new application which is equation validation. EqNets and similar models rely on classification and therefore have difficulty generalizing to classes unseen in the training data, but we are able to validate identities whose equivalence class has never been observed in the training set. Moreover, Allamanis et. al. and Zaremba et al are not able to complete a given equation such as 2*x^2 = x^2 + __. Whereas our algorithm can do this enabling a broader applicability. \n\nPrecision: Precision is two. This is fixed in the paper as described in the revisions. \n\nDifference between 1.0 and 1: We are distinguishing this to make sure that the network learns that the vector representation of 1.0, inputed as a literal, is equivalent to the vector representation of 1, inputed as a one-hot encoding, and is equivalent to its decimal representation 10^0. We already have other equations such as x^0=1 and 1.0=10^0 in our training set. This is our novelty to understand the decimal encoding.\n\nAdd details of data generation back to the paper: Done, \n\nAdd examples of generated equations: This is added to the paper and can be seen in Table 2\n\nWhat is probabilistic about generating all possible equations: We removed the high probability claim\n\nDepth of Function evaluation equations: This generated both equations of depth 2 and 3. For example, sin(2.5)=0.6 is an equation of depth 2 and sin(-2.5)=-0.6 is of depth 3 (look at Figure 2(c)). \n\nSymb and Num modules not shown in Fig 1: Fig 1 shows the actual parse tree of the input equations and not the modules of the neural network. We will later define the w_sym and w_num module in the structure of the neural network depending on the terminal in each input equation.\n\nAdam citation: Added\n\nOptimization method: Grid search. 
This is added to the paper as explained in the revisions.\n\nthe superiority of Tree LSTM to Tree NN shows that is important to incorporate cells that have memory: Paper modified accordingly\n \nranked the substituted equations for equation completion: This is correct, we rank equations for equation completion using the probability that the network assigns to it at the softmax layer output.\n\nevaluations for \"cos\" seem to plateau: We have corrected this curve by changing the training loss of function evaluation equations to MSE and training an auto-encoder that encodes the value of the floating point numbers and then outputs a number for a given vector embedding. This is explained in the paper revision.\n\nAll the values of Figure 4(b) are close and this is bad: This was also resolved with the MSE training idea\n\nexploring recent neural models that explicitly use memory cells means models with addressable differentiable memory: Yes, thank you for pointing this out; we have corrected this in the paper\n\nof the state-of-the-art neural reasoning systems is too broad: This is fixed in the paper ", "We would like to thank the reviewer for the constructive feedback. Here is our response:\n\nNovelty: We would like to emphasize that the main contribution of the paper is combining high-level symbolic and function evaluation expressions, which none of the existing work has done. We are proposing a new framework for modeling mathematical equations. This framework includes defining new problems, equation validation and equation completion, as well as introducing a dataset generation method and a recursive neural network that combines these function evaluation and symbolic expressions. Indeed, treeLSTMs are not new; however, using them to combine both the symbolic and function evaluation expressions by incorporating different loss functions and terminal types is novel. \n\nI believe this contribution [decimal expansion of numbers] should be emphasized more: We have emphasized this contribution as explained in the revisions.\n\nNeural Programmer-Interpreters (NPI; Reed & De Freitas) are already combining symbolic expressions and function evaluations: We would like to point out that NPI is using execution traces which are hard to obtain compared to symbolic expressions. Symbolic expressions can be thought of as the computation graph of a program. Moreover, we do not ground our symbolic expressions at a specific node in the tree which is the case in Figure 1 of the NPI paper. Rather, we have high-level symbolic expressions that summarize the behavior of mathematical functions and apply to many groundings of each formula. These symbolic expressions are combined with input-output examples of the functions such as sin(-2.5)=-0.6. This not only helps the final model's accuracy, but also enables applications such as equation completion.\n\nComparison to EqNets: Thank you for pointing this out. We have made the distinction more clear in the baseline section as explained in the revisions. We would like to point out that EqNets are designed for finding equivalence classes in a symbolic dataset, whereas we are aiming to verify math identities. In our dataset of trigonometric identities and algebraic equations we have way too many equivalence classes. Moreover, in our setting with function evaluation expressions we have so many classes that it does not make sense to use the EqNet's approach to the problem. 
We argue that our framework is better since it allows equation completion, which is not possible in EqNets.\n\nWhat do we mean by handling both symbolic and function evaluation expressions: What we mean is that our network architecture accounts for symbolic terminals using one-hot encoding vectors input to a 1-layer neural network, whereas it accounts for floating point numbers for function evaluations through a 2-layer neural network that takes as its input the literal value of the floating point number. We have also added an auto-encoder and changed the loss function of the function evaluation training to MSE in order to improve the results of the function evaluation curve and resolve the plateau of values in Figure 4(c). These changes are explained in the revision description. Please take a look at the updated version of Figure 4(c) in the revised paper.\n\nApplication to neural programming: The connection to neural programs is the fact that our tree structure reflects the expression structure of a program. Applying this to Haskell-like and \nstateful programs would be a good avenue for future work.\n\nLSTM cell […] helps the model to have a better understanding of the underlying functions in the domain: We have changed this in the paper\n\nresponse continues in the next post...", "We would like to thank the reviewer for the constructive feedback. Here is our response:\n\n I believe continuous values like \"0.29\" in the preceding expressions are encoded as the literal value of a single unit (feeding into an embedding unit of type W_{num}), whereas the symbols proper (including digit numerals) are encoded as 1-hot vectors (feeding into an embedding unit of type W_{symb}). \nThis understanding is correct.\n\nMisleading title: We can change the title to “Combining Symbolic and Function Evaluation Expressions for Training Neural Programs” if the reviewer feels that this reflects better what we are doing.\n\nExample-dependent structure: If I understand the question correctly, the network’s structure is indeed example dependent and is indicated by the input equation. We have not used any symbolic expression parser to construct the equation parses. The neural network’s structure is dynamic and its structure depends on the parse of the input equation that comes naturally with it. We hypothesize that the input equation’s expression tree is indeed the best compositionality one can obtain as it represents the natural composition of the equations. In fact, if we had access to this clear composition tree in NLP tasks, the models would have been more accurate. Many programming languages have access to the tree structure of the program. Without the tree, the problem will be very challenging and we would investigate learning the structure in the future.\n\nSympy performance: \nWe used Sympy to check the correctness of the generated equations. If the correctness of an equation is verified by sympy then it is added to the dataset. Therefore, sympy has a 100% accuracy for predicting correct equalities in our dataset. It is only the incorrect equalities that cause Sympy’s performance to drop as we explain below.\nIn order to assess Sympy’s performance, we give each equation to Sympy. It either returns True, or False, or returns the equation in its original form (indicating that sympy is incapable of deciding whether the equality holds or not). Let’s call this the Unsure class. In the reported Sympy accuracies we have treated the Unsure class as a misclassification. 
Another approach is to perform majority class prediction on the Unsure class. This will result in the same number as shown in Table 2 since our majority class is True (50.24% are correct). In order to be fair, we can also treat the Unsure class as a fair coin toss and report half as correctly predicted. If we do this, the Sympy row in Table 2 will be updated with these numbers: 90.81 & - & 90.00 & 94.46 & 91.45 & 84.54. If the reviewer believes that this approach is better we can update these numbers in Table 2.\nWe have also added this explanation to the paper. Given some other solver as an oracle for adding equations, it would have been interesting to evaluate the accuracy of sympy for predicting the correctness of those equations.\n\nExamination of learned representations: This is a very good suggestion. In order to examine the learned representations we have depicted Figure 4. In order to see how close the vector embedding of an expression, say cos(0.8), is to the vector embedding of 0.69, we have presented Figure 4(c) which is similar to what the reviewer is suggesting about the decimal representation, if we understand correctly. We have also performed a minor modification as explained in the revisions, which makes it easier to interpret the learned representation. Our W_num block encodes the representation of floating point numbers like 0.29. We have trained a decoder, W_num^{-1}, that decodes the d-dimensional representation of 0.29 back to the actual number. Moreover, Table 4(b) indicates that the vector embeddings of tan(x) for x close to 0.28 result in vector embeddings that are close to 0.29, which is the correct value for tan(0.28) with precision 2.\n\nHidden dimension: The hidden dimension is chosen from the set {10, 20, 50}. This has been added to the paper as explained in the revisions.", "We would like to thank the reviewer for the constructive feedback. Here is our response:\n\nthe novelty factor of this paper is fairly low: We would like to emphasize that the main contribution of the paper is combining high-level symbolic and function evaluation expressions, which none of the existing work has done. We are proposing a new framework for modeling mathematical equations. This framework includes defining new problems, equation validation and equation completion, as well as introducing a dataset generation method and a recursive neural network that combines these function evaluation and symbolic expressions. Indeed, treeLSTMs are not new; however, using them to combine both the symbolic and function evaluation expressions by incorporating different loss functions and terminal types is novel. \n\nDataset only for this paper: This dataset makes sure that there is a good coverage of the properties of different mathematical functions, which is critical for proper training of the model. It contains correct and incorrect math equations. An alternative approach that extracts equations online, unfortunately, does not result in enough equations for training. Moreover, we cannot obtain any negative equations. Our proposed dataset generation results in large-scale data for any mathematical domain given a small number of axioms from that domain.\n\nPerformance drop: \nOne weird thing is that the performance does not seem to drop off for the models as depth grows\n-- It is indeed interesting that the performance drops only marginally as depth grows in Table 2. The reason is that in this experiment we are assessing generalizability to unseen equations and not unseen depths. 
Therefore, the training data has access to equations of all depths. The results indicate that the model has learned to predict equations of all depths well. It also indicates that the dataset has a good coverage of equations of all depths that ensures good performance across all depths.\n-- Furthermore, the extrapolation experiment (table 3) shows that generalization to equations of smaller depth is easier than generalization to higher depth equations.\nAnother strange thing is that the accuracies reported do not seem to divide the reported test set sizes (see, e.g., the depth 1 row in Table 2)\nThere was a typo in table 2 in row “test set size” and column “depth 1”, which we have addressed in the revision. The correct number is 5+3 instead of 5+2. Thank you for pointing this out. \n\nSympy performance: \n--We used Sympy to check the correctness of the generated equations. If the correctness of an equation is verified by sympy then it is added to the dataset. Therefore, sympy has a 100% accuracy for predicting correct equalities in our dataset. It is only the incorrect equalities that cause Sympy’s performance to drop as we explain below.\n--In order to assess Sympy’s performance, we give each equation to Sympy. It either returns, True, or False, or returns the equation in its original form (indicating that sympy is incapable of deciding whether the equality holds or not). Let’s call this the Unsure class. In the reported Sympy accuracies we have treated the Unsure class as a miss-classification. Another approach is to perform majority class prediction on the Unsure class. This will result in the same number as shown in table 2 since our majority class is True (50.24% are correct). In order to be fair, we can also treat the Unsure class as a fair coin toss and report half as correctly predicted. If we do this, the Sympy row in table 2 will be updated with these numbers: 90.81 & - & 90.00 & 94.46 & 91.45 & 84.54. If the reviewer believes that this approach is better we can update these numbers in Table 2.\n--We have also added this explanation to the paper. Given some other solver as oracle for adding equations, it would have been interesting to evaluate the accuracy of sympy for predicting the correctness of those equations.\n\nAccuracy on equations of larger depth given up to depth 3 equations: This is indeed a very interesting test. We performed this test for training equations of up to depth 3 and the performance on equations of depth 5 is 89% and for equations of depth 6 is 85% for the treeLSTM model + data. We will add the full results of this extra experiment to the paper." ]
[ -1, -1, 6, 8, 5, -1, -1, -1, -1 ]
[ -1, -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Hksj2WWAW", "rk_xMk8ef", "iclr_2018_Hksj2WWAW", "iclr_2018_Hksj2WWAW", "iclr_2018_Hksj2WWAW", "r1gRHOpmz", "rk_xMk8ef", "SyFw0BPgz", "BywjthjgG" ]
iclr_2018_rkZB1XbRZ
Scalable Private Learning with PATE
The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a "student" model the knowledge of an ensemble of "teacher" models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers’ answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur lower-differential privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy (ε < 1.0).
accepted-poster-papers
This paper extends last year's paper on PATE to large-scale, real-world datasets. The model works by training multiple "teacher" models -- one per dataset, where a dataset might be for example, one user's data -- and then distilling those models into a student model. The teachers are all trained on disjoint data. Differential privacy is guaranteed by aggregating the teacher responses with added noise. The paper shows improved teacher consensus by adding more concentrated noise and allowing the teacher to simply not respond to a student query. The new results beat the old results convincingly on a variety of measures. Quality and Clarity: The reviewers and I thought the paper was well written. Originality: In some sense this work is incremental, extending and improving the existing PATE framework. However, the extensions and new analysis are non-trivial and the results are good. PROS: 1. Well written though difficult in places for somebody like myself who is not involved in this area. 2. Much improved scalability to real datasets 3. Good theoretical analysis supporting the extensions. 4. Comparison to related work (with a new comparison to UCI medical datasets used in the original paper and better results) CONS: 1. Perhaps a little dense for the non-expert
train
[ "r1nc91tez", "r1ERFjYef", "H1VNp87bz", "BklAIIiMz", "SkMva0lff", "SJ--T0xGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes novel techniques for private learning with PATE framework. Two key ideas in the paper include the use of Gaussian noise for the aggregation mechanism in PATE instead of Laplace noise and selective answering strategy by teacher ensemble. In the experiments, the efficacy of the proposed techniques has been demonstrated. I am not familiar with privacy learning but it is interesting to see that more concentrated distribution (Gaussian) and clever aggregators provide better utility-privacy tradeoff. \n\n1. As for noise distribution, I am wondering if the variance of the distribution also plays a role to keep good utility-privacy trade-off. It would be great to discuss and show experimental results for utility-privacy tradeoff with different variances of Laplace and Gaussian noise.\n\n2. It would be great to have an intuitive explanation about differential privacy and selective aggregation mechanisms with examples. \n\n3. It would be great if there is an explanation about the privacy cost for selective aggregation. Intuitively, if teacher ensemble does not answer, it seems that it would reveal the fact that teachers do not agree, and thus spend some privacy cost.\n\n\n\n\n\n\n\n\n\n", "Summary:\nIn this work, PATE, an approach for learning with privacy, is modified to scale its application to real-world data sets. This is done by leveraging the synergy between privacy and utility, to make better use of the privacy budget spent when transferring knowledge from teachers to the student. Two aggregation mechanisms are introduced for this reason. It is demonstrated that sampling from a Gaussian distribution (instead from a Laplacian distribution) facilitates the aggregation of teacher votes in tasks with large number of output classes. \n\non the positive side:\n\nHaving scalable models is important, especially models that can be applied to data with privacy concerns. The extension of an approach for learning with privacy to make it scalable is of merit. The paper is well written, and the idea of the model is clear. \n\n\non the negative side:\n\nIn the introduction, the authors introduce the problem by the importance of privacy issues in medical and health care data. This is for sure an important topic. However, in the following paper, the model is applied no neither medical nor healthcare data. The authors mention that the original model PATE was applied to medical record and census data with the UCI diabetes and adult data set. I personally would prefer to see the proposed model applied to this kind of data sets as well. \n\nminor comments: \n\nFigure 2, legend needs to be outside the Figure, in the current Figure a lot is covered by the legend", "This paper considers the problem of private learning and uses the PATE framework to achieve differential privacy. The dataset is partitioned and multiple learning algorithms produce so-called teacher classifiers. The labels produced by the teachers are aggregated in a differentially private manner and the aggregated labels are then used to train a student classifier, which forms the final output. 
The novelty of this work is a refined aggregation process, which is improved in three ways:\na) Gaussian instead of Laplace noise is used to achieve differential privacy.\nb) Queries to the aggregator are \"filtered\" so that the limited privacy budget is only expended on queries where the teachers are confident and the student is uncertain or wrong.\nc) A data-dependent privacy analysis is used to attain sharper bounds on the privacy loss with each query.\n\nI think this is a nice modular framework for private learning, with significant refinements relative to previous work that make the algorithm more practical. On this basis, I think the paper should be accepted. However, I think some clarification is needed with regard to item c above:\n\nTheorem 2 gives a data-dependent privacy guarantee. That is, if there is one label backed by a clear majority of teachers, then the privacy loss (as measured by Renyi divergence) is low. This data-dependent privacy guarantee is likely to be much tighter than the data-independent guarantee.\nHowever, since the privacy guarantee now depends on the data, it is itself sensitive information. How is this issue resolved? If the final privacy guarantee is data-dependent, then this is very different to the way differential privacy is usually applied. This would resemble the \"privacy odometer\" setting of Rogers-Roth-Ullman-Vadhan [ https://arxiv.org/abs/1605.08294 ]. \nAnother way to resolve this would be to have an output-dependent privacy guarantee. That is, the privacy guarantee would depend only on public information, rather than the private data. The widely-used \"sparse vector\" technique [ http://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf#page=59 ] does this.\nIn any case, this is an important issue that needs to be clarified, as it is not clear to me how this is resolved.\n\nThe algorithm in this work is similar to the so-called median mechanism [ https://www.cis.upenn.edu/~aaroth/Papers/onlineprivacy.pdf ] and private multiplicative weights [ http://mrtz.org/papers/HR10mult.pdf ]. These works also involve a \"student\" being trained using sensitive data with queries being answered in a differentially private manner. And, in particular, these works also filter out uninformative queries using the sparse vector technique. It would be helpful to add a comparison.\n", "We thank the reviewer for their feedback.\n\nBefore we address specific points raised by the reviewer, we offer a short summary of updates to the submission. The introduction was substantially revised and now includes Figure 1 that illustrates our improvements over the original PATE work. Table 1 includes updated parameters that dominate previously reported state-of-the-art utility and privacy on standard datasets. Figures 4 and 5 illustrate savings to privacy costs due to selective answers. The Appendix was expanded to include discussion of smooth sensitivity.\n\nThe reviewer correctly notes that the privacy guarantees depend on sensitive information and this should be accounted for. In Section 2.4, we briefly mention that we handle this using the smooth sensitivity framework (as in the original PATE paper) and in our revised draft of the submission, we provide a detailed overview of our smooth sensitivity analysis in Appendix C.3. 
\n\nThe reported numbers in the rest of our work are without this smooth sensitivity noise added to the privacy cost, because we find that the smooth sensitivity is small enough that adding noise to sanitize the privacy cost itself does not have a significant impact on the value of the privacy cost for the cases reported. We will release the code to calculate the smooth sensitivity along with the next version of our paper.\n\nWhile the work of Rogers et al. deals with related issues, it operates in a different setting. There, the privacy cost is input-independent, but is a function of the output and of when the mechanism is stopped. While it would be useful to have an odometer-like scheme where we can run the mechanism until a certain privacy budget is consumed, we find that running for a fixed number of steps and estimating the privacy spent using the smooth sensitivity framework largely suffices for the current work.\n\nThe sparse-vector technique, the works on private multiplicative weight (PMW) and the median mechanism are related to our work and we have added more discussion on them in the related work section (Appendix B). Both PMW and the Median mechanism can be thought of as algorithms that first privately select a small number of queries to answer using the database, and then answer them using a differentially private mechanism. In PMW and Median, the selection is drastic and prunes out all but a tiny fraction of queries, and the sparse vector technique allows one to analyze this step. In our case, the selection ends up choosing (or discarding) a non-trivial fraction of the queries (between 30% and 90% for parameter settings reported in the submission). We explored using the sparse vector technique for this part of the analysis and it did not lead to any reduction in the privacy cost: while we pay only for the selected queries, the additional constant factors in the sparse vector technique wash out this benefit.\n\nFor the second part of actually answering the queries, PMW and Median do a traditional data-independent privacy analysis. In our setting, using a data-independent privacy analysis in the second step would require a lot more noise than the learning process can tolerate, and we benefit greatly from using a data-dependent privacy analysis. The selection not only cuts down the number of queries that are answered, but more importantly, selects for queries that are cheap to answer from the privacy point of view. In summary: In PMW, the goal of the selection is to reduce number of answered queries from Q to log Q, and one does a data-independent privacy analysis for answering those. In our case the goal of filtering is to select a constant fraction of queries that will have a clear majority, so that the data-dependent privacy cost is small.", "We thank you for your feedback, in particular for bringing to our attention the possible improvements to our experimental setup with respect to the datasets considered. In our submission draft, we chose to focus on the Glyph dataset because it presented challenges like class imbalance and mislabeled data. However, we agree that in order to facilitate a comparison with the original PATE publication, it is important to include results on other datasets such as the UCI Adult and Diabetes datasets. 
As such, we used the resources made publicly available by the authors of the original PATE publication to reproduce their results and measure the performance of our refined aggregation mechanisms on these two datasets.\n\nWe also ran our experiments on the Glyph dataset with the aggregator used by Papernot et al. in the original PATE publication to provide an additional point of comparison.\n\nThese additional results are now included in our last submission revision and are summarized in Figure 5. We show that we compare favorably on all of the datasets and models: we either improve student accuracy, strengthen privacy guarantees, or both simultaneously.\n\nWe also followed your suggestion of making the legend of Figure 2 less intrusive by splitting the Figure into two, reducing the amount of information hidden by the legend.\n", "We thank the reviewer for their feedback. Below are answers for each of the three points included in your feedback.\n\n1. You are right that the variance of the distribution plays a fundamental role in the utility-privacy tradeoff. Roughly speaking, larger noise variances typically yield stronger privacy guarantees but reduce the utility of the aggregated label. We updated Figure 2 to illustrate this relationship, separating the effects of the shape of the noise distribution and the number of teachers. \n\nThe left chart of Figure 2 plots the utility-privacy tradeoff for the Laplace (prior work) and the Gaussian (ours) aggregation mechanisms. The measurement points are labelled with the standard deviation of the noise (sigma) which ranges from 5 to 200. As intuition suggests, the accuracy decreases with the variance of the noise. The privacy cost cannot be measured directly (as it involves considering all possible counterfactuals). Rather, it is computed according to Theorem 2, which is a significant technical contribution of this paper. For some ranges of parameters, privacy costs are a non-monotone function of sigma, which is discussed in the end of Section 4.2.\n\n2. We acknowledge the reviewer's comment about giving an intuitive overview of how we achieve differential privacy. The intuitive privacy guarantee of PATE remains the same as that presented in the original PATE paper by Papernot et al.: Partitioning training data ensures that the presence or absence of a single training data point affects at most one teacher’s vote. By adding noise to the teacher votes, we control the impact of a single teacher vote in the final outcome of the aggregation mechanism, resulting from the plurality of votes in the teacher ensemble. In fact, precisely bounding this impact, in a tighter way, requires the use of data-dependent analysis. The large variability between queries’ privacy costs motivates the Confident Aggregator, which minimizes total privacy budget by being selective in the student queries it chooses to answer.\n\nTo see why the Confident Aggregator is useful, here are a couple of illustrations of cheap and expensive queries in terms of their privacy cost. Consider our ensembles of 5000 teachers with the following votes across classes (4900, 100, 0, … 0). You can see that there is an overwhelming consensus and after adding noise, the chance that Class 1 is output by the ensemble is still very high. 
The resulting privacy cost (at Gaussian noise with stdev 100) is 3.6e-249.\n\nHowever, if there’s poor consensus amongst the teachers, with votes, say, (2500, 2400, 100, … ) i.e., the ensemble is rather confused between Classes 1 and 2, then the resulting privacy cost is 0.0025. It is also easy to see intuitively for our choices of thresholds, why the first example would almost surely be selected but the second example would likely not pass the threshold check. \n\nUnfortunately due to space constraints, we could not provide detailed intuition on these aspects. Some of them follow from the original PATE paper, and we try to give some insight into how the Confident Aggregator works through the experiment in Section 4.4.2. In particular, Figure 6 shows how queries along the lines of the second example above are eliminated.\n\n3. You are correct: private information may be revealed when the teacher ensemble does not answer because it indicates that teachers do not agree on the prediction. This is why our selective aggregation mechanisms choose in a privacy-preserving way queries that will be answered. This consideration motivates the design of the condition found at line 1 of Algorithm 1 and line 2 of Algorithm 2 in the submission draft, which both add Gaussian noise with variance $\\sigma_1^2$ before applying the maximum operator and comparing the result to the predefined threshold T. This ensures that the teacher consensus is checked in differentially private manner. In fact, _most_ of the total privacy budget is committed to selecting the set of queries whose answers are going to be revealed. We thank you for bringing this to our attention and updated the introduction of Section 3, as well as the captions of Algorithms 1 and 2 to emphasize this important consideration." ]
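The selective answering logic described in the response above — add Gaussian noise to the top vote count, compare the result against a threshold, and only then release a noisy argmax over the vote histogram — can be illustrated with a short NumPy sketch. This is only a rough sketch under assumed parameter values: the threshold and the two noise scales below are illustrative choices, not the paper's settings, and the privacy accounting is omitted entirely.

```python
import numpy as np

def confident_noisy_max(votes, threshold, sigma_check, sigma_answer, rng):
    """Sketch of a confident, noisy-max aggregation over teacher votes.

    `votes` is a length-C array of per-class vote counts from the teacher
    ensemble. Returns the index of the answered class, or None if the noisy
    consensus check fails and the query is rejected. All parameter names and
    values are illustrative, not taken from the paper.
    """
    votes = np.asarray(votes, dtype=float)
    # Privacy-preserving consensus check: noisy max vote count vs. threshold.
    if votes.max() + rng.normal(0.0, sigma_check) < threshold:
        return None  # teachers disagree too much; do not spend budget answering
    # Answer with a noisy argmax over the vote histogram.
    return int(np.argmax(votes + rng.normal(0.0, sigma_answer, size=votes.shape)))

rng = np.random.default_rng(0)
# The two examples from the response: strong vs. weak consensus among 5000 teachers.
strong = [4900, 100] + [0] * 8
weak = [2500, 2400, 100] + [0] * 7
print(confident_noisy_max(strong, threshold=4000, sigma_check=150, sigma_answer=100, rng=rng))  # almost surely 0
print(confident_noisy_max(weak, threshold=4000, sigma_check=150, sigma_answer=100, rng=rng))    # almost surely None
```

Run on the two vote histograms from the response, the strong-consensus query is answered with class 0 almost surely, while the contested query is almost always rejected by the threshold check.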
[ 6, 6, 7, -1, -1, -1 ]
[ 1, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rkZB1XbRZ", "iclr_2018_rkZB1XbRZ", "iclr_2018_rkZB1XbRZ", "H1VNp87bz", "r1ERFjYef", "r1nc91tez" ]
iclr_2018_H1aIuk-RW
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe: training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in the batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, i.e. choosing a set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield the best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
accepted-poster-papers
The effectiveness of active learning techniques for training modern deep learning pipelines in a label-efficient manner is certainly a very well-motivated topic. The reviewers unanimously found the contributions of this paper to be of interest, particularly the nice empirical gains over several natural baselines.
train
[ "BJfU-qDeG", "ByFZUzFlf", "BytfdZsxf", "H11TdQ44z", "ByBnqXEVG", "HkWB9LC7z", "BJb693emf", "rJrEFw2QG", "BkQix6emz", "HJh6HmUMz", "BJR9cmLzf", "rkZr6-yMM", "rkhz6ptlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "public", "public" ]
[ "After reading rebuttals from the authors: The authors have addressed all of my concerns. THe additional experiments are a good addition.\n\n************************\nThe authors provide an algorithm-agnostic active learning algorithm for multi-class classification. The core technique is to construct a coreset of points whose labels inform the labels of other points. The coreset construction requires one to construct a set of points which can cover the entire dataset. While this is NP-hard problem in general, the greedy algorithm is 2-approximate. The authors use a variant of the greedy algorithm along with bisection search to solve a series of feasibility problems to obtain a good cover of the dataset each time. This cover tells us which points are to be queried. The reason why choosing the cover is a good idea is because under suitable Lipschitz continuity assumption the generalization error can be controlled via an appropriate value of the covering radius in the data space. The authors use the coreset construction with a CNN to demonstrate an active learning algorithm for multi-class classification. \nThe experimental results are convincing enough to show that it outperforms other active learning algorithms. However, I have a few major and minor comments.\n\nMajor comments:\n\n1. The proof of Lemma 1 is incomplete. We need the Lipschitz constant of the loss function. The loss function is a function of the CNN function and the true label. The proof of lemma 1 only establishes the Lipschitz constant of the CNN function. Some more extra work is needed to derive the lipschitz constant of the loss function from the CNN function. \n\n2. The statement of Prop 1 seems a bit confusing to me. the hypothsis says that the loss on the coreset = 0. But the equation in proposition 1 also includes the loss on the coreset. Why is this term included. Is this term not equal to 0?\n\n3. Some important works are missing. Especially works related to pool based active learning, and landmark results on labell complexity of agnostic active learning.\nUPAL: Unbiased Pool based active learning by Ganti & Gray. http://proceedings.mlr.press/v22/ganti12/ganti12.pdf\nEfficient active learning of half-spaces by Gonen et al. http://www.jmlr.org/papers/volume14/gonen13a/gonen13a.pdf\nA bound on the label complexity of agnostic active learning. http://www.machinelearning.org/proceedings/icml2007/papers/375.pdf\n\n4. The authors use L_2 loss as their objective function. This is a bit of a weird choice given that they are dealing with multi-class classification and the output layer is a sigmoid layer, making it a natural fit to work with something like a cross-entropy loss function. I guess the theoretical results do not extend to cross-entropy loss, but the authors do not mention these points anywhere in the paper. For example, the ladder network, which is one of the networks used by the authors is a network that uses cross-entropy for training.\n\nMinor-comment: \n1. The feasibility program in (6) is an MILP. However, the way it is written it does not look like an MILP. It would have been great had the authors mentioned that u_j \\in {0,1}. \n\n2. The authors write on page 4, \"Moreover, zero training error can be enforced by converting average loss into maximal loss\". It is not clear to me what the authors mean here. For example, can I replace the average error in proposition 1, by maximal loss? Why can I do that? 
Why would that result in zero training error?\n\nOn the whole this is interesting work and the results are very nice. But, the proof for Lemma 1 seems incomplete to me, and some choices (such as choice of loss function) are unjustified. Also, important references in active learning literature are missing.", "Active learning for deep learning is an interesting topic and there is few useful tool available in the literature. It is happy to see such paper in the field. This paper proposes a batch mode active learning algorithm for CNN as a core-set problem. The authors provide an upper bound of the core-set loss, which is the gap between the training loss on the whole set and the core-set. By minimizing this upper bound, the problem becomes a K-center problem which can be solved by using a greedy approximation method, 2-OPT. The experiments are performed on image classification problem (CIFAR, CALTECH, SVHN datasets), under either supervised setting or weakly-supervised setting. Results show that the proposed method outperforms the random sampling and uncertainty sampling by a large margin. Moreover, the authors show that 2-OPT can save tractable amount of time in practice with a small accuracy drop.\n\nThe proposed algorithm is new and writing is clear. However, the paper is not flawless. The proposed active learning framework is under ERM and cover-set, which are currently not supported by deep learning. To validate such theoretical result, a non-deep-learning model should be adopted. The ERM for active learning has been investigated in the literature, such as \"Querying discriminative and representative samples for batch mode active learning\" in KDD 2013, which also provided an upper bound loss of the batch mode active learning and seems applicable for the problem in this paper. Another interesting question is most of the competing algorithm is myoptic active learning algorithms. The comparison is not fair enough. The authors should provide more competing algorithms in batch mode active learning.", "This paper studies active learning for convolutional neural networks. Authors formulate the active learning problem as core-set selection and present a novel strategy.\n\nExperiments are performed on three datasets to validate the effectiveness of the proposed method comparing with some baselines.\n\nTheoretical analysis is presented to show the performance of any selected subset using the geometry of the data points.\n\nAuthors are suggested to perform experiments on more datasets to make the results more convincing.\n\nThe initialization of the CNN model is not clearly introduced, which however, may affect the performance significantly.\n", "We use 100 random initialization in the k-Median experiment in the paper. We solve each of the k-Median problems and choose the clustering having the minimum k-Median loss. Hence, we believe the results are not due to the random initialization. Moreover, we also qualitatively explain why we think k-Median heuristic fails for large-scale problems in the paper. We believe it fails to sample tails of the distribution since cluster centers are around modes of the distribution.\n\nWe already added a new batch-mode baseline (\"Querying discriminative and representative samples for batch mode active learning\" in KDD 2013) which is the state-of-the-art uncertainty based batch method. Moreover, for the sake of completeness, we will further compare with the suggested baselines in the final version of the paper. 
\n\nFinally, we disagree that the main point of the paper is \"...uncertainty information is not effective for CNNs...\". The main point of our paper is defining the active learning as core-set selection and theoretically solving this core-set selection problem for CNNs.", "We thank all the reviewers and public commentators for their time and effort spend on our paper. \n\nThe main changes are as follows: \n- Fixed the Lemma1: Proof of our Lemma 1 was incomplete as noted by R1. We fixed the statement and proof of the Lemma. \n- New Baselines: We added two new baselines. One is a clustering based baseline as applying k-Median, and the other one is the state-of-the-art batch-mode active learning algorithm (\"Querying discriminative and representative samples for batch mode active learning\" in KDD 2013). Our method significantly outperforms both baselines, and we explain the details in the paper. \n- Initialization: We added details on initialization of the neural network weights.\n- Related Work: We added all the missing references suggested by reviewers and public commentators. \n- We updated the main text to clarify confusing points and added the missing details.", "Thank you for your answers! \n\nSince K-means and k-Median are highly sensitive to initialization, I wonder what is the number of times to repeat clustering using new initial cluster centroid positions? What is this number used in your experiments? We find that the key of obtain good performance for k-means based batch mode active learning is repeating clustering multiple times with different random initialization. Your argued that the pool size is significantly larger than the query budget. This is exactly why you need repeat K median clustering many times to get good performance. In addition, in Demir et al. work, they use uncertainly information to select the sample from each cluster.\n\nYu's work, (Active learning via transductive experimental design. ICML, 2006), can also be applied to non-linear case. See (Non-greedy Active Learning for Text Categorization using Convex Transductive Experimental Design, 2008, ICML). Yu also provide a sequential solution, which is easy to implement even for large scale datasets. It would be interesting if you can compare with Yu's work. (Because experimental design is mainly about the representativeness, which is quite similar to your proposed core-set idea).\n\nAnother good baseline is Chakraborty, S., 2015. Active batch selection via convex relaxations with guaranteed solution bounds. TPAMI. It provides a theoretical guarantee for batch mode active learning. \n\nFinally, the main point of this paper is that uncertainty information is not effective for CNNs. However, according to this paper, I would only say that \"top K\" strategy based uncertainty information is not effective for batch mode active learning when applied to CNNs. The most straightforward to use uncertainty information for batch setting is the approach in (Patra, S., and Bruzzone, L. 2012a. A batch-mode active learning technique based on multiple uncertainty for svm classifier.) Patra's approach can make use of uncertainty information for batch setting, even in the case of CNNs. The only issue is that you need repeat K-means many times.", "We thank the reviewer for their time and effort spent providing feedback. We appreciate the encouraging comments. 
We address the concerns as follows:\n\n- ERM and Core-Set for Deep Learning: Recent results (Example 7 of Robustness and Generalization by Xu&Manor 2010) provides a generalization bound for neural networks. Hence, CNNs support ERM. Moreover, our upper bound on core-set is also provided directly for CNNs. In summary, we believe our theoretical analysis is valid for CNNs. We update the paper accordingly and discuss the ERM for deep learning in Section 4.1. \n\n- Wang et al., KDD 2013: We thank the reviewer for pointing out this very related paper we missed in the original submission. We updated our related work section accordingly. Moreover, we also compare with this paper as well as another clustering-based batch-mode active learning baseline in the updated version. Our experiments suggest that these baselines turns out to be not effective for CNNs. We believe this is largely due to the fact that (Wang et al., KDD 2013) heavily uses uncertainty information and treating soft-max probabilities as uncertainty is misleading in general.", "We thank the reviewer for their time and effort spent providing feedback. We appreciate the encouraging comments. We revised the paper with the initialization details of the CNNs. Moreover, we are also planning to release the source code of our method as well as all the experiments for full reproducibility.", "We thank for the comment. We added the suggested references in the related work section. \n\nWe would like to clarify the following mis-understandings:\n\n1-3) We are NOT addressing the coreset for k-Center problem. We are also NOT constructing coreset for k-Center in our algorithm. The main problem we address in this paper is the coreset for CNNs. Our theoretical study shows that the construction of the coreset for CNNs requires solving the k-Center problem as a sub-routine. Hence, we (approximately) solve the k-Center problem. Our solution of k-Center does not use any coreset and it is based on a mixed integer program (MIP).\n\n4) We already performed experiments on Cifar10/100 in the original version. We also use a very large network (VGG16) in all of our experiments.", "We thank the reviewer for their time and effort spent providing feedback. We appreciate the encouraging comments. We address the concerns as follows:\n\nMajor Points:\n1) Proof of Lemma 1: The reviewer is right as the current proof of the Lemma 1 is incomplete. Proposition 1 only requires the loss function to be a Lipschitz continuous function of the x(input data point) for a fixed true label (y) and parameters (w). Current proof of the Lemma 1 indeed proves this more restrictive but sufficient statement. Hence, we re-stated the Lemma 1 correctly in the updated submission. We also added the final step (reverse triangle inequality) to the proof of the Lemma 1 for sake of completeness.\n\n2)The statement of Prop 1: We stated the proposition in this form to be consistent with equation 3. We clarified this in the updated text\n\n3)We updated the related work with the suggested references.\n\n4) L_2 loss: We agree that this is unconventional as cross-entropy is the widely accepted choice of loss function for classification problems. We use L_2 loss for theoretical simplicity and used cross-entropy in the experiments. The experimental results suggest that our method is effective for cross-entropy loss as well. 
We explicitly discussed and clarified this choice in the updated submission.\n\nMinor Points:\n1)MILP: We added the missing constraint -u_j \\in {0,1}- in the updated submission\n2)Maximal Loss: We agree that this is a confusing statement and we removed it in the updated submission. What we meant was although 0 training error on core-set is a hard assumption, it can be directly enforced by using the maximal loss while learning the model. However, we did not use any such trick.\n\nWe hope that the updated version answer the reviewer's concerns. Moreover, we also added more details and included two additional baselines in the updated submission.", "We thank for the comment and the pointers to the related literature we missed in the original submission. We answer the comments as;\n\n1,2 and 4) We added all suggested work in the related work. However, direct comparison with them seems infeasible. The work by Yang et al. is not tractable for CNNs since it requires inversion of a very large matrix (number of data points by number of datapoints). The work by Yu et al. only applies to linear regression case with Gaussian noise assumption. Work by Demir et al. is indeed partially applicable to CNNs and we implemented and experimented with it. We also implemented the suggested k-Median baseline. Since both methods performed similarly, we only include k-Median in the experimental results in the updated submission. We also added another batch active learning baseline (Zheng Wang and Jieping Ye. Querying discriminative and representative samples for batch mode\nactive learning. ACM TKDD 2015.)\n\nk-Median method turns out to be NOT effective for batch-mode deep learning. This is rather intuitive since the pool size is significantly larger than the query budget in all our experiments. For such a case, cluster centers happen to be around the modes of the data distribution. However, the important requirement of the query selection method is sampling the tails of the distribution instead. Moreover, the neighborhood of such cluster centers are likely to be sampled in the initial labelled pool since they are near modes. Hence, clustering based methods fail for very large pool and a rather small budget case. Distribution matching based batch active learning baseline (Wang&Ye 2015) also fails since it heavily uses uncertainty information and treating soft-max probabilities as confidence values is misleading. We updated the text accordingly.\n\n3) By that statement we meant, both oracle uncertainty and Bayesian estimation of uncertainty performs better than empirical uncertainty baseline. Hence, they are helpful improvements over empirical uncertainty baseline but still not effective when compared with random sampling. We clarified the claim in the updated version.\n ", "The authors construct coreset for k-center and use it for deep learning.\nOn the positive side it is great to see such new applications for core-sets.\n\nOn the negative side:\n1) Novelty: the idea of using coreset for k-center and its guarantees for problems other than k-center (but not deep learning) was already suggested e.g. 
in: Visual Precis Generation using Coresets, Dan Feldman, Rohan Paul, Daniela Rus and Paul Newman.\nIEEE International Conference on Robotics and Automation (ICRA) 2014\n\n2) The \"Active learning\" approach is simply the classing hitting set approach for computing k-center.\nSee e.g.: https://arxiv.org/abs/1102.1472 by Karthekeyan Chandrasekaran, Richard Karp, Erick Moreno-Centeno, Santosh Vempala\n\n3) Coreset for k-center has size exponential in d and in deep learning d is extremely large. Coreset of size independent or d might be more useful for these experiments e.g.:\nEfficient Frequent Directions Algorithm for Sparse Matrices, by Mina Ghashami, Edo Liberty, Jeff M. Phillips https://arxiv.org/abs/1602.00412\n\nk-Means for Streaming and Distributed Big Sparse Data by Artem Barger, and Dan Feldman\nhttps://arxiv.org/abs/1511.08990\n\nTurning Big Data into Tiny Data: Constant-size Coresets for k-means, PCA and Projective Clustering,\nDan Feldman, Melanie Schmidt and Christian Sohler.\nProc. 24th Annu. ACM Symp. on Discrete Algorithms (SODA) 2013\n\n4) Experiments: the classic benchmarks are with much larger datasets such as Ciphar10/100 etc, with much larger networks.\n\n\n", "(1). The author considered a batch selection setting of active learning. In the field of batch mode active learning, there are a lot of strategies to overcome the information overlap problem in batch setting, such as clustering-based methods (Demir, B.; Persello, C.; and Bruzzone, L. 2011. Batch-mode active-learning methods for the interactive classification of remote sensing images.) and combining uncertainty sampling and diversity (i.e. Yang, Y.; Ma, Z.; Nie, F.; Chang, X.; and Hauptmann, A. G. 2015. Multi-class active learning by uncertainty sampling with diversity maximization. IJCV). However, the baseline methods the author considered is the simplest and stupid way: selecting the \"top K\" samples on its own criterion. Obviously, this approach cannot work well. The author should compare with some smarter batch mode active learning methods, rather than the \"top K\" strategies.\n\n(2). This paper argued \"Since we have no labels available, we perform the core-set selection without using the labels.\" There already exists some active learning methods without using label information, such as Optimal experimental design (e.g., Yu, K.; Bi, J.; and Tresp, V. 2006. Active learning via transductive experimental design. ICML). It is quite similar to the proposed core-set idea. The author should compare with this kind of methods. The proposed core-set idea is also very similar to the K-medoids baseline (in \t\nA Meta-Learning Approach to One-Step Active Learning, arXiv:1706.08334): the examples to label are selected following a k-medoid clustering technique, where we label each example if it is a centroid of a cluster. This paper should also compare with this baseline.\n\n(3) The author argued in Section 5 \"Our results suggest that both oracle uncertainty information and Bayesian estimation of uncertainty is helpful\". We cannot find that the two uncertainty information is helpful. Because these methods perform comparable or even worse than random sampling in Figure 3. I think this statement is not correct.\n\n(4) I would suggest to compare with the following baseline: using CNN extract the features, following by K-means or K-medoids clustering, where K is the batch size. Note that you need repeat the K-means several times (i.e. 50 times) to get a good initialization. 
I would expect that this baseline presents a very promising result. I would like to see the performance comparison of the proposed core-set method and this baseline." ]
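The greedy 2-approximation for the k-Center objective that the reviews and responses above refer to is simple enough to sketch: repeatedly pick the pool point farthest from the current set of centers (farthest-first traversal). The sketch below assumes a feature matrix and a non-empty initial labeled set; it illustrates only the greedy step discussed above, not the authors' mixed-integer refinement or their CNN training loop, and the function and argument names are made up for illustration.

```python
import numpy as np

def greedy_k_center(features, labeled_idx, budget):
    """Greedy 2-approximate k-Center selection (farthest-first traversal).

    `features` is an (n, d) array of embeddings, `labeled_idx` the indices of
    points already labeled; returns `budget` new indices to query.
    """
    # Distance of every point to its nearest already-covered center.
    min_dist = np.min(
        np.linalg.norm(features[:, None, :] - features[labeled_idx][None, :, :], axis=2),
        axis=1,
    )
    selected = []
    for _ in range(budget):
        # Pick the point farthest from all current centers.
        idx = int(np.argmax(min_dist))
        selected.append(idx)
        # Update covering distances with the newly added center.
        dist_new = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, dist_new)
    return selected

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 32))
new_queries = greedy_k_center(feats, labeled_idx=list(range(10)), budget=5)
```

Because each newly selected point gets covering distance zero, the procedure naturally spreads queries toward the tails of the feature distribution — the property the authors contrast with k-Median-style cluster centers.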
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1aIuk-RW", "iclr_2018_H1aIuk-RW", "iclr_2018_H1aIuk-RW", "HkWB9LC7z", "iclr_2018_H1aIuk-RW", "BJR9cmLzf", "ByFZUzFlf", "BytfdZsxf", "rkZr6-yMM", "BJfU-qDeG", "rkhz6ptlG", "iclr_2018_H1aIuk-RW", "iclr_2018_H1aIuk-RW" ]
iclr_2018_BkrSv0lA-
Loss-aware Weight Quantization of Deep Networks
The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network.
accepted-poster-papers
While novelty is not the main strength of this paper, there is consensus that the presentation is clear and the experimental results are convincing. Given the practical importance of designing and benchmarking methods to compactify deep nets, the paper deserves to be presented at ICLR-2018.
train
[ "B1h7tYSlM", "Sy_vAgOgz", "Bk4fUyieG", "rkaeuvwGM", "rJ5VPPDMf", "S1_DIvDfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "In this paper, the authors propose a method of compressing network by means of weight ternarization. The network weights ternatization is formulated in the form of loss-aware quantization, which originally proposed by Hou et al. (2017).\n\nTo this reviewer’s understanding, the proposed method can be regarded as the extension of the previous work of LAB and TWN, which can be the main contribution of the work.\n\nWhile the proposed method achieved promising results compared to the competing methods, it is still necessary to compare their computational complexity, which is one of the main concerns in network compression.\n\nIt would be appreciated to have discussion on the results in Table 2, which tells that the performance of quantized networks is better than the full-precision network.\n", "This paper proposes a new method to train DNNs with quantized weights, by including the quantization as a constraint in a proximal quasi-Newton algorithm, which simultaneously learns a scaling for the quantized values (possibly different for positive and negative weights). \n\nThe paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose. The experiments are very clearly presented and solidly designed.\n\nIn fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides. Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods.\n\nFinally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm. However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ($b_j^t \\in Q^{n_l}$). This may not be relevant for the practical results, but to be accurate, it can't be simply stated that the algorithm converges, without a more careful analysis.", "This paper extends the loss-aware weight binarization scheme to ternarization and arbitrary m-bit quantization and demonstrate its promising performance in the experiments.\n\nReview:\n\nPros\nThis paper formulates the weight quantization of deep networks as an optimization problem in the perspective of loss and solves the problem with a proximal newton algorithm. They extend the scheme to allow the use of different scaling parameters and to m-bit quantization. Experiments demonstrate the proposed scheme outperforms the state-of-the-art methods. \n\nThe experiments are complete and the writing is good.\n\nCons\nAlthough the work seems convincing, it is a little bit straight-forward derived from the original binarization scheme (Hou et al., 2017) to tenarization or m-bit since there are some analogous extension ideas (Lin et al., 2016b, Li & Liu, 2016b). Algorithm 2 and section 3.2 and 3.3 can be seen as additive complementary. \n", "Thanks for your review and suggestions.\n\n1. \"compare their computational complexity\"\n\n- For space (assuming that the weight values are stored in 32 bits for full-precision networks), the memory required by ternarized networks are 16 times smaller than the the full-precision network; while m-bit networks are 32/m times smaller.\n- For time, consider the product WX between a rxs weight matrix W and sxn input matrix X. 
For full-precision networks, the cost of WX is (M+A)rsn, where M and A are the computation costs of 32-bit floating-point multiplication and addition respectively. With the proposed ternarization, WX is computed by steps 3-5 in Algorithm 3. For illustration, we use the approximate solver (Algorithm 2) to compute the scaling parameter \\alpha and ternarized value b (invoked in Step 3 of Algorithm 3). With fixed b, computing \\alpha takes 2rs multiplications and 2rs additions. With fixed \\alpha, computing b takes rs comparisons. Assume that alternating minimization is run for k steps (empirically, k<=10), the computation cost of ternarization using Algorithm 2 is k(2M+2A+U)rs, where U is the computation cost of 32-bit floating-point comparison. Moreover, Steps 4 and 5 of Algorithm 3 take sn multiplications and rsn additions respectively. Thus the total cost for the product is Arsn + Msn + k(2M+2A+U)rs, and the speedup ratio is S = ((M+A) rsn)/(Arsn + Msn + k(2M+2A+U)rs), which is approximately (M+A)/A (some terms can be omitted as usually r >> 1 and n >>1, and k is very small). Following (Hubara et al, 2016), we consider the implication on power in 45nm technology (with A = 0.9J, and M=3.7J). Substituting into the ratio above, the energy reduction is then approximately 5. For the other ternarization algorithms such as TWN and TTQ, they also need at least sn multiplications and rsn additions for the product of WX, and thus the computation cost is similar to the proposed LAT_a.\n- Details and complexity analysis for the other models will be provided in the final version of the paper.\n\n2. \"discussion on the results in Table 2\"\n\n- The quantized LSTM performs better than full-precision network because deep networks often have larger-than-needed capacities, and so are less affected by the limited expressiveness of quantized weights. Besides, low-bit quantization acts as regularization, and so contributes positively to the performance. We will add the discussion in the final version of the paper.\n", "Thanks for your review and suggestions.\n\n1. \"the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides. Consequently, there is not a great degree of novelty in terms of the proposed method\"\n\n- Please see our reply to reviewer 1 above.\n\n2. \"the results are only slightly better than those of previous methods\"\n\n- The testing errors on these data sets are often only a few percent, and so the improvements may appear small. To have a clearer comparison, we added the percentage degradation of classification error as compared to the full-precision network in Table 1 (https://www.dropbox.com/s/miquko7qhff9kns/iclr2018_rebuttal.pdf?dl=0). \n- As can be seen, among the weight-ternarized networks, the proposed LAT and its variants achieve much smaller performance degradation on all four data sets. Existing methods often have large degradation, while ours has <3% degradation on MNIST and <1% on the other three data sets. On CIFAR-100, the proposed LAT and its variants achieve even better results than the full-precision network.\n- For recurrent networks, we similarly added the percentage degradation of cross-entropy in Table 2. As can be seen, the proposed weight ternarization is the only method that performs even better than the full-precision counterpart on all three data sets. On the Linux Kernel and Penn Treebank data sets, the proposed LAT and its variants even have >5% performance gain. 
On the War and Peace dataset, the proposed LAT and its variants are the only methods that achieve significantly better results than the full-precision network. \n\n3. \"it can't be simply stated that the algorithm converges\"\n\n- On the theory side, we can only show convergence of the objective value. We will clarify this in the final version of the paper. \n- Empirically, the quantized weight also converges, as can be seen from the convergence of $\\alpha$ in Figure 1(b).\n", "Thanks for your review and suggestions.\n\n1. \"it is a little bit straight-forward derived from the original binarization scheme (Hou et al., 2017) to ternarization or m-bit\"\n\n- While the idea of extending from 1-bit (binarization) to more bits is straightforward, the difficulty and novelty are in the mathematical derivations. In Hou et al. (2017), the optimal closed-form solution for loss-aware binarization can be derived easily. However, for ternarization, the optimal \\alpha and b (in Proposition 3.2) cannot be easily solved. A straightforward solution would require combinatorial search. Instead, we proposed an exact solver (Algorithm 1) which relies only on sorting. This can be further simplified to an efficient alternating minimization procedure (Algorithm 2). The same situation applies to m-bit quantization. \n\n2. \"there are some analogous extension ideas (Lin et al., 2016b, Li & Liu, 2016b). Algorithm 2 and section 3.2 and 3.3 can be seen as additive complementary\"\n\n- While analogous extension ideas have been proposed, their weight solutions obtained are not rigorously derived. Specifically, ternary-connect (Lin et al., 2016b) performs simple stochastic quantization, but does not relate that to any quality measure (e.g., the loss, or distance between the quantized and full-precision weights). In TWN (Li & Liu, 2016b), obtaining the theoretical optimal solution is time-consuming and so they used a heuristic instead. In this paper, we explicitly consider the quantization effect to the loss (as in Hou et al (2017)). However, the resultant optimization problem is much more difficult than theirs as explained above.\n" ]
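The alternating minimization described in the author responses above — fix the ternary values b and solve for the scaling alpha in closed form, then fix alpha and pick each ternary value by a single comparison — can be sketched for the plain (non-loss-aware) objective ||w - alpha*b||^2. The paper's Algorithm 2 additionally weights coordinates by curvature information, so the snippet below is a simplified illustration rather than the authors' solver; it does, however, match the per-iteration cost quoted in the response (roughly 2rs multiply-adds for alpha and rs comparisons for b).

```python
import numpy as np

def ternarize_alternating(w, num_iters=10):
    """Alternating minimization of ||w - alpha * b||^2 over alpha > 0 and
    b in {-1, 0, +1}^n. Simplified, non-loss-aware sketch of the kind of
    approximate solver described in the response."""
    w = np.asarray(w, dtype=float)
    alpha = np.abs(w).mean() + 1e-12          # crude initialization
    b = np.zeros_like(w)
    for _ in range(num_iters):
        # Fixed alpha: one comparison per weight picks the optimal ternary value
        # (zero out weights with |w_i| <= alpha/2, keep the sign otherwise).
        b = np.where(np.abs(w) > alpha / 2.0, np.sign(w), 0.0)
        if not b.any():
            break
        # Fixed b: closed-form least-squares scaling over the nonzero entries.
        alpha = np.dot(w, b) / np.dot(b, b)
    return alpha, b

alpha, b = ternarize_alternating(np.random.default_rng(0).normal(size=1000))
```

Empirically, a handful of such alternating steps (k <= 10, as stated in the response) suffices for the pair (alpha, b) to stabilize.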
[ 8, 6, 6, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_BkrSv0lA-", "iclr_2018_BkrSv0lA-", "iclr_2018_BkrSv0lA-", "B1h7tYSlM", "Sy_vAgOgz", "Bk4fUyieG" ]
iclr_2018_BJk7Gf-CZ
Global Optimality Conditions for Deep Neural Networks
We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.
accepted-poster-papers
Understanding global optimality conditions for deep nets, even in the restricted case of linear layers, is a valuable contribution. Please clarify the ways in which the paper goes beyond the results of Kawaguchi'16, which was the main concern expressed by the reviewers.
train
[ "ryd2EvplG", "BJKj87vxG", "B1Mwz5dxM", "BJ5ebDa7M", "HkImgwTQG", "ryHYJwaQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper gives sufficient and necessary conditions for the global optimality of the loss function of deep linear neural networks. The paper is an extension of Kawaguchi'16. It also provides some sufficient conditions for the non-linear cases. \n\nI think the main technical concerns with the paper is that the technique only applies to a linear model, and it doesn't sound the techniques are much beyond Kawaguchi'16. I am happy to see more papers on linear models, but I would expect there are more conceptual or technical ingredients in it. As far as I can see, the same technique here will fail for non-linear models for the same reason as Kawaguchi's technique. Also, I think a more interesting question might be turning the landscape results into an algorithmic result --- have an algorithm that can guarantee to converge a global minimum. This won't be trivial because the deep linear networks do have a lot of very flat saddle points and therefore it's unclear whether one can avoid those saddle points. ", "Summary:\nThe paper gives theoretical results regarding the existence of local minima in the objective function of deep neural networks. In particular:\n- in the case of deep linear networks, they characterize whether a critical point is a global optimum or a saddle point by a simple criterion. This improves over recent work by Kawaguchi who showed that each critical point is either a global minimum or a saddle point (i.e., none is a local minimum), by relaxing some hypotheses and adding a simple criterion to know in which case we are.\n- in the case of nonlinear network, they provide a sufficient condition for a solution to be a global optimum, using a function space approach.\n\nQuality:\nThe quality is very good. The paper is technically correct and nontrivial. All proofs are provided and easy to follow.\n\nClarity:\nThe paper is very clear. Related work is clearly cited, and the novelty of the paper well explained. The technical proofs of the paper are in appendices, making the main text very smooth.\n\nOriginality:\nThe originality is weak. It extends a series of recent papers correctly cited. There is some originality in the proof which differs from recent related papers.\n\nSignificance:\nThe result is not completely surprising, but it is significant given the lack of theory and understanding of deep learning. Although the model is not really relevant for deep networks used in practice, the main result closes a question about characterization of critical points in simplified models if neural network, which is certainly interesting for many people.", "\n-I think title is misleading, as the more concise results in this paper is about linear networks I recommend adding linear in the title i.e. changing the title to … deep LINEAR networks\n\n- Theorems 2.1, 2.2 and the observation (2) are nice!\n \n- Theorem 2.2 there is no discussion about the nature of the saddle point is it strict? Does this theorem imply that the global optima can be reached from a random initialization? Regardless of if this theorem can deal with these issues, a discussion of the computational implications of this theorem is necessary.\n\n- I’m a bit puzzled by Theorems 4.1 and 4.2 and why they are useful. Since these results do not seem to have any computational implications about training the neural nets what insights do we gain about the problem by knowing this result? Further discussion would be helpful.\n", "Thank you very much for the review. 
We especially appreciate that the reviewer recognized the quality and clarity of our paper. Since we think that the reviewer has a good understanding of our paper and the reviewer did not have any specific questions for us, we would like to comment a little more about the significance of this paper.\n\nWe would like to emphasize that our results extend the previous “existence” theorems in Kawaguchi’16 to “computational” theorems that can actually help optimization of linear neural networks. In other words, previous works on linear neural networks only proved that there exist only global minima and saddle points, whereas we provide *computable* tests for distinguishing global minima from others. This means that we can use the conditions while running optimization algorithms to determine which kind of critical point we are at, and choose the next action accordingly.\n\nAside from this computational perspective, considering that optimizing deep linear networks is a nonconvex problem, our checkable global optimality conditions are interesting in their own right, because in the worst cases even checking local optimality of nonconvex problems could be intractable.", "We thank the reviewer for their effort in reviewing our paper and for the encouragement.\n\nWe agree that the main content of this paper is about linear networks, but since we also have some preliminary results on nonlinear case (albeit in the abstract functional space setting), we kept a more general title to serve as a small indicator of this. \n\nOur theorems do not imply anything about strict saddle property of linear neural networks. In fact, it was shown by Kawaguchi’16 that in linear neural networks there are many non-strict saddle points i.e. saddle points without negative eigenvalues. So, our theorems do not imply that global optima can always be reached by random initialization and just running SGD-like methods. However, there is actually some computational implication of these theorems; with these global optimality conditions, whenever we reach a critical point we can always efficiently check if it's a global minimum or a saddle point. If we are indeed at a global minimum, we can just return the current point and terminate. If we are at a saddle, we can then intentionally add random perturbations to the point and try to escape the saddle.\n\nOur nonlinear results are in a function space setting, so their implications are limited in computational aspects. However, we believe that these results can be good initial steps from the theoretical point of view; for example, we can see that one of the sufficient conditions for global optimality is the Jacobian matrix being full rank. Given that a nonlinear function can locally be linearly approximated using Jacobians, this connection is already interesting. An extension of the function space viewpoint to cover different architectures or design new architectures (that have “better” properties when viewed via the function space view) should also be possible and worth studying.", "We appreciate your efforts for reviewing our paper. We admit that the key results of our paper are for the linear case, and global optimality conditions do not directly apply to nonlinear models that are used in practice. \n\nBut we believe that there is strong value in trying to fully understand the linear case, as this offers building blocks towards investigating nonlinear models, for instance in helping identify structures and settings that help us in quest for understanding realistic architectures. 
\n\nWe note that our results extend previous papers such as Kawaguchi’16 and Hardt and Ma’17 in a substantial manner: our results have direct computational implications and provide a “complete” picture of the landscape of optimal for the deep linear case.\n\nMore concretely, previous works on this topic only show that there are only global minima or saddle points; these results are “existence” results and there is little computational gain one can get from them. In contrast, we present *efficiently checkable* conditions for distinguishing the two different types of critical points (global min or saddle): one can even use these conditions while running optimization algorithms to check whether the critical points we encounter are saddle points or not, if desired. \n\nMore broadly, we would like to emphasize again that since deep linear networks is itself a nonconvex problem, having a checkable necessary and sufficient global optimality condition is quite interesting, because in general for nonconvex problems, not only global but merely verifying even local optimality can be computationally intractable.\n\nDeveloping a provably convergent algorithm based on our results is a very good research direction to improve our understanding of the loss surface. Although our paper does not answer these questions, we appreciate the reviewer’s advice for this valuable future research direction." ]
[ 5, 7, 8, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1 ]
[ "iclr_2018_BJk7Gf-CZ", "iclr_2018_BJk7Gf-CZ", "iclr_2018_BJk7Gf-CZ", "BJKj87vxG", "B1Mwz5dxM", "ryd2EvplG" ]
iclr_2018_HJ_aoCyRZ
SpectralNet: Spectral Clustering using Deep Neural Networks
Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomings. Our network, which we call SpectralNet, learns a map that embeds input data points into the eigenspace of their associated graph Laplacian matrix and subsequently clusters them. We train SpectralNet using a procedure that involves constrained stochastic optimization. Stochastic optimization allows it to scale to large datasets, while the constraints, which are implemented using a special-purpose output layer, allow us to keep the network output orthogonal. Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points. To further improve the quality of the clustering, we replace the standard pairwise Gaussian affinities with affinities learned from unlabeled data using a Siamese network. Additional improvement can be achieved by applying the network to code representations produced, e.g., by standard autoencoders. Our end-to-end learning procedure is fully unsupervised. In addition, we apply VC dimension theory to derive a lower bound on the size of SpectralNet. State-of-the-art clustering results are reported for both the MNIST and Reuters datasets.
accepted-poster-papers
The paper proposes interesting deep-learning-based spectral clustering techniques. The use of functional embeddings for enabling spectral clustering to have an out-of-sample extension has of course been explored earlier (e.g., see the Manifold Regularization work of Belkin et al., JMLR 2006). For polynomial or kernel-based spectral clustering, the orthogonality of the outputs can be exactly handled via a generalized eigenvector problem, while here the arguments are statistically flavored and not made very clear in the original draft. Some crucial comparisons, e.g., against large-scale versions of vanilla spectral clustering and against other methods that generalize to new samples, are missing or not thorough enough. See the reviews for a more precise description of the issues. As such, the paper will benefit from a revision.
train
[ "S1FGCYFef", "HJx_Ub9ez", "HylbmO3lf", "HkRVrYTmM", "Hy75fd6Qz", "B1fEvV3Xf", "By95P4h7M", "HylaPE3Qf", "S1EtIVnmf", "Hy_4L4nmM", "HJ0ur0V1G", "B1L_so4yz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "The authors study deep neural networks for spectral clustering in combination with stochastic optimization for large datasets. They apply VC theory to find a lower bound on the size of the network. \n\nOverall it is an interesting study, though the connections with the existing literature could be strengthened:\n\n- The out-of-sample extension aspects and scalability is stressed in the abstract and introduction to motivate the work.\nOn the other hand in Table 1 there is only compared with methods that do not possess these properties.\nIn the literature also kernel spectral clustering has been proposed, possessing out-of-sample properties\nand applicable to large data sets, see\n\n``Multiway Spectral Clustering with Out-of-Sample Extensions through Weighted Kernel PCA, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 335-347, 2010\n\nSparse Kernel Spectral Clustering Models for Large-Scale Data Analysis, Neurocomputing, vol. 74, no. 9, pp. 1382-1390, 2011\n\nThe latter also discussed incomplete Cholesky decomposition which seems related to section 3.1 on p.4.\n\n- related to the neural networks aspects, it would be good to comment on the reproducability of the results with respect to the training results (local minima) and the model selection aspects. How is the number of clusters and number of neurons selected?\n\n", "PAPER SUMMARY\n\nThis paper aims to address two limitations of spectral clustering: its scalability to large datasets and its generalizability to new samples. The proposed solution is based on designing a neural network called SpectralNet that maps the input data to the eigenspace of the graph Laplacian and finds an orthogonal basis for this eigenspace. The network is trained by alternating between orthogonalization and gradient descent steps, where scalability is achieved by using a stochastic optimization scheme that instead of computing an eigendecomposition of the entire data (as in vanilla spectral clustering) uses a Cholesky decomposition of the mini batch to orthogonalize the output. The method can also handle out-of-sample data by applying the learned embedding function to new data. Experiments on the MNIST handwritten digit database and the Reuters document database demonstrate the effectiveness of the proposed SpectralNet.\n\nCOMMENTS\n\n1) I find that the output layer (i.e. the orthogonalization layer) is not well-justified. In principle, different batches require different weights on the output layer. Although the authors observe empirically that orthogonalization weights are roughly shared across different batches, the paper lacks a convincing argument for why this can happen. Moreover, it is not clear why an output layer designed to orthogonalized batches from the training set would also orthogonalize batches from the test set?\n\n2) One claimed contribution of this work is that it extends spectral clustering to large scale data. However, the paper could have commented more on what makes spectral clustering not scalable, and how the method in this paper addresses that. The authors did mention that spectral clustering requires computing eigenvectors for large matrices, which is prohibitive. However, this argument is not entirely true, as eigen-decomposition for large sparse matrices can be carried out efficiently by tools such as ARPACK. On the other hand, computing the nearest neighbor affinity or Gaussian affinity is N^2 complexity, which could be the bottleneck of computation for spectral clustering on large scale data. 
But this issue can be addressed using approximate nearest neighbors obtained, e.g., via hashing. Overall, the paper compares only to vanilla spectral clustering, which is not representative of the state of the art. The paper should do an analysis of the computational complexity of the proposed method and compare it to the computational complexity of both vanilla as well as scalable spectral clustering methods to demonstrate that the proposed approach is more scalable than the state of the art. \n\n3) Continuing with the point above, an experimental comparison with prior work on large scale spectral clustering (see, e.g. [a] and the references therein) is missing. In particular, the result of spectral clustering on the Reuters database is not reported, but one could use other scalable versions of spectral clustering as a baseline.\n\n4) Another benefit of the proposed method is that it can handle out-of-sample data. However, the evaluation of such benefit in experiments is rather limited. In reporting the performance on out-of-sample data, there is no other baseline to compare with. One can at least compare with the following baseline: apply k-means to the training data in input space, and classify each test data to the nearest centroid.\n\n5) The reason for using an autoencoder to extract features is unclear. In subspace clustering, it has been observed that features extracted from a scattering transform network [b] can significantly improve clustering performance, see e.g. [c] where all methods have >85% accuracy on MNIST. The methods in [c] are also tested on larger datasets.\n\n[a] Choromanska, et. al., Fast Spectral Clustering via the Nystrom Method, International conference on algorithmic learning theory, 2013\n\n[b] Bruna, Mallat, Invariant Scattering Convolution Networks, arXiv 2012\n\n[c] You, et. al., Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering, CVPR 2016", "Brief Summary:\nThe paper introduces a deep learning based approach that approximates spectral clustering. The basic idea is to train a neural network to map a representation of input in the eigenspace where k-means clustering can be performed as usual. The method is more scalable than the vanilla spectral clustering algorithm and it can also be used to cluster a new incoming data point without redoing the whole spectral clustering procedure. The authors have proved worst case lower bounds on the size of neural network required to perform the task using VC dimension theory. 
Experiments on MNIST and Reuters datasets show state-of-the-art performance (on Reuters there is a significant performance improvement under one measure).\n\nMain Contributions:\nIntroduced SpectralNet - a neural network that maps input points to their embeddings in the eigenspace\nUsed constrained optimization (using Cholesky decomposition) to train the final layer of the neural network to make sure that the output \"eigenvectors\" are orthonormal\nSolves the problem of scalability by using stochastic optimization (basically using mini-batches to train the neural network)\nSolves the problem of generalization to new data points as the neural network can be used to directly compute the embedding for the incoming data point in the eigenspace\nProved a lower bound for the VC dimension of spectral clustering (linear in n as opposed to linear in input dimension d for k-means, which explains the expressive power of spectral clustering)\nDerived a worst-case lower bound for the size of the neural network that is needed to realize the given objective\nExperimented by using Gaussian kernel similarity and similarity learned using a Siamese neural network (trained in an unsupervised way) on both input space and code space (auto-encoder representation)\n\nOverall:\nThe paper is very clearly written. The idea is simple yet clever. Incorporating the ortho-normalization constraint in the final layer of the neural network is interesting.\nThe VC-dimension-based results are interesting but useless, as the authors themselves argue that in practical cases the size of the neural network required will be much less than the worst-case lower bound proved in the paper.\nThe experiments demonstrate the effectiveness of the proposed approach.\nThe unsupervised training of the Siamese net is based on a code-space k-nearest-neighbor approach to get positive and negative examples. It is not clear why the learned matrix should outperform the Gaussian kernel, but the experiments show that it does.\n\n\n\n", "We thank the reviewer for clarifying their concern. It seems like the reviewer mistakenly confuses rows with columns. As we explain in detail below, we respectfully maintain that our algorithm is correct, and in particular that permuting the points does not change the orthogonalization layer. \n\nUsing our notation, a minibatch {x_1,...x_m} is arranged in an m x k matrix whose *rows* are the x_i's. For each x_i, the network then produces a non-orthogonalized output \\tilde y_i (producing an m x k matrix \\tilde Y, whose rows are the \\tilde y_i's), and then \\tilde y_i is fed to the orthogonalization layer where it is linearly transformed by a k x k matrix \\tilde L = \\sqrt{m} (L^{-T}) which is supposed to orthogonalize \\tilde Y (by multiplying \\tilde Y from the right), so that Y = \\tilde Y \\tilde L is orthogonal (i.e., Y^TY = (1/m)I).\n\nSuppose now that we obtain a new minibatch that contains the exact same points {x_1,...x_m} but permuted, so X' = PX where P is an m x m permutation matrix -- note that in our notation P permutes the *rows* (i.e. the points), not the columns (the features). This will permute the rows of \\tilde Y, and subsequently also the rows of Y, but the result Y' = PY will remain orthogonal, since Y'^TY' = (PY)^T(PY) = Y^T P^T P Y = Y^TY = (1/m)I, where this identity holds since P^T = inverse(P).\n\nConsequently, as long as each minibatch faithfully represents the data distribution, the orthogonalization layer should, at convergence, (roughly) orthogonalize all the minibatches.
Our experiments indeed demonstrate that SpectralNet converges close to the correct eigenvectors (as can be seen in Figure 2, where for MNIST the Grassmann distance converges to a low value of 0.026). \n\nWe hope this explanation clarifies our algorithm.\n\nWe also thank you for the other comments and will certainly modify our paper according to our response as requested.", "I would like to thank the author for addressing our comments.\n\n- I find that it is still unclear why there is an orthogonalization layer that can approximately orthogonalize every batch of data. The authors made the argument that, as data is drawn from a certain distribution, it is expected that if two large enough batches are sampled from this distribution they tend to share an orthogonalization matrix. However, it is easy to construct counter-examples for this argument. Consider two matrices Y1 and Y2 where Y2 is a permutation of the columns of Y1, i.e. Y2 = Y1*P. They have the same probability of being sampled from any given distribution, but the matrices that orthogonalize them, say R1 and R2, which satisfy R2 = P*R1, are very different. This is our main concern in suggesting a weak rejection, as technically the algorithm is unjustified (and incorrect if I may say).\n\n- We were suggesting that the paper include a discussion of prior works on scalable spectral clustering (for which we did not see a response in the rebuttal), as we believe that the reader can better understand the \"scalability\" of the proposed method in context. We also suggested including experimental results for scalable spectral clustering methods, although they may not be as good and as fast as the proposed method, so that we can get a better sense of the advantage of the proposed method. Overall, we would also suggest that the authors incorporate their responses to 2, 3, and 4 into the paper.\n", "Thank you very much for your review!\n\n1.\tIndeed, DEC, DCN, DEPICT, and JULE do not report details on their generalization performance. k-means results on the MNIST test set are now obtained (please see point 5 in our response to reviewer 3). Moreover, our test set performance (reported in the last paragraphs of sections 5.2.1 and 5.2.2) can and should be compared to our training set performance (reported in Table 1). As can be seen, the two are very close, which implies that SpectralNet generalizes well on MNIST and Reuters. \n2.\tThe works of Alzate and Suykens mentioned by the reviewer are indeed impressive and very elegant, and we thank the reviewer for bringing them to our attention. This work handles large training data by applying smart subsampling. Unlike their approach, our method uses the entire training data. Unfortunately, their work was not tested on standard benchmarks and we were unable to retrieve their code.\n3.\tThe reproducibility of our results can be inferred from the standard deviations reported in Table 1. Indeed, our results on MNIST and Reuters are highly reproducible. To verify this further, we repeated our experiments 100 times and obtained very similar results.\n4.\tSpecifying an appropriate number of clusters is a general challenge in clustering. In this work we assume that this number is known ahead of time, as pointed out in the first paragraph of Section 3, as well as in the input line of Algorithm 1.
In our experiments on MNIST and Reuters we simply use the true number of distinct labels (10 for MNIST, 4 for Reuters).\n5.\tFinally, regarding model selection and hyperparameter setting, in order to evaluate the connection between SpectralNet loss and clustering accuracy, we conducted a series of experiments, where we varied the net architectures and learning rate policies; the Siamese net and Gaussian scale parameter \\sigma were held fixed throughout all experiments. In each experiment, we measured the loss on a validation set and the clustering accuracy (over the entire data). The correlation between loss and accuracy across these experiments was -0.771. This implies that hyperparameter setting for the spectral map learning can be chosen based on the validation loss, and a setup that yields a smaller validation loss should be preferred. We remark that we also use the convergence of the validation loss to determine our learning rate schedule and stopping criterion.\n\n\n", "Reviewer #2:\n1.\tIndeed, DEC, DCN, DEPICT, and JULE do not report details on their generalization performance. k-means results on the MNIST test set are now obtained (see point 5 above). Moreover, our test set performance (reported in the last paragraphs of sections 5.2.1 and 5.2.2) can and should be compared to our training set performance (reported in Table 1). As can be seen, the two are very close, which implies that SpectralNet generalizes well on MNIST and Reuters. \n2.\tThe works of Alzate and Suykens mentioned by the reviewer are indeed impressive and very elegant, and we thank the reviewer for bringing them to our attention. This work handles large training data by applying smart subsampling. Unlike their approach, our method uses the entire training data. Unfortunately, their work was not tested on standard benchmarks and we were unable to retrieve their code.\n3.\tThe reproducibility of our results can be inferred from the standard deviations reported in Table 1. Indeed, our results on MNIST and Reuters are highly reproducible. To verify this further, we repeated our experiments 100 times and obtained very similar results.\n4.\tSpecifying an appropriate number of clusters is a general challenge in clustering. In this work we assume that this number is known ahead of time, as pointed out in the first paragraph of Section 3, as well as in the input line of Algorithm 1. In our experiments on MNIST and Reuters we simply use the true number of distinct labels (10 for MNIST, 4 for Reuters).\n5.\tFinally, regarding model selection and hyperparameter setting, in order to evaluate the connection between SpectralNet loss and clustering accuracy, we conducted a series of experiments, where we varied the net architectures and learning rate policies; the Siamese net and Gaussian scale parameter \\sigma were held fixed throughout all experiments. In each experiment, we measured the loss on a validation set and the clustering accuracy (over the entire data). The correlation between loss and accuracy in these experiments was -0.771. This implies that hyperparameter setting for the spectral map learning can be chosen based on the validation loss, and a setup that yields a smaller validation loss should be preferred. We remark that we also use the convergence of the validation loss to determine our learning rate schedule and stopping criterion.\n\nWe thank the reviewers for the thoughtful comments, which we will address in the final version.
We hope that at this point the reviewers will be willing to reconsider (in a somewhat positive manner) their rating for this work, which we believe to be novel and important from both algorithmic, theoretical and performance perspectives, and also as a deep learning tool for more general eigendecomposition and manifold learning problems, which may possibly be more stable than current approaches that rely on direct eigendecomposition.\n", "We thank the reviewers for their helpful comments.\nBelow we address the issues pointed out by each reviewer.\n\nReviewer #1:\n1.\tThe Siamese net is trained in an unsupervised fashion on pairs which are constructed based on Euclidean nearest neighbor relations. Yet, it learns distances that yield a significant improvement in the clustering performance comparing to the performance using Euclidean distances. We find this empirical observation quite remarkable. To this end, we have several conjectures about the mathematical reason behind this behavior, for example, the ability of this training procedure to exploit local characteristics. Understanding this further is the topic of ongoing work.\n\nReviewer #3\n1.\tThe weights of the output layer define a linear transformation that orthogonalizes its input batches. When the batches are small, we agree with the reviewer that different transformations are likely to be needed for different batches. However, when a batch is sufficiently large (as discussed in the last paragraph of section 3.1) and sampled iid from the data distribution, the linear transformation that orthogonalizes it is expected to approximately orthogonalize other (sufficiently large) batches of points sampled in a similar fashion. Indeed, we empirically found that for batch sizes of 2048 on Reuters and 1024 on MNIST, the weights of the output layer also (approximately) orthogonalize the entire dataset.\n2.\tIndeed, there are ways to increase the scalability of vanilla spectral clustering. Methods like ARPACK and PROPACK are very efficient for eigendecomposition of sparse matrices with a rapidly decaying spectrum (which is also the typical case for spectral clustering as well). Following the reviewer’s recommendation, we applied ARPACK to our affinity matrix on the Reuters dataset (n=685,071) using the sparse Gaussian affinities obtained by kNN search with k=3000 neighbors per point (a similar setting to SpectralNet; the number of neighbors is adapted proportionally to the number of neighbors SpectralNet uses in each batch). While SpectralNet takes less than 20 minutes to converge on this dataset (using the same affinity matrix), ARPACK needed 110.4 minutes to obtain the first four eigenvectors of the affinity matrix. We therefore see that SpectralNet scales well compared to ARPACK. \nPlease note that both SpectralNet and spectral clustering require pre-computed nearest neighbor graph. In our Reuters experiments we indeed used an approximate nearest neighbor search, which took 20 minutes to run. \n3.\tWe performed extensive experiments of spectral clustering using ARPACK on Reuters, using various scales and numbers of neighbors. Unfortunately, we were not able to achieve a reasonable accuracy. We conjecture that this is due of the well-known sensitivity of spectral clustering to noisy data and outliers. SpectralNet, on the other hand, appears to be more robust, due to its stochastic training. This is actually an ongoing research we are currently pursuing. 
\n4.\tWe followed the procedure proposed by the reviewer to evaluate the generalization performance of k-means on MNIST. The accuracy of the test set is .546 when using the input space and .776 when using the code space. Both these results are inferior to SpectralNet performance on this dataset. Moreover, we do compare our performance to a baseline - we actually want the performance on the test data to be similar to the performance on the training data. This is indeed the case in our experiments on both the MNIST and Reuters, as also appears in the manuscript (see the last paragraphs of sections 5.2.2 and 5.2.1, which should be compared to the results in Table 1).\n5.\tPerforming the learning task in feature spaces is a standard practice machine learning in general and deep learning in particular. Autoencoders are often used in deep clustering; see for example DCN, Vade, and DEC. Moreover, to be sure that our results were not obtained merely due to a better feature space, we even do not use our own autoencoder; rather, we use the one made publicly available by the authors of VaDE. Finally, SpectralNet can also be applied to the features of a scattering transform, as well as or to any other representation.\n", "Thank you very much for your review!\n\n1.\tThe weights of the output layer define a linear transformation that orthogonalizes its input batches. When the batches are small, we agree with the reviewer that different transformations are likely to be needed for different batches. However, when a batch is sufficiently large (as discussed in the last paragraph of section 3.1) and sampled iid from the data distribution, the linear transformation that orthogonalizes it is expected to approximately orthogonalize other (sufficiently large) batches of points sampled in a similar fashion. Indeed, we empirically found that for batch sizes of 2048 on Reuters and 1024 on MNIST, the weights of the output layer also (approximately) orthogonalize the entire dataset.\n2.\tIndeed, there are ways to increase the scalability of vanilla spectral clustering. Methods like ARPACK and PROPACK are very efficient for eigendecomposition of sparse matrices with a rapidly decaying spectrum (which is also the typical case for spectral clustering as well). Following the reviewer’s recommendation, we applied ARPACK to our affinity matrix on the Reuters dataset (n=685,071) using the sparse Gaussian affinities obtained by kNN search with k=3000 neighbors per point (a similar setting to SpectralNet; the number of neighbors is adapted proportionally to the number of neighbors SpectralNet uses in each batch). While SpectralNet takes less than 20 minutes to converge on this dataset (using the same affinity matrix), ARPACK needed 110.4 minutes to obtain the first four eigenvectors of the affinity matrix. We therefore see that SpectralNet scales well compared to ARPACK. \nPlease note that both SpectralNet and spectral clustering require pre-computed nearest neighbor graph. In our Reuters experiments we indeed used an approximate nearest neighbor search, which took 20 minutes to run. \n3.\tWe performed extensive experiments of spectral clustering using ARPACK on Reuters, using various scales and numbers of neighbors. Unfortunately, we were not able to achieve a reasonable accuracy. We conjecture that this is due of the well-known sensitivity of spectral clustering to noisy data and outliers. SpectralNet, on the other hand, appears to be more robust, due to its stochastic training. 
This is actually an ongoing research we are currently pursuing. \n4.\tWe followed the procedure proposed by the reviewer to evaluate the generalization performance of k-means on MNIST. The accuracy of the test set is .546 when using the input space and .776 when using the code space. Both these results are inferior to SpectralNet performance on this dataset. Moreover, we do compare our performance to a baseline - we actually want the performance on the test data to be similar to the performance on the training data. This is indeed the case in our experiments on both the MNIST and Reuters, as also appears in the manuscript (see the last paragraphs of sections 5.2.2 and 5.2.1, which should be compared to the results in Table 1).\n5.\tPerforming the learning task in feature spaces is a standard practice machine learning in general and deep learning in particular. Autoencoders are often used in deep clustering; see for example DCN, Vade, and DEC. Moreover, to be sure that our results were not obtained merely due to a better feature space, we even do not use our own autoencoder; rather, we use the one made publicly available by the authors of VaDE. Finally, SpectralNet can also be applied to the features of a scattering transform, as well as or to any other representation.\n", "Thank you very much for your review!\n\nThe Siamese net is trained in an unsupervised fashion on pairs which are constructed based on Euclidean nearest neighbor relations. Yet, it learns distances that yield a significant improvement in the clustering performance comparing to the performance using Euclidean distances. We find this empirical observation quite remarkable. To this end, we have several conjectures about the mathematical reason behind this behavior, for example, the ability of this training procedure to exploit local characteristics. Understanding this further is the topic of ongoing work.", "Thank you very much for bringing this work to our attention. We will examine it thoroughly. ", "I believe that you should also cite “Learning Discrete Representations via Information Maximizing Self-Augmented Training” (ICML 2017) http://proceedings.mlr.press/v70/hu17b.html.\nThis paper is closely related to your work and is also about unsupervised clustering using deep neural networks.\nAs far as I know, the proposed method, IMSAT, is the current state-of-the-art method in deep clustering (November 2017). \nCould you compare your results against their result?" ]
[ 6, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJ_aoCyRZ", "iclr_2018_HJ_aoCyRZ", "iclr_2018_HJ_aoCyRZ", "Hy75fd6Qz", "S1EtIVnmf", "S1FGCYFef", "iclr_2018_HJ_aoCyRZ", "iclr_2018_HJ_aoCyRZ", "HJx_Ub9ez", "HylbmO3lf", "B1L_so4yz", "iclr_2018_HJ_aoCyRZ" ]
iclr_2018_Hk8XMWgRb
Not-So-Random Features
We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels. Our method produces a sequence of feature maps, iteratively refining the SVM margin. We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game. Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random features-based methods.
accepted-poster-papers
New effective kernel learning methods are very well aligned with ICLR's focus on Representation Learning. As a reviewer pointed out, not all aspects of the paper are algorithmically "clean". However, the proposed approach is natural and appears to give consistent improvements over a couple of expected baselines. The paper could be strengthened with more comparisons against other kernel learning methods, but acceptance at ICLR-2018 will increase the diversity of the conversation around advances in Representation Learning.
train
[ "SyPGF6dxM", "r1fhwN9eG", "Hyzp__aWM", "HJS-kvamM", "rkWcMz3QG", "HyJQF6S7f", "S1XVxiSGG", "rJGMxiBMf", "ryzCJiBfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author" ]
[ "The paper proposes to learn a custom translation or rotation invariant kernel in the Fourier representation to maximize the margin of SVM. Instead of using Monte Carlo approximation as in the traditional random features literature, the main point of the paper is to learn these Fourier features in a min-max sense. This perspective leads to some interesting theoretical results and some new interpretation. Synthetic and some simple real-world experiments demonstrate the effectiveness of the algorithm compared to random features given the fix number of bases.\n\nI like the idea of trying to formulate the feature learning problem as a two-player min-max game and its connection to boosting. As for the related work, it seems the authors have missed some very relevant pieces of work in learning these Fourier features through gradient descent [1, 2]. It would be interesting to compare these algorithms as well.\n\n[1] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, Ziyu Wang. Deep Fried Convnets. ICCV 2015.\n[2] Zichao Yang, Alexander J. Smola, Le Song, Andrew Gordon Wilson. A la Carte — Learning Fast Kernels. AISTATS 2015.", "\nIn this paper, the authors proposed an interesting algorithm for learning the l1-SVM and the Fourier represented kernel together. The model extends kernel alignment with random feature dual representation and incorporates it into l1-SVM optimization problem. They proposed algorithms based on online learning in which the Langevin dynamics is utilized to handle the nonconvexity. Under some conditions about the quality of the solution to the nonconvex optimization, they provide the convergence and the sample complexity. Empirically, they show the performances are better than random feature and the LKRF. \n\nI like the way they handle the nonconvexity component of the model. However, there are several issues need to be addressed. \n\n1, In Eq. (6), although due to the convex-concave either min-max or max-min are equivalent, such claim should be explained explicitly. \n\n2, In the paper, there is an assumption about the peak of random feature \"it is a natural assumption on realistic data that the largest peaks are close to the origin\". I was wondering where this assumption is used? Could you please provide more justification for such assumption?\n\n3, Although the proof of the algorithm relies on the online learning regret bound, the algorithm itself requires visit all the data in each update, and thus, it is not suitable for online learning. Please clarify this in the paper explicitly. \n\n4, The experiment is weak. The algorithm is closely related to boosting and MKL, while there is no such comparison. Meanwhile, Since the proposed algorithm requires extra optimization w.r.t. random feature, it is more convincing to include the empirical runtime comparison. \n\nSuggestion: it will be better if the author discusses some other model besides l1-SVM with such kernel learning. \n", "In this paper the authors consider learning directly Fourier representations of shift/translation invariant kernels for machine learning applications. They choose the alignment of the kernel to data as the objective function to optimize. They empirically verify that the features they learned lead to good quality SVM classifiers. My problem with that paper is that even though at first glance learning adaptive feature maps seems to be an attractive approach, authors' contribution is actually very little. Below I list some of the key problems. 
First of all the authors claim in the introduction that their algorithm is very fast and with provable theoretical guarantees. But in fact later they admit that the problem of optimizing the alignment is a non-convex problem and the authors end up with a couple of heuristics to deal with it. They do not really provide any substantial theoretical justification why these heuristics work in practice even though they observe it empirically. The assumptions that large Fourier peaks happen close to origin is probably well-justified from the empirical point of view, but it is a hack, not a well established well-grounded theoretical method (the authors claim that in their experiments they found it easy to find informative peaks, even in hundreds of dimensions, but these experiments are limited to the SVM setting, I have no idea how these empirical findings would translate to other kernelized algorithms using these adaptive features). The Langevin dynamics algorithm used by the authors to find the peaks (where the gradient is available) gives only weak theoretical guarantees (as the authors actually admit) and this is a well known method, certainly not a novelty of that paper. Finally, the authors notice that \"In the rotation-invariant case, where Ω is a discrete set, heuristics are available\". That is really not very informative (the authors refer to the Appendix so I carefully read that part of the Appendix, but it is extremely vague, it is not clear at all how the Langevin dynamics can be \"emulated\" by a discrete Markov chain that mixes fast; the authors do not provide any justification of that approach, what is the mixing time ?; how the \"good emulation property\" is exactly measured ?). In the conclusions the authors admit that: \"Many theoretical questions remain, such as accelerating the search for Fourier peaks\". I think that the problem of accelerating this approach is a critical point that this publication is missing. Without this, it is actually really hard to talk about general mechanism of learning adaptive Fourier features for kernel algorithms (which is how the authors present their contribution); instead we have a method heavily customized and well-tailored to the (not particularly exciting) SVM scenario (with optimization performed by the standard annealing method; it is not clear at all whether for other downstream kernel applications this approach for optimizing the alignment would provide good quality models) that uses lots of task specific hacks and heuristics to efficiently optimize the alignment. Another problem is that it is not clear at all to me how authors' approach can be extended to non shift-invariant kernels that do not benefit from Bochner's Theorem. Such kernels are very related to neural networks (for instance PNG kernels with linear rectifier nonlinearities correspond to random layers in NNs with ReLU) and in the NN context are much more interesting that radial basis function or in general shift-invariant kernels. 
A general kernel method should address this issue (the authors just claim in the conclusions that it would be interesting to explore the NN context in more detail).\n\nTo sum it up, it is a solid submission, but in my opinion without a substantial contribution and working only in a very limited setting when it is heavily relying on many unproven hacks and heuristics.", "Thanks for the reviews and comments!\n\nWe've made a few minor revisions to the manuscript, mostly for clarity and brevity.", "Thanks for your interest!\n\nOur algorithm for maximizing the Fourier potential picks the highest peak along the trajectory of Langevin dynamics (rather than returning the final $\\omega$). This seems necessary for refining the margin effectively, and requires evaluation of the full Fourier potential at each iteration. Evaluating the Fourier potential function requires a full pass over the data (rather than an SGD minibatch).\n\nFurthermore, the Online Gradient Ascent steps need to be taken on all entries of $\\alpha$, which also requires a full pass over data; subsampled analogues such as Online Coordinate Ascent give a worse regret bound.\n\nSince each iteration requires a full pass over data anyway, Langevin dynamics is preferred over SGD.", "The two player game formulation for SVM is pretty cool! Computing the optimal feature by searching the largest Fourier peak under adversarially picked alpha also looks interesting. \n\nHowever, it is not clear to me why the authors are using Langevin dynamics for searching the peak -- why not SGD? Is the additional noise in Langevin dynamics more useful empirically? ", "Similar to our work, paper [2] also considers learning the Fourier spectrum of a shift-invariant kernel, where the spectrum is parameterized as a mixture of (a fixed number of) Gaussians or a piecewise linear function, which can definitely fit in our min-max formulation. However, in comparison, our end-to-end method is more general since it doesn’t rely on any specific parameterization. Paper [1] is also interesting and relevant since it draws connections between spectrally learned kernel machines and deep neural networks. As we mention in the conclusion, it is an exciting future direction to build our method into a deep neural network. We thank the reviewer for pointing out these references, and we’ll add them to the related work section.\n", "\n@1: We mention the minimax theorem in the proof, in the appendix. We can add a brief clarification in the main paper.\n\n@2: The only assumption (for the theorems to hold) is that Algorithm 1 finds an eps-approximate global maximum. Our discussion on band-limitedness of real-world data is simply to argue that this non-convex problem is plausibly easy on realistic optimization landscapes: low-frequency features (on the same scale as the RBF bandwidth parameter; see Appendix A.1) are informative.\n\n@3: That’s correct, just as in boosting. We are happy to find a way to further emphasize the distinction, to reduce confusion.\n\n@4:\n- As mentioned in the paper, MKL methods take >100 times longer on datasets as large as CIFAR-10.\n- Unlike the selling point of methods such as LKRF and Quasi-Monte Carlo, our method has much greater expressivity (thus evidently saturates at a much higher accuracy). Hence, the value of a quantitative wall clock time comparison is unclear. 
We believe that the existing discussion (primal like RF; parallelizable; reasonable wall clock time in practice) suffices to address qualitative questions on efficiency as compared to other paradigms.\n- Is there a specific boosting method the reviewer believes to be related enough, so as to require an end-to-end comparison? We found that it’s unclear how to choose an ensemble in boosting for fair comparison with learning an optimal translation-invariant kernel (an infinite-dimensional continuous family). As far as we know, though our theoretical analysis bears a strong relationship to boosting, the end-to-end methodology is somewhat dissimilar.\n\n@Suggestion: We agree that considering state-of-the-art settings and applications is an important and interesting direction (as we note in the conclusion). As we mentioned in another review, any convex kernel machine admitting a dual could fit in our min-max formulation, while the min-max SVM objective captures the structure of learning a kernel for any such kernel machine.\n", "The reviewer’s summary (“they choose the alignment of the kernel to data as the objective function to optimize”) appears to have missed our main contribution, which is our formulation of the kernel learning problem as a min-max game whose Nash equilibrium gives the optimal kernel. This is harder than simply maximizing kernel alignment, which is the objective considered by most previous works; it is also more useful and principled, as this optimizes the generalization bound directly. Our contribution lies in the provable black-box reduction from computing this Nash to solving *adversarially weighted* instances of kernel alignment. We are concerned that this confusion possibly underlies the reviewer’s score and conclusion.\n\n\n@Langevin: We never claim that the use of Langevin gradient is the crux (or novelty) of the paper. Again, our contribution is giving a theoretically sound reason and strong empirical evidence to use multiple rounds of *adversarially weighted* kernel alignment rather than the uniformly weighted one (which the reviewer possibly had in mind). The role of Langevin is to provide an end-to-end pipeline for this reduction which works well in practice; we don’t believe that this makes the entire methodology a “hack”.\n\n@Spherical harmonics: The mention of the Markov chain on spherical harmonic indices should be viewed as a side note for practical implementation. Finding a band-limited Fourier peak in discrete space is *easier* in practice, even though there is no gradient; we first mention that enumeration of indices is possible, with no possible optimization error (unlike the continuous case). By “emulating” Langevin dynamics, we refer to a random walk on the lattice of valid indices. Again, the role of all discussion here on Monte Carlo for optimization is to complete a practical end-to-end pipeline, in this case for the rotation-invariant version.\n\n@Non-convexity: An end-to-end polynomial-time guarantee with no assumptions would be a *significant* breakthrough: it would entail an optimal way to train two-layer neural nets with cosine activations. Our reduction connects this daunting task to a classic and natural non-convex problem (high-dimensional FFT), an active area of theoretical research as pointed out in the conclusion.\n\n@Fourier assumption: We don’t believe that the hypothesis that large Fourier peaks are close to the origin is such a controversial one. 
This is essentially the same as why \\ell_2 regularization is widely adopted and well-justified (see paper [1]). Algorithmically, as we point out in a response to another review, the assumption does not even need to show up. However, we could allay this concern more rigorously via an explicitly-enforced \\ell_2 regularizer or constraint. Since multiple reviewers have mentioned this, we’ll revise the manuscript to clarify.\n\n@Downstream kernel methods: Our method applies to downstream supervised kernel algorithms at least as well as kernel alignment, a widely considered objective in MKL. (Note that kernel alignment is a special case of our method, with T=1 round of boosting.) We found that a comprehensive presentation and evaluation of the plethora of kernel methods would distract from the main focus. We further note that the min-max formulation is possible for any convex kernel machine admitting a dual: e.g. support vector regression, kernel ridge regression, and statistical testing (which we can add to the appendix). The min-max SVM objective captures the structure of learning a kernel for any such kernel machine.\n\n@More expressive kernels: Shift-invariant kernels are very expressive and general, compared to kernel families with any comparable theory (existing work in random features). In ICML/NIPS/JMLR 2016-2017, there is a huge amount of research solely on efficiently approximating (not learning) a *single* RBF kernel (see, e.g. papers [3, 4, 5, 6]). We also agree that non-shift-invariant kernels are a very interesting direction to consider. However, much care must be taken, as some restriction on the kernel must be chosen; otherwise, overfitting is inevitable (due to the no-free-lunch theorem).\n\n@ReLU-NN: To fully address the issue of learning a two-layer ReLU neural network is a well-known NP-hard problem (paper [2]). We agree that hardness results shouldn’t prevent us from advancing. However, any non-trivial improvement on this problem should surely deserve a separate paper, which is why we mention it as an interesting future direction.\n\n\n[1] Kakade et al. On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization, NIPS 2008.\n[2] Klivans, Sherstov. Cryptographic Hardness for Learning Intersections of Halfspaces, FOCS 2006.\n[3] Yu et al. Orthogonal random features, NIPS 2016.\n[4] Lyu. Spherical Structured Feature Maps for Kernel Approximation, ICML 2017.\n[5] Avron et al. Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels, JMLR 2016.\n[6] Dao et al. Gaussian Quadrature for Kernel Features, NIPS 2017." ]
[ 7, 6, 4, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hk8XMWgRb", "iclr_2018_Hk8XMWgRb", "iclr_2018_Hk8XMWgRb", "iclr_2018_Hk8XMWgRb", "HyJQF6S7f", "iclr_2018_Hk8XMWgRb", "SyPGF6dxM", "r1fhwN9eG", "Hyzp__aWM" ]
iclr_2018_Hkn7CBaTW
Learning how to explain neural networks: PatternNet and PatternAttribution
DeConvNet, Guided BackProp, and LRP were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.
accepted-poster-papers
The paper shows that many of the current state-of-the-art interpretability methods are inaccurate even for linear models. Then based on their analysis of linear models they propose a technique that is thus accurate for them and also empirically provides good performance for non-linear models such as DNNs.
train
[ "S1_hZzxBM", "ByUZ0g5lG", "Hk0lS3teG", "H1AKArLNf", "BJ711zqxG", "rJv-eUTQG", "SJZuJ8TQf", "HJ4bkI6mG", "HyG5aL9C-", "rJUSnbE0b", "B12BY_ATZ" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public" ]
[ "Both integrated gradients [1] and DeepLift [2] are good, commonly used methods which are certainly fast enough to be used in the image degradation experiment. A meaningful comparison to them, or any method published within the past two years, is very clearly missing from this paper.\n\nContext: I'm an active researcher in the space, was going to post an unsolicited, highly negative review, due to the poor comparison with state of the art, but couldn't find the time. \n\n[1] https://arxiv.org/abs/1703.01365\n[2] https://arxiv.org/abs/1704.02685", "summary of article: \nThis paper organizes existing methods for understanding and explaining deep neural networks into three categories based on what they reveal about a network: functions, signals, or attribution. “The function extracts the signal from the data by removing the distractor. The attribution of output values to input dimensions shows how much an individual component of the signal contributes to the output…” (p. 5). The authors propose a novel quality criterion for signal estimators, inspired by the analysis of linear models. They also propose two new explanatory methods, PatternNet (for signal estimation) and PatternAttribution (for relevance attribution), based on optimizing their new quality criterion. They present quantitative and qualitative analyses comparing PatternNet and PatternAttribution to several existing explanation methods on VGG-19.\n\n* Quality: The claims of the paper are well supported by quantitative results and qualitative visualizations. \n* Clarity: Overall the paper is clear and well organized. There are a few points that could benefit from clarification.\n* Originality: The paper puts forth an original framing of the problem of explaining deep neural networks. Related work is appropriately cited and compared. The authors's quality criterion for signal estimators allows them to do a quantitative analysis for a problem that is often hard to quantify.\n* Significance: This paper justifies PatternNet and PatternAttribution as good methods to explain predictions made by neural networks. These methods may now serve as an important tool for future work which may lead to new insights about how neural networks work. \n\nPros:\n* Helps to organize existing methods for understanding neural networks in terms of the types of descriptions they provide: functions, signals or attribution.\n* Creative quantitative analyses that evaluate their signal estimator at the level of single units and entire networks.\n\nCons:\n* Experiments consider only the pre-trained VGG-19 model trained on ImageNet. Results may not generalize to other architectures/datasets.\n* Limited visualizations are provided. \n\nComments:\n* Most of the paper is dedicated to explaining these signal estimators and quality criterion in case of a linear model. Only one paragraph is given to explain how they are used to estimate the signal at each layer in VGG-19. On first reading, there are some ambiguities about how the estimators scale up to deep networks. It would help to clarify if you included the expression for the two-component estimator and maybe your quality criterion for an arbitrary hidden unit. \n* The concept of signal is somewhat unclear. 
Is the signal \n * (a) the part of the input image that led to a particular classification, as described in the introduction and suggested by the visualizations, in which case there is one signal per image for a given trained network?\n * (b) the part of the input that led to activation of a particular unit, as your unit-wise signal estimators are applied, in which case there is one signal for every unit of a trained network? You might benefit from two terms to separate the unit-level signal (what caused the activation of a particular unit?) from the total signal (what caused all activations in this network?).\n* Assuming definition (b), I think the visualizations would be more convincing if you showed the signal for several output units. One would like to see that the signal estimation is doing more than separating foreground from background but is actually semantically specific. For instance, for the mailbox image, what does the signal look like if you propagate back from only the output unit for umbrella compared to the output unit for mailbox? \n* Do you have any intuition about why your two-component estimator doesn’t seem to be working as well in the convolutional layers? Do you think it is related to the fact that you are averaging within feature maps? Is it strictly necessary to do this averaging? Can you imagine a signal estimator more specifically designed for convolutional layers?\n\nMinor issues: \n* The label \"Figure 4\" is missing. Only subcaptions (a) and (b) are present.\n* Color scheme of figures: Why two oranges? It’s hard to see the difference.", "The authors analyze and show theoretical shortcomings in previous methods of explaining neural networks and propose an elegant way to remove these shortcomings in their methods PatternNet and PatternAttribution.\n\nThe quest of visualizing neural network decisions is now a very active field with many contributions. The contribution made by the authors stands out due to its elegant combination of theoretical insights and improved performance in application. The work is very detailed and reads very well.\n\nI am missing at least one figure with a comparison with more state-of-the-art methods (e.g. I would love to see results from the method by Zintgraf et al. 2017 which unlike all included prior methods seems to produce much crisper visualizations and also is very related because it learns from the data, too).\n\nMinor questions and comments:\n* Fig 3: Why is the random method so good at removing correlation from fc6? And the S_w even better? Something seems special about fc6.\n* Fig 4: Why is the identical estimator better than the weights estimator, and that one better than S_a?\n* It would be nice to compare the image degradation experiment with using the ranking provided by the work from Zintgraf, which should by definition function as a kind of gold standard\n* Figure 5, 4th row (mailbox): It looks like the umbrella significantly contributes to the network decision to classify the image as \"mailbox\", which doesn't make too much sense. Is it a problem of the visualization (maybe there is next to no weight on the umbrella), of PatternAttribution, or a strange but interesting artifact of the analyzed network?\n* page 8 \"... closed form solutions (Eq (4) and Eq. (7))\" The first reference seems to be wrong. I guess Eq. 4 should instead reference the unnumbered equation after Eq.
3.\n\nUpdate 2018-01-12: Upgraded Rating from 7 to 8 (see comment below)", "Thank you for your detailed response.\n\nI appreciate the added comparison with the Zintgraf method in the appendix. To me, the attribution often looks surprisingly different (ignoring the obvious difference that only the Zintgraf method can output negative attribution). \n\nRegarding the image degradation experiment in Figure 4: I understand that it is not feasible to run the Zintgraf method on the full validation dataset, but I still think it would be interesting to use it as a gold standard at least on a small subset of images. Since the paper is now already one year old, chances are that the computation times are shorter on modern hardware. But arguably it is still not feasible on a reasonably representative dataset, which is very unfortunate.\n\nThe newly added summary of the different methods in the appendix is very helpful, thank you for that!\n\nBy the way, do you plan to release the code? I would be interested in applying your method to a few of my networks.\n\nTo summarize, I am very happy with the paper now and will upgrade my rating to \"8: Top 50% of papers\".\n", "I found this paper an interesting read for two reasons: First, interpretability is an increasingly important problem as machine learning models grow more and more complicated. Second, the paper aims at a generalization of previous work on confounded linear model interpretation in neuroimaging (the so-called filter versus patterns problem). The problem is relevant for discriminative models: If the objective is really to visualize the generative process, the \"filters\" learned by the discriminative process need to be transformed to correct for spatially correlated noise. \n\nGiven the focus on extracting a visualization of the generative process, it would have been meaningful to place the discussion in a greater frame of generative model deep learning (VAEs, GANs etc etc). At present the \"state of the art\" discussion appears quite narrow, being confined to recent methods for visualization of discriminative deep models.\n\nThe authors convincingly demonstrate for the linear case that their \"PatternNet\" mechanism can produce the generative process (i.e. discard spatially correlated \"distractors\"). PatternNet is generalized to multi-layer ReLU networks by construction of node-specific pattern vectors and back-propagating these through the network. The \"proof\" (eqs. 4-6) is sketchy and involves uncontrolled approximations. The back-propagation mechanism is very briefly introduced and depicted in figure 1.\n\nYet, the results are rather convincing. Both the anecdotal/qualitative examples and the more quantitative patch elimination experiment figure 4a (?number missing) \n\nI do not understand the remark: \"However, our method has the advantage that it is not only applicable to image models but is a generalization of the theory commonly used in neuroimaging Haufe et al. (2014).\" what ??\n\nOverall, I appreciate the general idea. However, the contribution could have been much stronger based on a detailed derivation with testable assumptions/approximations, and if based on a clear declaration of the aim.\n\n", "We thank the reviewer for his detailed comments!\n\nQuote:\nI am missing at least one figure with a comparison with more state-of-the-art methods (e.g. I would love to see results from the method by Zintgraf et al.
2017 which unlike all included prior methods seems to produce much crisper visualizations and also is very related because it learns from the data, too).\n\nAnswer\nThe reason the Zintgraf paper was left out initially is that it does not perform an exact decomposition of the output value into input contributions such as defined by LRP and DTD. Instead it defines a different, bayesian, measure on importance. The visualisations are therefore not directly comparable. The work by Zintgraf is excellent and we have included a comparison for the qualitative visualisation in the appendix.\n\n\n\n\nQuote:\n* Fig 3: Why is the random method so good at removing correlation from fc6? And the S_w even better? Something seems special about fc6.\n\nAnswer:\nThe weight vector always has a dot product of 1 with the informative direction. They do not coincide but they are correlated. Therefore it is to be expected that S_w performs better than the random direction. The fc_6 result is indeed surprising. It is the first fully connected layer and has therefore the largest dimensionality in the input. This makes measuring the quality of the signal estimators more difficult in this layer. \n\n\n\n\nQuote:\n* Fig 4: Why is the identical estimator better than the weights estimator and that one better than S_a?\n\nAnswer:\nThe degradation experiment favors the identity estimator since LRP(=identity estimator) reduces to gradient x input. If the gradient would be constant (as it is in a linear model) the biggest change you can create in the logit by changing a single dimension.That the linear pattern does not work as well as the two-component version can be attributed to the fact that it incorrectly models the signal component by ignoring nonlinear component introduced by the ReLu.\n\n\n\n\nQuote:\n* It would be nice to compare the image degradation experiment with using the ranking provided by the work from Zintgraf which should by definition function as a kind of gold standard\n\nAnswer:\nIt would be interesting but not feasible unfortunately. The work by Zintgraf et al takes for VGG 70 minutes per image (according to their paper). Processing each of the 50.000 validation images already takes 50.000 images * 70 min /60 (min) /24 (hours) = 2430 days of compute. Since we have to process each image multiple times times this is simply not possible.\n\n\n\n\nQuote:\n* Figure 5, 4th row (mailbox): It looks like the umbrella significantly contributes to the network decision to classify the image as \"mailbox\" which doesn't make too much sense. Is is a problem of the visualization (maybe there is next to no weight on the umbrella), of PatternAttribution or a strange but interesting a artifact of the analyzed network?\n\nAnswer:\nWe have no definitive explanation yet. A possible explanation is that the second to last layer has to contain information on all the classes. Umbrella is one of these classes. Also in the explanation of the Zintgraf method the umbrella contributes positively to the mailbox class. This is an indication that it could be an artifact of the analyzed network.\n\n\n\n\nQuote:\n* page 8 \"... closed form solutions (Eq (4) and Eq. (7))\" The first reference seems to be wrong. I guess Eq 4. should instead reference the unnumbered equation after Eq. 3.\n\nAnswer:\nThanks: we will update the manuscript.\n", "We thank the reviewer for the insightful and balanced review. 
\n\n\n\n\nQuote:\nGiven the focus on extracting visualization of the generative process, it would have been meaningful to place the discussion in a greater frame of generative model deep learning (VAEs, GANs etc etc). At present the \"state of the art\" discussion appears quite narrow, being confined to recent methods for visualization of discriminative deep models.\n\nAnswer:\nWe intentionally kept the scope of the state of the art focussed on the methods for discriminative models. We motivate this choice as follows: our analysis focuses on these methods that analyse discriminative models. The goal of the interpretability methods is to find what the informative component is in the data. The general field of generative modelling on the other hand tries to model the full data, both the informative and non-informative components. \nWe refrained from directly comparing to the greater framework of generative models in deep learning because we wanted to prevent confusion among the readers and to limit the length of the manuscript. That being said, we do believe that these models (GAN’s, VAE, …) can become part of methods for interpretability, e.g. they could be used for signal estimation. \n\n\n\n\nQuote:\nThe authors convincingly demonstrate for the linear case, that their \"PatternNet\" mechanism can produce the generative process (i.e. discard spatially correlated \"distractors\"). The PatternNet is generalized to multi-layer ReLu networks by construction of node-specific pattern vectors and back-propagating these through the network. The \"proof\" (eqs. 4-6) is sketchy and involves uncontrolled approximations. The back-propagation mechanism is very briefly introduced and depicted in figure 1.\n\nAnswer:\nTo obtain PatternNet, we started by maximizing equation 1. \nThis equation describes that we want to remove the signal from the input.\nThe signal being the component in the input that is predictive (linearly) about the output of the neuron. \nIn Equation 2, we show that this can be done by ensuring that the covariance between the signal and the output is identical to the covariance between the original input and the output. This is shown generally without making additional assumptions on the signal.\n\nHowever, to turn Eq. 2 into an actionable approach, we must make an assumption on the functional form of the signal estimator. This is what we do in Eq. 3 for the linear case and Eq. 4 for the non-linear case. \nOn the other hand, Equations 5 and 6 are simply re-writing the covariance. There is no additional approximation. \nThe step to Eq. 7 introduces a new assumption. Here we assume that the contribution to the covariances for 5 and 6 are equal in the non-firing regime are equal. The same holds for the firing (activation above 0) regime. Since this is an approximation, we carefully designed our experiments (including the one in Fig. 3) to measure the quality of this approximation.\n\nWe do agree with the reviewer that it would be extremely valuable to create a formal proof that this is the optimal approach. However, so far we did not find a way to create this proof. Furthermore, we are not aware of any formal approach within the field of interpretability that is able to do this. Instead, we have to rely on the evidence that (1) our approach can solve the linear toy problem correctly and (2) our experimental results indicate that it is a quantitative and qualitative improvement over previous methods. 
\nTo clarify the back-propagation mechanism we updated the manuscript with an appendix making the algorithms explicit.\n\n\n\n\nQuote:\nYet, the results are rather convincing. Both the anecdotal/qualitative examples and the more quantitative patch elimination experiment figure 4a (?number missing) \n\nAnswer:\nWe will update the caption of the figure.\n\n\n\n\nQuote: \nI do not understand the remark: \"However, our method has the advantage that it is not only applicable to image models but is a generalization of the theory commonly used in neuroimaging Haufe et al. (2014).\" what ??\n\nAnswer:\nWe will rephrase this as: Our method is a generalization of the analysis of linear models known in Neuroimaging (Haufe et al. (2014)) that makes it applicable to deep networks.\n", "We thank the reviewer for the in depth and careful review! \n\n\nQuote:\n* Most of the paper is dedicated to explaining these signal estimators and quality criterion in case of a linear model. Only one paragraph is given to explain how they are used to estimate the signal at each layer in VGG-19. On first reading, there are some ambiguities about how the estimators scale up to deep networks. It would help to clarify if you included the expression for the two-component estimator and maybe your quality criterion for an arbitrary hidden unit. \n\nAnswer:\nThe two component estimator is in Eq. 4 with the direction defined as by Eq. 7. The Quality criterion is eq. 1. Since we analyze neurons between non-linearities. In the manuscript we focussed on ReLu networks, but ideally different estimators will be developed for different non-linearities in the future. \nThe algorithm for the back-propagation will be added to the manuscript in the appendices.\n\n\n\n\nQuote:\n* The concept of signal is somewhat unclear. Is the signal \n * (a) the part of the input image that led to a particular classification, as described in the introduction and suggested by the visualizations, in which case there is one signal per image for a given trained network?\n * (b) the part of the input that led to activation of a particular unit, as your unit wise signal estimators are applied, in which case there is one signal for every unit of a trained network? You might benefit from two terms to separate the unit-level signal (what caused the activation of a particular unit?) from the total signal (what caused all activations in this network?).\n\nAnswer:\nIn our analysis we used definition b and define it neuron-wise. As mentioned in the manuscript, the visualised signal is a superposition of what are assumed to be the neuron-wise signals. \n\n\n\n\nQuote:\n* Do you have any intuition about why your two-component estimator doesn’t seem to be working as well in the convolutional layers? Do you think it is related to the fact that you are averaging within feature maps? Is it strictly \nWe have no definitive explanation and are still investigating this.\n", "In a linear model the gradient is constant, so the integrated gradients method produces the element-wise product of the gradient and the image (assuming an all-zero baseline).\n\nMy understanding is that the authors here make the point that the image is generally signal + distractors (see Fig. 2), and the correct attribution would be the element-wise product of the gradient and the signal (without distractors). 
Thus, you need to first estimate what the signal is, which is what the authors address here.", "Why don't you baseline against prior work in your evaluation?\n\nThere's a big problem with clutter in this space, with lots of other approaches to doing this, some of which have been published just this year. It's incumbent upon you to demonstrate that your method is better than the 10 other methods you cite, rather than tucking away comparisons into a one-paragraph section at the end of your results, which reads more like a related work section.", "Doesn't integrated gradients, which you cite (https://arxiv.org/abs/1703.01365, ICML 2017), produce the theoretically correct explanation for a linear model, and also prove that their method is the only method which can do so under pretty reasonable assumptions?" ]
[ -1, 8, 8, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "H1AKArLNf", "iclr_2018_Hkn7CBaTW", "iclr_2018_Hkn7CBaTW", "rJv-eUTQG", "iclr_2018_Hkn7CBaTW", "Hk0lS3teG", "BJ711zqxG", "ByUZ0g5lG", "B12BY_ATZ", "iclr_2018_Hkn7CBaTW", "iclr_2018_Hkn7CBaTW" ]
iclr_2018_ByOfBggRZ
Detecting Statistical Interactions from Neural Network Weights
Interpreting neural networks is a crucial and challenging task in machine learning. In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions. We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices. We demonstrate the performance of our method and the importance of discovered interactions via experimental results on both synthetic datasets and real-world application datasets.
accepted-poster-papers
The paper proposes a way of detecting statistical interactions in a dataset based on the weights learned by a DNN. The idea is interesting and quite useful as is showcased in the experiments. The reviewers feel that the paper is also quite well written and easy to follow.
test
[ "SyQM6W5gz", "Hy8XC6Kxf", "Hy9gtVoxz", "HkVnyt57M", "HJf2U17mz", "BkR-FYvzf", "rJZYeXMzG", "S1xmKLxzM", "Bk7UD8gMf", "ry4GOIgGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper presents a method to identify high-order interactions from the weights of feedforward neural networks. \n\nThe main benefits of the method are:\n1)\tCan detect high order interactions and there’s no need to specify the order (unlike, for example, in lasso-based methods).\n2)\tCan detect interactions appearing inside of non-linear function (e.g. sin(x1 * x2))\n\nThe method is interesting, in particular if benefit #2 holds experimentally. Unfortunately, there are too many gaps in the experimental evaluation of this paper to warrant this claim right now.\n\nMajor:\n\n1)\tArguably, point 1 is not a particularly interesting setting. The order of the interactions tested is mainly driven by the sample size of the dataset considered, so in some sense the inability to restrict the order of the interaction found can actually be a problem in real settings. \nBecause of this, it would be very helpful to separate the evaluation of benefit 1 and 2 at least in the simulation setting. For example, simulate a synthetic function with no interactions appearing in non-linearities (e.g. x1+x2x3x4+x4x6) and evaluate the different methods at different sample sizes (e.g. 100 samples to 1e5 samples). The proposed method might show high type-1 error under this setting. Do the same for the synthetic functions already in the paper. By the way, what is the sample size of the current set of synthetic experiments?\n2)\tThe authors claim that the proposed method identifies interactions “without searching an exponential solution space of possible interactions”. This is misleading, because the search of the exponential space of interactions happens during training by moving around in the latent space identified by the intermediate layers. It could perhaps be rephrased as “efficiently”.\n3)\tIt’s not clear from the text whether ANOVA and HierLasso are only looking for second order interactions. If so, why not include a lasso with n-order interactions as a baseline?\n4)\tWhy aren’t the baselines evaluated on the real datasets and heatmaps similar to figure 5 are produced?\n5)\tIs it possible to include the ROC curves corresponding to table 2?\n\n\nMinor:\n\n1)\tHave the authors thought about statistical testing in this framework? The proposed method only gives a ranking of possible interactions, but does not give p-values or similar (e.g. FDRs).\n2)\t12 pages of text. Text is often repetitive and can be shortened without loss of understanding or reproducibility.\n", "This paper develops a novel method to use a neural network to infer statistical interactions between input variables without assuming any explicit interaction form or order. First the paper describes that an 'interaction strength' would be captured through a simple multiplication of the aggregated weight and the weights of the first hidden layers. Then, two simple networks for the main and interaction effects are modeled separately, and learned jointly with posing L1-regularization only on the interaction part to cancel out the main effect as much as possible. The automatic cutoff determination is also proposed by using a GAM fitting based on these two networks. 
A nice series of experimental validations demonstrates that various types of interactions can be detected, while also fairly clarifying the limitations.\n\nIn addition to the related work mentioned in the manuscript, interaction detection also originates from the so-called AID, literally 'automatic interaction detector' (Morgan & Sonquist, 1963), which is also the origin of CHAID and CART; thus tree-based methods like Additive Groves would be one of the main methods for this. But given the flexibility of function representations, the use of neural networks would be worth rethinking, and this work would give one clear example.\n\nI liked the overall idea, which is clean and simple, but also found several points still confusing and unclear.\n\n1) One of the keys behind this method is the architecture described in 4.1. But this part sounds quite heuristic, and it is unclear to me how this can affect facts such as Theorem 4 and Algorithm 1. Is absorbing the main effect not critical to these facts? In a standard statistical sense, interaction would be something like the residuals after removing the main (additive) effect (like a standard likelihood ratio test for models with vs. without interactions).\n\n2) The description of the neural network for the main effect is a bit unclear. For example, what exactly is meant by 'networks with univariate inputs for each input variable'? Is my guess that it is a 1-10-10-10-1 network (in the experiments) correct? Also, do g_i and g_i' in the GAM model (sec 4.3) correspond to the two networks for the main and interaction effects respectively?\n\n3) mu is finally fixed to the min function, and I'm not sure why this is abstracted throughout the manuscript. Is it for considering the requirements for any possible criteria?\n\nPros:\n- detecting (any order / any form of) statistical interactions by neural networks is provided.\n- nice experimental setup and evaluations with comparisons to relevant baselines by ANOVA, HierLasso, and Additive Groves.\n\nCons:\n- some parts of the explanation supporting the idea have an unclear relationship to what was actually done, in particular regarding how the main effect is cancelled out.\n- the neural network architecture with L1 regularization is a bit heuristic, and I'm not entirely confident that this architecture can capture only the interaction effect by cancelling out the main effect.\n\n", "Based on a hierarchical hereditary assumption, this paper identifies pairwise and high-order feature interactions by re-interpreting neural network weights, assuming higher-order interactions exist only if all their induced lower-order interactions exist. Using a multiplication of the absolute values of all neural network weight matrices on top of the first hidden layer, this paper defines the aggregated strength z_r of each hidden unit r contributing to the final target output y. Multiplying z_r by some statistics of the weights connecting a subset of input features to r and summing over r results in the final interaction strength of each feature interaction subset, with the feature interaction order equal to the size of each feature subset. \n\nMain issues:\n\n1. Aggregating neural network weights to identify feature interactions is very interesting. However, completely ignoring activation functions makes the method quite crude. \n\n2. High-order interacting features must share some common hidden unit somewhere in a hidden layer within a deep neural network. 
Restricting to the first hidden layer in Algorithm 1 inevitably misses some important feature interactions.\n\n3. The neural network weights heavily depends on the l1-regularized neural network training, but a group lasso penalty makes much more sense. See Group Sparse Regularization for Deep Neural Networks (https://arxiv.org/pdf/1607.00485.pdf).\n\n4. The experiments are only conducted on some synthetic datasets with very small feature dimensionality p. Large-scale experiments are needed.\n\n5. There are some important references missing. For example, RuleFit is a good baseline method for identifying feature interactions based on random forest and l1-logistic regression (Friedman and Popescu, 2005, Predictive learning via rule ensembles); Relaxing strict hierarchical hereditary constraints, high-order l1-logistic regression based on tree-structured feature expansion identifies pairwise and high-order multiplicative feature interactions (Min et al. 2014, Interpretable Sparse High-Order Boltzmann Machines); Without any hereditary constraint, feature interaction matrix factorization with l1 regularization identifies pairwise feature interactions on datasets with high-dimensional features (Purushotham et al. 2014, Factorized Sparse Learning Models with Interpretable High Order Feature Interactions). \n\n6. At least, RuleFit (Random Forest regression for getting rules + l1-regularized regression) should be used as a baseline in the experiments.\n\nMinor issues:\n\nRanking of feature interactions in Algorithm 1 should be explained in more details.\n\nOn page 3: b^{(l)} \\in R^{p_l}, l should be from 1, .., L. You have b^y.\n\n\nIn summary, the idea of using neural networks for screening pairwise and high-order feature interactions is novel, significant, and interesting. However, I strongly encourage the authors to perform additional experiments with careful experiment design to address some common concerns in the reviews/comments for the acceptance of this paper.\n \n========\nThe additional experimental results are convincing, so I updated my rating score.\n ", "The revision is convincing, so the rating score is updated.", "5 & 6. >> On the remaining baseline references\n\nWe have added references for the remaining baselines as you suggested (on Page 2 and in Appendix H). The references are Interpretable Sparse High-Order Boltzmann Machines, Min et al. 2014 and Factorized Sparse Learning Models with Interpretable High Order Feature Interactions, Purushotham et al. 2014. \n\nWe performed experiments comparing our method to the \"Shooter\" and \"FHIM\" baselines from the references. The details and results of our experiments are shown in Appendix H. We found that indeed, the Shooter baseline benefits from relaxing strict hierarchical hereditary constraints ", "Thank you for clarifying your suggestion on group lasso. We have conducted experiments on group lasso as you suggested. After tuning regularization strengths, we found that indeed the average AUC of group lasso with input groups is slightly better than vanilla lasso, but the difference is not statistically significant. The details of additional experiments with group lasso are included in Appendix G.\n\nWe also plan to share our code in the future so that readers can replicate the experiments.", "The neural network weights heavily depends on the l1-regularized neural network training, for which a group lasso penalty makes much more sense. 
\n\nSince this paper focuses on the first hidden layer, the connection weights between input features and the hidden units in the first hidden layer are highly important. \n\nOn one hand, to enforce sparsity, connection weights from each individual input feature to all the hidden units naturally form a group. If there are $n$ features, there will be $n$ groups of weights connecting input features and the first hidden layer, keeping only important features for interaction detection. Other weights in higher layers can be regularized with standard Lasso. At least intuitively, Group Lasso based on this natural grouping should work much better than a standard Lasso for eliminating false positive feature interactions. \n\nOn the other hand, considering that the number of hidden units in the first layer is loosely set based on validations, weights connecting all input features to each first-layer hidden unit also naturally form a group, which renders the neural network only keeping highly important competitive hidden units for interaction identification.\n\nTherefore, experiments on interaction identification with Group Lasso regularization is natural, intuitive, important, and necessary for the proposed method.", "Thank you for your comments and suggestions. \n\n>> “it is unclear to me how this can affect to the facts such as Theorem 4 and Algorithm 1.”\nThe existence of the univariate networks does not affect Algorithm 1 or Theorem 4. The univariate networks are meant to reduce the modeling of spurious interactions in the main fully-connected network to improve interaction detection performance. \n\n>> On the architecture of the univariate and GAM networks\nYour understanding is correct. \n\n>> Regarding the abstraction of mu, \nWe did not define mu until later in the paper because mu was determined by experiments", "Thank you for your comments and suggestions to improve the paper. Below are our responses to the main points of your comments: \n\n1. >> “However, completely ignoring activation functions makes the method quite crude.”\nOur approach to aggregating weight matrices depends on the activation functions in two ways: 1) the use of matrix multiplications is based on Lemma 3, which depends on the activation functions being 1-Lipschitz, and 2) the averaging of weights is empirically determined from neural networks with ReLU activation. \n\n2. >> “Restricting to the first hidden layer in Algorithm 1 inevitably misses some important feature interactions.”\nThis is an interesting point. We did consider it before. However, it is not straightforward how to incorporate the idea of common hidden units at intermediate layers to get better interaction detection performance. Our previous studies show that naively using the intermediate hidden layers to suggest new interactions have resulted in worse performance in interaction detection because the connections between input features and intermediate layers are not direct. \n\n3. >> “a group lasso penalty makes much more sense” \nIn general, group lasso requires specifying groupings a priori. It is unclear how to tailor the group lasso penalty to discover interactions, but group lasso might offer an alternative way of finding a cutoff on interaction rankings. \n\n4. >> “Large-scale experiments are needed.”\nWe have conducted experiments with large scale p (p=1000, 950 pairwise interactions) as you suggested and obtained a pairwise interaction strength AUC of 0.984. 
The full experimental setting can be found in our updated paper in Appendix F, which follows Purushotham et al. 2014 on how to generate large p noisy data.\n\n5 & 6. >> “, RuleFit should be used as a baseline in the experiments.”\nWe have added experiments with RuleFit into Table 2 as you suggested. Our approach outperforms RuleFit. This is consistent with previous work by Lou et al. 2013, “Accurate and Intelligible Models with Pairwise Interactions”, which found that RuleFit did not perform better than Additive Groves, our main baseline.\n", "Thank you for your comments and suggestions. We conducted some experiments based on your suggestions and provide our responses to your major points below. We have also included additional results in Appendices E and F. \n \n1) Our proposed method strongly relies on the assumption that the neural network fit well to data, since we are extracting interactions from the learned network weights. The number of data points available plays a critical role here because a small amount of data can cause the neural network to overfit, causing our method to miss true interactions and find spurious ones instead. To avoid this scenario, we employed modern tricks to help our neural network fit well to the data (e.g. early stopping, regularization). That being said, we advise against using our framework when the number of data samples, n, is too small for normal neural networks, e.g., when n < p, where p is the number of features. Under such scenarios, one might need to impose much stronger assumptions on the data, which goes against our proposal of a general interaction detection algorithm.\n\nFor assurance, we conducted the experiments that you requested and confirmed that our approach does well on the multiplicative synthetic function x1+x2x3x4+x4x6 for datasets of sizes [1e2, 1e3, 1e4, 1e5], obtaining average interaction ranking AUCs of [0.99, 1.0, 1.0, 1.0], respectively. The average AUCs for our 10 synthetic functions with nonlinearities (combined) are [0.57,0.83,0.92,0.94] respectively. The baseline methods are specified to find multiplicative interactions, so their AUC is 1 for the multiplicative synthetic function. Note that we can only obtain interactions accurately when there is enough data to train the model, as seen in the improving scores with more data samples. In our synthetic experiments, we used 10k training samples (and 10k valid/10k test), and this has been updated in our paper. We also updated our paper with a large-p experiment on multiplicative interactions in Appendix F, where we obtained an AUC of 0.98.\n\n2) While the neural network is technically searching interactions during training, the cost of this implicit exponential search is faster than an explicit exponential search of the space of interaction candidates. Our work avoids this explicit search.\n\n3) We originally did not include higher-order detection experiments with ANOVA and Lasso because they are mis-specified to handle detecting the general non-additive form of interactions. For assurance, we ran experiments on ANOVA and Lasso and got average top-rank recall scores of 0.47 and 0.44 respectively, which are much lower than the 0.65 average obtained by our approach.\n\n" ]
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByOfBggRZ", "iclr_2018_ByOfBggRZ", "iclr_2018_ByOfBggRZ", "BkR-FYvzf", "Hy9gtVoxz", "rJZYeXMzG", "Bk7UD8gMf", "Hy8XC6Kxf", "Hy9gtVoxz", "SyQM6W5gz" ]
iclr_2018_r1ZdKJ-0W
Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking
Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected. By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. Additionally, we demonstrate the benefits of modeling uncertainty - by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph.
accepted-poster-papers
The paper proposes a method to embed graph nodes as Gaussian distributions rather than the standard latent vector embeddings. The reviewers concur that the method is interesting and the paper is well-written, especially after the opportunity to update.
train
[ "BkNHltugM", "HJhWAwsgM", "H1j0rCeZz", "BkijGb67z", "HkE-HMm-z", "BkEdFlQZz", "ry0Hue7bG", "BJKpLlX-M", "Syo1sTSgz", "HyzfYL4eG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "This paper is well-written and easy follow. I didn't find serious concern and therefore suggest an acceptance.\n\nPros\nMethodology\n1. inductive ability: can generalize to unseen nodes without any further training\n2. personalized ranking: the model uses natural ranking that embeddings of closer nodes (considers node pairs of any distance) should be closer in the embedding space, which is more general than prevailing first and second order proximity\n3. sampling strategy: the proposed node-anchored sampling method gives unbiased estimates of loss function and successfully reduces the time complexity\n\nExperiment\n1. Evaluation tasks including link prediction and node classification are conducted across multiple datasets with additional parameter sensitivity and missing-link robustness experiments\n2. Compared with various baselines with diverse model designs such as GCN and node2vec as well as compared with naive baseline (using original node attributes as model inputs)\n3. Demonstrated the model captures uncertainties and the learned uncertainties can be used to infer latent dimensions\nRelated Works\nThe survey of related work is sufficiently wide and complete.\n\nCons\nAuthors should include which kind of model is used to do the link prediction task given embedding vectors from different models as inputs.", "The paper proposes to learn Gaussian embeddings for directed attributed graph nodes. Each node is associated to a Gaussian representation (mean and diagonal covariance matrix). The mean and diagonal representations for a node are learned as functions of the node attributes. The algorithm is unsupervised and optimizes a ranking loss: nodes at distance 1 in the graph are closer than nodes at distance 2, etc. Distance between nodes representation is measured via KL divergence. The ranking loss is a square exponential loss proposed in energy based models. In order to limit the complexity, the authors propose the use of a sampling scheme and show the convergence in expectation of this strategy towards the initial loss. Experiments are performed on two tasks: link prediction and node classification. Baselines are unsupervised projection methods and a (supervised) logistic regression. An analysis of the algorithm behavior is then proposed.\nThe paper reads well. Using a ranking loss based on the node distance together with Gaussian embeddings is probably new, even if the novelty is not that big. The comparisons with unsupervised methods shows that the algorithm learns relevant representations.\nDo you have a motivation for using this specific loss Eq. (1), or is it a simple heuristic choice? Did you try other ranking losses?\nFor the link prediction experiments, it is not indicated how you rank candidate links for the different methods and how you proceed with the logistic. Did you compare with a more complex supervised model than the logistic? Fort the classification tasks, it would be interesting to compare to supervised/ semi-supervised embedding methods. The performance of unsupervised embeddings for graph node classification is usually much lower than supervised/ semi-supervised methods. Having a measure of the performance gap on the different tasks would be informative. Concerning the analysis of uncertainity, discovering that uncertainty is higher for nodes with neighbors of distinct classes is interesting. In your setting this might simply be caused by the difference in the node attributes. 
I was not so convinced by the conclusions on the dimensionality of the hidden representation space. An immediate conclusion of this experiment would be that only a low-dimensional latent space is needed. Did you experiment with this?\nDetailed comments:\nThe title of the paper is “Deep …”. There is nothing Deep in the proposed model since the NNs are simple one-layer MLPs. This is not a criticism, but the title should be changed.\nThere is a typo in the KL definition (d should be replaced by the dimension of the embeddings). Probably another typo: the energy should be + D_KL and not –D_KL. The paragraph below eq (1) should be modified accordingly.\nAll the figures are too small to see anything and should be enlarged.\nOverall the paper brings some new ideas. The experiments are fine, but not so conclusive.\n", "This paper proposes Graph2Gauss (G2G), a node embedding method that embeds nodes in attributed graphs (it can work w/o attributes as well) into Gaussian distributions rather than conventional latent vectors. By doing so, G2G can reflect the uncertainty of a node's embedding. The authors then use these Gaussian distributions and neighborhood ranking constraints to obtain the final node embeddings. Experiments on link prediction and node classification showed improved performance over several strong embedding methods. Overall, the paper is well-written and the contributions are remarkable. The reason I am giving a less positive rating is that some statements are questionable and can severely affect the conclusions claimed in this paper, which therefore requires the authors' detailed response. I am certainly willing to change my rating if the authors clarify my questions.\n\nMajor concern 1: Is the latent vector dimension L really the same for G2G and other compared methods? \nIn the first paragraph of Section 4, it is stated that \"in all experiments if the competing techniques use an embedding of\ndimensionality L, G2G’s embedding is actually only half of this dimensionality so that the overall number of ’parameters’ per node (mean vector + variance terms) matches L.\" This setting can be wrong since the degrees of freedom of an L-dim Gaussian distribution should be L+L(L-1)/2, where the first term corresponds to the mean and the second term corresponds to the covariance. If I understand it correctly, when any compared embedding method used an L-dim vector, the authors used the dimension of L/2. But this setting is wrong if one wants the overall number of ’parameters’ per node (mean vector + variance terms) to match L, as stated by the authors. Fixing L, the equivalent dimension L_G2G for G2G should be set such that L_G2G + L_G2G(L_G2G - 1)/2 = L, not 2*L_G2G = L. Since this setting is universal to the follow-up analysis and may severely degrade the performance of G2G due to fewer embedding dimensions, I hope the authors can clarify this point.\n\nMajor concern 2: The claim on inductive learning\nInductive learning is one of the major contributions claimed in this paper. The authors claim G2G can learn an embedding of an unseen node solely based on its attributes. However, it is not clear why this can be done. In the learning stage of Sec. 3.3, the attributes do not seem to play a role in the energy function. Also, since no algorithm descriptions are available, it's not clear how using only an unseen node's attributes can yield a good embedding under the G2G framework (the same applies to Sec. 4.5). 
\nMoreover, how does it compare to directly using raw user attributes for these tasks?\n\nMinor concern/suggestions: The \"similarity\" measure in section 3.1 using KL divergence should be better rephased by \"dissimilarity\" measure. Otherwise, one has a similarity measure $Delta$ and wants it to increase as the hop distance k decreases (closer nodes are more similar). But the ranking constraints are somewhat counter-intuitive because you want $Delta$ to be small if nodes are closer. There is nothing wrong with the ranking condition, but rather an inconsistency between the use of \"similarity\" measure for KL divergence. \n", "Based on the reviewers' comments we have made the following improvements to our paper:\n* Clarified the use of KL divergence as a dissimilarity measure and negative energy for ranking candidate links\n* Fixed several typos and improved wording in a few places", "The authors have clarified my questions, which are summarized as follows.\n1. The covariance matrices are actually assumed to be diagonal so the embedding vector length comparison is fair.\n2. How the raw attributes interact with the proposed network model are highlighted and explained. \n3. The Similarity/Dissimilarity issue is addressed.\n\nTherefore, I changed my rating from 5 to 7 due to the good quality and important impact of this work on node embedding.\n", "Thank you for your review and comments.\n\nRegarding the model used to do the link prediction task we adopt the exact same approach as described in the respective original methods of each of the competitors (e.g we use the dot product of the embeddings). For Graph2Gauss the negative energy (-E_ij) is used for ranking candidate links. Note that the two metrics AUC and AP do not need a binary decision (edge/non-edge), but rather a (possibly unnormalized) score indicating how likely is the edge. We now include these details in the uploaded revised version.", "Thank you for your review and comments. We provide answers to all your questions.\n\n1) Eq. (1) motivation:\nThree types of loss functions are typically considered in the ranking literature: pointwise, pairwise and listwise. We employ the pairwise approach since it usually outperforms the pointwise approach and compared to the listwise approach it is more amenable to stochastic training. The listwise approach is also computationally more expensive and early experiments did not show any benefits of using it. Regarding the pairwise loss function we indeed considered several forms typically used in energy-based learning, including the square-exponential, the hinge loss, LVQ2 and others. They performed comparatively. The final choice of the square-exponential is because compared to e.g. the hinge loss and LVQ2 we don't have the need for tuning a hyperparameter such as the margin. \n\n2) Link prediction:\nNote that the two metrics AUC and AP do not need a binary decision (edge/non-edge), but rather a (possibly unnormalized) score indicating how likely is the edge. To rank candidate links (i.e. obtain the score) we adopt the exact same approach as described in the respective original methods of each of the competitors (e.g we use the dot product of the embeddings). For Graph2Gauss the negative energy (-E_ij) is used for ranking candidate links. We now include these details in the uploaded revised version. 
\n\n3) Logistic regression:\nWe used the logistic regression as a supervised model since this is a common choice used in almost all previous node embedding papers.\n\n4) Supervised/semi-supervised method:\nIt is expected that the performance of supervised/semi-supervised method would be stronger, especially on the node classification task. However, as we already state in the related work section the focus of this paper is on unsupervised learning. While additional comparison with different supervised/semi-supervised methods would be beneficial, we feel this would distract the reader from the main goal: \"unsupervised learning of node embeddings\". Furthermore, it would be straightforward to extend Graph2Gauss to the semi-supervised setting by including a supervised component in the loss, and we leave this for future work.\n\n5) Uncertainty/dimensionality:\nThe conclusion that only a small dimensional latent space is needed is correct and we indeed experimented with this: the sensitivity analysis in Figures 1a) and 1b) shows that increasing the latent dimensionality beyond some small (dataset-specific) number doesn't give significant increase in performance and the performance flattens out. The benefit of the uncertainty analysis is that for a new dataset we would not need to do train multiple models with different latent dimensions such as in Figures 1a) and 1b) to determine what is the minimum number of dimensions for good performance. We could instead train the model with a large latent dimension and perform analysis similar to the one in Figure 4c).\n\n6) Deep:\nWe used \"Deep\" in the title, since the general architecture is conceived with multiple layers in mind. However, in our experiments single hidden layers proved to be enough to reach good performance. We could certainly change the the title to reflect this. \n\n7) KL/Energy typo:\nIt is true, the KL definition and energy have a typo. We have already fixed this in the uploaded revised version. This question was asked earlier and we also answer it in more details in the comment below.\n\n8) Readability:\nWe agree that readability is important and we and we will enlarge the figures.", "Thank you for your review and comments. We provide clarification for all your concerns.\n\n1) Number of parameters:\nYes the latent dimension is indeed the same for G2G and the other compared methods. As mentioned in Sections 3.2 and 4.4 we always use **diagonal** covariance matrices which only have free parameters on the diagonal (and zeros everywhere else). Thus, an L-dimensional Gaussian with a diagonal covariance has L + L free parameters (mean + variance terms) and using only half of the competitors' dimensionality is a fair comparison.\n\nYou are correct - in general, an L-dimensional Gaussian distribution has L+L(L-1)/2 free parameters, but only if we use a **full** covariance matrix. Our choice of diagonal covariances leads not only to fewer parameters but it also has computational advantages (e.g. it's easy to invert a diagonal matrix). We now highlight this choice one more time in the evaluation section for increased clarity.\n\n2.1) Inductive learning:\nTo see why the attributes play a role in the energy function notice that \\mu_i and \\sigma_i are not free parameters (i.e. to be updated by gradient descent), but they are rather the output of a parametric function that takes the node's attributes x_i as input. 
More specifically, as mentioned in Section 3.2 (and also in the appendix) \\mu_i and \\sigma_i are the outputs of a feed-forward neural network that takes the attributes of the node as input. During learning we do not directly update \\mu_i and \\sigma_i, but rather we update the weights of the neural network that produces \\mu_i and \\sigma_i as output.\n\nAs mention in the discussion (Section 3.4) during learning we need both the attributes and the network structure (since the loss depends on the network structure). However, once the learning concludes, we essentially have a learned function f_theta(x_i) that only needs the attributes (x_i) of the node as input to produce the embedding (\\mu_i and \\sigma_i) as output. This is precisely what enables G2G to be inductive.\n\n2.2) Raw attributes:\nWe do indeed already compare the performance when using the raw attributes. The \"Logistic Regression\" method shown in Table 1 and 2, as well as Figures 1 and 2, is trained using only the raw attributes and it actually shows strong performance as a baseline. However, on the inductive learning task specifically, as we see in Table 2, G2G has significantly better performance compared to the logistic regression method that uses only the raw attributes.\n\n3) Similarity/Dissimilarity:\nWe agree with your comment w.r.t. similarity/dissimilarity and the KL divergence, this is essentially a typo and is already fixed in the uploaded revised version. This question was also asked earlier and we answer it in more details in the comment below.", "Thank you very much for your interest in our paper and your comment. \n\n1. You are right, this is a typo, which is an artifact of an earlier version of the paragraph where we used to talk about the negative energy instead. It should be E_{ij} = D_{KL}(N_j || N_i). This is indeed what we have implemented in our code. As you also can see in Section \"3.3 Sampling strategy\", we require E_{i1} < E_{i2}, ..., E_{iK-1} < E_{iK}, meaning nodes at a shorter distance should have lower energy/KL divergence.\n\nNotice that for calculating the performance in the link prediction task (e.g. area under the ROC curve) we indeed want to use -E_{ij} (the negative energy) as the score since two nodes should have a *higher* score if they are more likely to form an edge.\n\nRegarding the reproducibility comment, we are planning on releasing the code soon. We are also confident that by including the above correction (i.e. flipping the sign and using -E_ij for link prediction) you will be able to reproduce the results.\n\n2. As demonstrated by the experiments, the assumption that the dimensions are independent/uncorrelated (to be precise, this assumption only applies for each single Gaussian, and not for all nodes jointly) performs well in practice. While it is straightforward to extend the model to full covariance matrices (e.g. using the Cholesky decomposition), this will lead to a significant computational overhead. Furthermore, previous approaches that learn Gaussian embeddings for other tasks (see our related work section) also have the same assumption and also show good performance in their experiments.\n\n3. Since we are using a standard feed-forward architecture, the complexity of each iteration should be clear, thus we omitted it. We can add it in a revised version for completeness.\n\nSimilarly, the complexity for computing truncated shortest path is well know, but we can add it to the appendix for completeness. 
We can efficiently calculate the truncated shortest path with sparse matrix operations. Thus, this complexity is O(K*E) where K is the maximum shortest path we are willing to consider and E is the number of edges. Since we used for all our experiments K<=3, this one time computation is essentially linear in the number of edges.\n\n4. Thank you for pointing out the typo. We will fix in the next revision.", "I have some problems about this paper:\n\n1. In loss function, the authors employed the square-exponential loss. \n As shown in \"A Tutorial on Energy-Based Learning\", optimizing this loss function will make E_ij_k lower than E_ij_l.\n E_ij represents the opposite of the KL divergence.\n The smaller the KL divergence, the larger the similarity between the two distributions.\n This would make the similarity of i and j_k smaller than that of i and j_l, which is contrary to the previous assumption.\n Meanwhile, I tried to reproduce the experiment on the cora dataset and found that the loss fails to converge.\n\n2. In problem definition, Sigma_i is a L*L matrix. \n But in experiment, it becomes a L-dimensional vector.\n Is the assumption of independence in each dimension reasonable?\n\n3. In time complexity analysis, the authors ignore the complexity of a single iteration, which should be related to dimension L. Meanwhile, the authors do not explain the complexity of calculating the shortest path between nodes.\n\n4. There is a notation error in the definition of KL divergence, that d should be changed to L." ]
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1ZdKJ-0W", "iclr_2018_r1ZdKJ-0W", "iclr_2018_r1ZdKJ-0W", "iclr_2018_r1ZdKJ-0W", "BJKpLlX-M", "BkNHltugM", "HJhWAwsgM", "H1j0rCeZz", "HyzfYL4eG", "iclr_2018_r1ZdKJ-0W" ]
iclr_2018_H1BLjgZCb
Generating Natural Adversarial Examples
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.
accepted-poster-papers
The paper proposes a method to generate adversaries close to the (training) data manifold using GANs rather than arbitrary adversaries. They show the effectiveness of their method in terms of human evaluation and success in fooling a deep network. The reviewers feel that this paper is for the most part well-written and the contribution just about makes the mark.
train
[ "S1c2UjL4M", "By2zFR_gz", "HJLfGN_xM", "rkzZoW5xf", "ByEBICLQG", "HJwr1CWfM", "SyTyJCbzM", "H1RqCpWfz", "HJR1wYX1z", "Bk8fG911z", "HyF4SB6A-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "Most of my concerns have been properly addressed. I agree with the author that use of GAN to generate adversarial examples in text analysis is indeed novel. The importance and the application of the proposed methodology has now been depicted clearly. \n\nHowever I still have two small issues- (1) The application of the search algorithm for imbalanced classes and (2) computational complexity of the search algorithm (The authors also mention this in the paper- \"Our iterative stochastic search algorithm for identifying adversaries is computationally expensive since it is based on naive sampling and local-search\" -how to improve it?)\n\nHence, although I have raised the score from my previous review, I feel it is only marginally above acceptance threshold.", "Quality: Although the research problem is an interesting direction the quality of the work is not of a high standard. My main conservation is that the idea of perturbation in semantic latent space has not been described in an explicit way. How different it will be compared to a perturbation in an input space? \n\nClarity: The use of the term \"adversarial\" is not quite clear in the context as in many of those example classification problems the perturbation completely changes the class label (e.g. from \"church\" to \"tower\" or vice-versa)\n\nOriginality: The generation of adversarial examples in black-box classifiers has been looked in GAN literature as well and gradient based perturbations are studied too. What is the main benefit of the proposed mechanism compared to the existing ones?\n\nSignificance: The research problem is indeed a significant one as it is very important to understand the robustness of the modern machine learning methods by exposing them to adversarial scenarios where they might fail.\n\npros:\n(a) An interesting problem to evaluate the robustness of black-box classifier systems\n(b) generating adversarial examples for image classification as well as text analysis.\n(c) exploiting the recent developments in GAN literature to build the framework forge generating adversarial examples.\n\ncons:\n(a) The proposed search algorithm in the semantic latent space could be computationally intensive. any remedy for this problem?\n(b) Searching in the latent space z could be strongly dependent on the matching inverter $I_\\gamma(.)$. any comment on this?\n(c) The application of the search algorithm in case of imbalanced classes could be something that require further investigation.", "\nSummary:\n A method for creation of semantical adversary examples in suggested. The ‘semantic’ property is measured by building a latent space with mapping from this space to the observable (generator) and back (inverter). The generator is trained with a WGAN optimization. Semantic adversarials examples are them searched for by inverting an example to its sematic encoding and running local search around it in that space. The method is tested for generation of images on MNist and part of LSUM data and for creation of text examples which are adversarial in some sense to inference and translation sentences. It is shown that the distance between adversarial example and the original example in the latent space is proportional to the accuracy of the classifier inspected.\nPage 3: It seems that the search algorithm has a additional parameter: r_0, the size of the area in which search is initiated. 
This should be explicitly said and the parameter value should be stated.\nPage 4: \n-\tthe implementation details of the generator, critic and inverter networks are not given in enough detail, and instead the reader is referred to other papers. This makes the paper unclear as a stand-alone document, and is a problem for a paper which is mostly based on experiments and their results: the main networks used are not described.\n-\tthe visual examples are interesting, but it seems that they are able to find good natural adversary examples only for a weak classifier. In the MNIST case, the examples for the random forest are natural and surprising, but those for the LeNet are often not: they often look as if they indeed belong to the other class (the one pointed to by the classifier). In the church-vs.-tower case, a relatively weak MLP classifier was used. It would be more instructive to see the results for a better, convolutional classifier.\nPage 5:\n-\tthe description of the various networks used for text generation is insufficient for understanding:\no\tThe AREA is described in two sentences. It is not clear how this module is built, what loss it was used to optimize in the first place, and what elements of it are re-used for the current task\no\t ‘inverter’ here is used in a sense which is different than in previous sections of the paper: earlier it denoted the mapping from output (images) to the underlying latent space. Here it denotes a mapping between two latent spaces.\no\t It is not clear what the ‘four-layers strided CNN’ is: its structure, its role in the system. How is it optimized?\no\tIn general: a block diagram showing the relation between all the system’s components may be useful, plus the details about the structure and optimization of the various modules. It seems that the system here contains 5 modules instead of the three used before (critic, generator and inverter), but this is not clear enough. Also, which modules are pre-trained, which are optimized together, and which are optimized separately is not clear.\no\tSNLI data should be described: content, size, the task it is used for\n\n\nPro:\n-\tA novel idea of producing natural adversary examples with a GAN\n-\tThe generated examples are in some cases useful for interpretation and network understanding \n-\tThe method enables creation of adversarial examples for black-box classifiers\nCons\n-\tThe implementation of the idea is basic. Specifically, the search algorithm presented is quite simplistic, and no variations other than plain local search were developed and tested\n-\tThe generated adversarial examples created for successful complex classifiers are often not impressive or useful (they are either not semantic, or semantic but correctly classified by the classifier). Hence it is not clear if the latent space used by the method enables finding interesting adversarial examples for accurate classifiers. \n\n", "The authors of the paper propose a framework to generate natural adversarial examples by searching for adversaries in a latent space of dense and continuous data representation (instead of in the original input data space). The details of their proposed method are covered in Algorithm 1 on Page 12, where an additional GAN (generative adversarial network) I_{\\gamma}, which can be regarded as the inverse function of the original GAN G_{\\theta}, is trained to learn a map from the original input data space to the latent z-space. 
The authors empirically evaluate their method in both image and text domains and claim that the corresponding generated adversaries are natural (legible, grammatical, and semantically similar to the input).\n\nGenerally, I think that the paper is written well (except some issues listed at the end). The intuition of the proposed approach is clearly explained and it seems very reasonable to me. \nMy main concern, however, is in the current sampling-based search algorithm in the latent z-space, which the authors have already admitted in the paper. The efficiency of such a search method decreases very fast when the dimensions of the z-space increases. Furthermore, such an approximation solution based on the sampling may be not close to the original optimal solution z* in Equation (3). This makes me feel that there is large room to further advance the paper. Another concern is that the authors have not provided sufficient number of examples to show the advantages of their proposed method over the other method (such as FGSM) in generating the adversaries. The example in Table 1 is very good; but more examples (especially involving the quantitative comparison) are needed to demonstrate the claimed advantages. For example, could the authors add such a comparison in Human Evaluation in Section 4 to support the claim that the adversaries generated by their method are more natural? \n\nOther issues are listed as follows:\n(1). Could you explicitly specify the dimension of the latent z-space in each example in image and text domain in Section 3?\n(2). In Tables 7 and 8, the human beings agree with the LeNet in >= 58% of cases. Could you still say that your generated “adversaries” leading to the wrong decision from LeNet? Are these really “adversaries”?\n(3). How do you choose the parameter \\lambda in Equation (2)?\n", "Thanks for all the reviews. We have submitted a revision with changes listed below:\n\n- Page 3: added more efficient Algorithm 2 of hybrid shrinking search (with pseudocode in the appendix on Page 15); clarified how we choose hyper-parameter \\lambda.\n- Page 4 & 5: included dimensions of latent z used, and more details about SNLI dataset. \n- Page 7: updated results (Table 6 and Figure 3) based on the new algorithm.\n- Page 8: added human evaluation results supporting that our adversaries are more natural than those generated by FGSM.\n- Page 9: clarified the common assumption that adversaries are within the same class if the added perturbations are small enough, and how we utilize it to evaluate black-box classifiers.\n- Minor: corrected a few typos and wording issues. \n- Appendix: included architecture diagrams and implementation details.", "Thanks for the review.\n\nDetails: We held out a lot of implementation details due to the space constraints, but will gladly incorporate them in subsequent versions. We will include some of the more important ones you mentioned in the next revision, with the rest in the appendix. In the first step of the search algorithm, it samples from the range of (0, \\Delta r] with r_0 = 0. The Stanford Natural Language Inference (SNLI) corpus is a collection of 570k human-written English sentence pairs manually labeled with whether each hypothesis is entailed by, contradicts, or is neutral to the premise, supporting the task of recognizing textual entailment. 
We will also include diagrams showing the relations between the components in our text generation framework and provide their implementation details in the appendix.\n\nQuality of adversaries: Yes, generating impressive natural adversaries against more accurate classifiers is difficult, since they require much more substantial changes to the original inputs and a more accurate representation of the data manifold than the current GANs are able to encode. But in essence, we utilize this exact phenomena to evaluate the accuracy and robustness of black-box classifiers qualitatively and quantitatively as shown in experiments, and hope to continue improving our approach to generate even better examples for such classifiers.\n\nSearch algorithm: We have an improved search algorithm based on a coarse-to-fine idea that iteratively shrinks the upper bound of \\Delta z. We will include this modification that results in much more efficient generation of samples in the revision (more details in the response to Reviewer 1).", "Thanks for the comments.\n\nInput perturbations vs. latent perturbations: We demonstrate an illustrative example in Figure 1 (d, e) showing the differences compared to perturbations in input space in Figure 1 (b, c). There are more FGSM examples provided in Table 1 showing the advantages of our approach. Moreover, approaches that add noise directly to the input are not applicable to complex data such as text because of the discrete nature of the domain. Adding imperceivable changes to the sentences is impossible, and perturbations often result in sentences that are not grammatical. Our framework can generate grammatical sentences that are meaningfully similar to the input by searching in the latent semantic space. There are also examples in Section 3.2 and appendix showing this advantage of our approach.\n\nRelated work: To the best of our knowledge, there is no existing work on generating natural adversaries against black-box classifiers utilizing GANs. Other attack methods, none of which utilize GANs, either have access to the gradients of white-box classifiers, or train substitution models mimicking the target classifiers to attack. Further, these methods still add perturbations in input space, while our approach attacks target black-box classifiers directly and searches in the latent semantic space, generating natural adversaries that are legible/grammatical, meaningfully similar to the input, and helpful to interpret and evaluate the black-box classifiers, as demonstrated in our results. Please point us to the GAN literature that generates adversaries against black-box classifiers as mentioned in the review, and we will be happy to compare against them.\n\nUsing the term \"adversarial\": Yes, there is an implicit assumption that the generated samples are within the same class if the added perturbations are small enough, and the generated samples look as if they belong to different classes when the perturbations are large. However, note that it is also the case for FGSM and other such approaches: when their \\epsilon is small, the noise is imperceivable; but with a large \\epsilon, one often finds noisy instances that might be in a different class (see Table 1, digit 8 for an example). While we do observe this behavior in some cases, the corresponding classifiers require much more substantial changes to the input and that is why we utilize our approach to evaluate black-box classifier. 
We will clarify this in the revision of the paper.\n\nMatching inverter: The generator/inverter in our approach work in similar way as the decoder/encoder in autoencoders. It is true that the quality of generated samples depends on these two components together. In Section 6, we mention that the fine-tuning of the latent vector produced by the inverter can further refine the generated adversarial examples, indicating that more powerful inverters are promising future directions of current work.\n\nSearch algorithm: Gradient-based search methods such as FGSM are not applicable to our setup because of black-box classifiers and discrete domain application. We have an improved search algorithm by using a coarse-to-fine strategy that we will include in the revision (see our reply to Reviewer 1 for more details).", "Thank you for the comments.\n\nSearch algorithm: Gradient-based search methods such as FGSM are not applicable to our setup because of black-box classifiers and applications with discrete domains. We have an improved version of the search algorithm that uses a coarse-to-fine strategy to iteratively minimize the upper-bound of \\Delta z based on fewer samples, and then performs finer search in the restricted range recursively. We observe around 4 times speedup in practice and will include more details in the revision.\n\nComparison: It is difficult to compare against FGSM quantitatively regarding how \"natural\" the adversaries are, but we will include more examples in the revision. On one hand, FGSM can add such a small magnitude noise that our eyes do not perceive. On the other hand, the noise added by FGSM, when amplified, looks random without any interpretable meaning to us. It is also worth mentioning that users found ~80% of our generated sentences natural (legible/grammatical), a domain for which FGSM cannot be applied at all. \n\nDetails: The dimension of latent z vector for MNIST, LSUN, and SNLI are 64, 128, and 300 correspondingly. And we choose \\lambda = 10 to emphasize the reconstruction error in latent space, after trying out different values and inspecting generated samples. We will include these details in the revision.", "I asked the question because you have used the term \"adversarial\" and my point was, if your method changes both the image and the label, it may not be suitable to call the modified image as adversarial. \n\nMoreover, I think the concept of slowly transiting from images of one class to another has been widely studied and examined in GAN literature.", "We are glad that the commenter finds the idea interesting. Natural adversarial examples are defined differently here from the conventional adversaries, where one is searching for minimal adversarial change to the input directly. Our objective is to find the minimal amount of semantic change to the input that results in different prediction in order to interpret the decision behavior of the classifier. Indeed, while the change in semantic space may sometimes be sufficiently substantial to make the generated sample actually end up in a different class, the sample is still generated from the minimal semantic change (not just some random sample of a different class), and the way in which it differs from the original input can provide useful insights into the classifier.", "Interesting research direction. However, considering Table 2, I was wondering if we take an image of \"tower\" and semantically change it to an image of \"church,\" then how do we expect the classifier to classify it as \"tower\"? 
In essence, the adversarial example must belong to the same class as the original image, otherwise one can completely replace the original image with a new one." ]
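Editor's note on the search procedure debated in the reviews and responses above: the sampling-based search in the latent z-space can be summarized with a small sketch. This is only an illustration of the general idea (sample perturbations around the inverted input in progressively wider shells and keep the closest candidate that flips the classifier), not the authors' implementation; the callables `generator`, `inverter`, and `classifier`, the radius schedule, and the sample count are assumptions made for the example.

```python
import numpy as np

def search_natural_adversary(x, classifier, generator, inverter,
                             n_samples=100, delta_r=0.01, r_max=1.0, seed=0):
    """Illustrative sampling search in latent space.

    classifier(x) -> predicted label for a data-space input
    generator(z)  -> data-space sample decoded from latent code z
    inverter(x)   -> latent code z for a data-space input x
    """
    rng = np.random.default_rng(seed)
    y = classifier(x)
    z0 = inverter(x)
    r_lo = 0.0
    while r_lo < r_max:
        r_hi = r_lo + delta_r
        # Sample perturbations whose norm lies in the current shell (r_lo, r_hi].
        directions = rng.normal(size=(n_samples, z0.shape[-1]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(r_lo, r_hi, size=(n_samples, 1))
        candidates = z0 + directions * radii
        # Keep the closest candidate whose decoded sample changes the prediction.
        hits = [z for z in candidates if classifier(generator(z)) != y]
        if hits:
            best = min(hits, key=lambda z: np.linalg.norm(z - z0))
            return generator(best), best
        r_lo = r_hi  # widen the search region and try again
    return None, None
```

The coarse-to-fine ("hybrid shrinking") variant mentioned in the responses would additionally tighten the upper bound around the first hit and recurse within that narrower range.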
[ -1, 6, 7, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "SyTyJCbzM", "iclr_2018_H1BLjgZCb", "iclr_2018_H1BLjgZCb", "iclr_2018_H1BLjgZCb", "iclr_2018_H1BLjgZCb", "HJLfGN_xM", "By2zFR_gz", "rkzZoW5xf", "Bk8fG911z", "HyF4SB6A-", "iclr_2018_H1BLjgZCb" ]
iclr_2018_HyydRMZC-
Spatially Transformed Adversarial Examples
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations. Different defense methods have also been explored to defend against such adversarial attacks. While the effectiveness of the L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial-transformation-based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.
accepted-poster-papers
All reviewers gave "accept" ratings. it seems that everyone thinks this is interesting work. The paper generated a large number of anonymous comments and these were addressed by the authors.
train
[ "SynTtWlBG", "rJ5UfkeSM", "ry_xOQ5ef", "ryO_-j5NM", "S1uJLjCxz", "SJCbAbugf", "SyUPzf7Vz", "HyRHVgfVz", "rkPfBgGNG", "HyRMrDbNz", "HJRHE_aXz", "Hkh-EdTmf", "rJedEdamf", "S13te9vzf", "HJTI0MJMG", "ryqaGWfbf", "H1S0R27lf", "HJUFg7L1f", "ryZVaVr1M", "SyFCytM1z", "SyqTo1-JG", "Sk3Q5d9AZ", "S1E7VLNCW" ]
[ "public", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "author", "public", "public", "author", "public", "author", "public" ]
[ "Thanks for the reply!\n\nI see. I think it is a valid argument to say that it lies on different data manifold compared to FGSM and/or CW. Perhaps it can be considered to mention that in the future because I initially misunderstood that you want to show how robust the attack is, and I thought it was not a fair test to compare the adversarial training and ensemble adversarial training if it was tested only against FGSM & CW (especially FGSM, perhaps RAND+FGSM from https://arxiv.org/abs/1705.07204 is more appropriate for comparison, since it has been shown that FGSM is weak against adversarially trained model due to gradient masking).\n\nBut overall I think the method is really interesting and refreshing!\n\nGood luck!\n", "Thanks for the question! In our adversarial training experiment, we only use adversarial examples generated by FGSM to adversarially retrain. The goal of our experiments here is to see if the proposed spatially transformed adv lies on different data manifold with the existing adversarial examples, instead of showing how robust such attack is. \n", "This paper creates adversarial images by imposing a flow field on an image such that the new spatially transformed image fools the classifier. They minimize a total variation loss in addition to the adversarial loss to create perceptually plausible adversarial images, this is claimed to be better than the normal L2 loss functions.\n\nExperiments were done on MNIST, CIFAR-10, and ImageNet, which is very useful to see that the attack works with high dimensional images. However, some numbers on ImageNet would be helpful as the high resolution of it make it potentially different than the low-resolution MNIST and CIFAR.\n\nIt is a bit concerning to see some parts of Fig. 2. Some of Fig. 2 (especially (b)) became so dotted that it no longer seems an adversarial that a human eye cannot detect. And model B in the appendix looks pretty much like a normal model. It might needs some experiments, either human studies, or to test it against an adversarial detector, to ensure that the resulting adversarials are still indeed adversarials to the human eye. Another good thing to run would be to try the 3x3 average pooling restoration mechanism in the following paper:\n\nXin Li, Fuxin Li. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics . ICCV 2017.\n\nto see whether this new type of adversarial example can still be restored by a 3x3 average pooling the image (I suspect that this is harder to restore by such a simple method than the previous FGSM or OPT-type, but we need some numbers).\n\nI also don't think FGSM and OPT are this bad in Fig. 4. Are the authors sure that if more regularization are used these 2 methods no longer fool the corresponding classifiers?\n\nI like the experiment showing the attention heat maps for different attacks. This experiment shows that the spatial transforming attack (stAdv) changes the attention of the classifier for each target class, and is robust to adversarially trained Inception v3 unlike other attacks like FGSM and CW. 
\n\nI would likely upgrade to a 7 if those concerns are addressed.\n\nAfter rebuttal: I am happy with the additional experiments and would like to upgrade to an accept.", "Hi,\n\nThanks for the paper, I really enjoyed reading it, and I think the idea is novel and a really interesting approach to creating adversarial examples that are perceptually indistinguishable!\n\nI have a question: did the adversarial training experiment conducted in Table 3 also include adversarial images generated by your method as part of the training?\n\nThanks!", "This paper proposes a new way to create adversarial examples. Instead of changing pixel values, they perform spatial transformations. \n\nThe authors obtain a flow field that is optimized to fool a target classifier. A regularization term controlled by a parameter tau ensures a very small visual difference between the adversarial and the original image. \n\nSince the spatial transformations used are differentiable with respect to the flow field (as was already known from previous work on spatial transformations), it is easy to perform gradient descent to optimize the flow that fools classifiers for targeted and untargeted attacks. \n\nThe obtained adversarial examples seem almost imperceptible (at least for ImageNet). \nThis is a new direction of attacks that opens a whole new dimension of things to consider. \n\nIt is hard to evaluate this paper since it opens a new direction, but the authors do a good job using numerous datasets, CAM attention visualization, and also additional materials with high-res attacks. \n\nThis is a very creative, new, and important idea in the space of adversarial attacks. \n\nEdit: After reading the other reviews, the replies to the reviews, and the revision of the paper with the human study on perception, I increase my score to 9. This is definitely in the top 15% of ICLR accepted papers, in my opinion. \n\nAlso a remark: As far as I understand, a lot of people writing comments here have a misconception about what this paper is trying to do: This is not about improving attack rates, or comparing with other attacks for different epsilons, etc. \nThis is a new *dimension* of attacks. It shows that limiting l_inf or l_2 is not sufficient and we have to think of human perception to get the right attack model. Therefore, it is opening a new direction of research and hence it is important scholarship. It is asking a new question, which is frequently more important than improving performance on previous benchmarks. \n\n", "This paper explores a new way of generating adversarial examples by slightly morphing the image so that it gets misclassified by the model. Most other adversarial example generation methods tend to rely on generating high-frequency noise patterns by optimizing the perturbation at an individual pixel level. The new approach relies on gently changing the overall image by computing a flow and spatially transforming the image according to that flow. An important advantage of that approach is that the new attack is harder to protect against than the previous pixel-based optimization attacks.\n\nThe paper describes a novel method that might become a new important line of attack. 
And the paper clearly demonstrates the advantages of this attack on three different data sets.\n\nA minor nitpick: the \"optimization based attack (Opt)\" was first employed in the original \"Intriguing Properties...\" 2013 paper using box-LBFGS as the method of choice predating FGSM.", "We thank the commenter for the question.\nFirst, in Table 3, the setting is: we generate adversarial examples against the undefended model based on different methods, and then we test them against adversarially trained models to see how well the examples transfer to defended models. However, the goal of this experiment is not to compare which attack is more robust; it was to test the hypothesis that retraining with additive adversarial examples would have a different effect on stAdv examples, compared to other additive adversarial examples. As for our results on the additive adversarial examples, they are consistent with previous results showing that, in similar experiments, C&W examples [Fig 10 in Carlini & Wagner] are less transferable than FGSM examples [Fig 3(b) in Papernot & McDaniel].\nSecond, we didn’t tune parameters for C&W. Instead we use the default setting for C&W [1] and also use the same setting for stAdv. So we think this is fair comparison. \n\n[Carlini, Nicholas, and David Wagner]. \"Towards evaluating the robustness of neural networks.\" Security and Privacy (SP), 2017 IEEE Symposium on. IEEE, 2017.\n[Papernot, Nicolas, and Patrick McDaniel]. \"Extending Defensive Distillation.\" arXiv preprint arXiv:1705.05264(2017).\n[1] https://github.com/carlini/nn_robust_attacks/blob/master/li_attack.py\n", "Thanks for pointing out these two nice works. We were not aware of BMVC 15’ and the master thesis (published on 2017-08-20, updated on 2017-12-13) during our submission. But we have cited them in our revision and we are happy to discuss the differences based on our understanding. \n\nHere are our comments regarding the differences between ours and these works: \n\nBMVC 15’ “Manitest: Are classifiers really invariant?”: This work studies “whether classifiers are invariant to basic transformations such as rotations and translations”. The paper proposes a novel algorithm to compute invariance score for any classifier. Our paper differs from this paper in two ways: First, our goal is to produce adversarial examples that can fool the classifier, visually realistic, and with minimal changes while the global transformation introduces big changes, and sometimes produce unrealistic images. Second, we study the local deformation rather than global transformation as global transformation will often change the image dramatically. \n\nThe master thesis “Measuring Robustness of Classifiers to Geometric Transformations” (created on 2017-08-20, modified on 2017-12-13). The latest version (Dec 13, 2017) was updated after our submission. Similar to BMVC 15’, this *concurrent* work studies the transformation invariance for CNNs with a focus on high dimensional transformation. The thesis proposes new methods for measuring the invariance score and studies the invariance score regarding varying depths of the network. Again, we target for a different goal as we aim to produce adversarial examples with locally smooth spatial transformation rather than computing invariance scores. In addition to the difference in the goal, the loss function, the parameterization of the transformation, and the optimization method are all different. 
\n", "We thank the commenter for the questions.\nRunning time:\nTo compare stAdv and C&W’ speed, we conduct the following three experiments on CIFAR-10 dataset. The target model is ResNet32.\n (1). stAdv(LBFGS): \\tau 0.05, max_step 200, learning rate 5e-3, solver: lbfgs with line_search. \n (2) stAdv(ADAM): \\tau 0.05, max_step 200, learning rate 5e-3, solver: ADAM. \n (3) C&W: bound linf 8, initial cost 10e-5, max_step 1000 largest_const 2e+1, confidence 0, const_factor 2.0, solver: ADAM (We use the default settings in C&W’s official GitHub repo.) \nHere we report the average running time over 50 random images for each method:\nStAdv (LBFGS): time: 5.435s, attack success rate : 100%\nStAdv (Adam): time: 0.092s, attack success rate : 100%\nC&W (Adam): time: 4.0214s, attack success rate : 100%\nThe results show that stAdv with LBFGS solver is slightly slower than C&W. However, stAdv with Adam solver is much faster than C&W, and both of them achieve the same success rate. We conclude that speed is not a big issue for stAdv. Also, we note that the solver (Adam vs. LBFGS) plays a more critical role in running time compared to what to optimize (e.g. flow vs. pixels.)\nBlackbox attack: \nThanks for your suggestions. It is definitely worth exploring. In this work, we focus on the white-box setting to explore what a powerful adversary can do based on the Kerckhoffs’s principle[Shannon, 1949] to better motivate defense methods. Besides, stAdv can achieve transferability based black-box attack. We will leave other blackbox attack techniques as our future work. \n\nReference:\n[Shannon, 1949] Shannon, Claude E. \"Communication theory of secrecy systems.\" Bell Labs Technical Journal 28.4 (1949): 656-715\n", "Changes made in our revised version are listed as below:\n- Added human perceptual study of our algorithm in section 4.3.\n- Analyzed the efficiency of mean blur defense strategy against our algorithm in section 4.4.\n- Added a detailed description of experiment setting for C&W and FGSM method in section 4.4. \n- Added more adversarial examples for ImageNet-Compatible set, MNIST, and CIFAR-10 in Appendix C. \n- Updated figure 4 with a strong adversarial budget on MNIST and CIFAR-10. \n- Fixed some grammatical errors.\n- Changed the name “Opt” to “C&W”.\n\nWe would like to thank the reviewers again for the useful feedbacks and suggestions.", "We thank the reviewer for the thoughtful comments and suggestions.\n\nHuman study: \nWe have added a human study in Section 4.3. In particular, we follow the same perceptual study protocol used in prior image synthesis work [Zhang et al. 2016; Isola et al. 2017]. In our study, the participants are asked to choose the more visually realistic image between (1) an adversarial example generated by stAdv and (2) its original image. The user study shows that the generated adversarial examples can fool human participants 47% of the time (perfectly realistic results would achieve 50%). This experiment shows that our adversarial examples are almost indistinguishable from natural images. Please see section 4.3 for more details.\n\n3x3 mean blur defense\nWe included the suggested related work and added an analysis of the 3x3 average pooling restoration mechanism [Li et al. 16’]. See section 4.4 and Table 5 in Appendix B for the discussion and results. In summary, the restoration is not as effective on stAdv examples. 
The classification accuracy on restored stAdv examples is around 50% (Table 5), compared to restored C&W examples (around 80%) and FGSM examples (around 70%) [Carlini et al. 2017, Li et al. 2016]. In addition, stAdv achieves near 100% attack success rate in a perfect knowledge adaptive attack [Carlini et al. 2017].\n\nComparison with C&W and FGSM (Figure 4)\nIn our revised version, we have updated Figure 4 to show adversarial examples for FGSM and C&W with a strong adversarial budget as: L_infinity perturbation limit of 0.3 on MNIST and 8 on CIFAR-10. We apply the same setting for the later evaluations against defenses \n\nReferences:\n[Carlini et al. 2017] Carlini, Nicholas, and David Wagner. \"Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.\" arXiv preprint arXiv:1705.07263 (2017).\n[Li et al. 2016] Li, Xin, and Fuxin Li. \"Adversarial examples detection in deep networks with convolutional filter statistics.\" arXiv preprint arXiv:1612.07767 (2016).\n[Zhang et al. 2016] Zhang, Richard, Phillip Isola, and Alexei A. Efros. \"Colorful image colorization.\" European Conference on Computer Vision. Springer International Publishing, 2016.\n[Isola et al. 2017]Isola, Phillip, et al. \"Image-to-image translation with conditional adversarial networks.\" arXiv preprint arXiv:1611.07004 (2016).\n", "We thank the reviewer for the helpful comments. To further improve our work, we have added a user study to our updated version to evaluate the perceptual realism for the generated instances. In particular, we follow the same perceptual study protocol used in prior image synthesis work [Zhang et al. 2016; Isola et al. 2017]. In our study, the participants are asked to choose the more visually realistic image between (1) an adversarial example generated by stAdv and (2) its original image. The user study shows that the generated adversarial examples can fool human participants 47% of the time (perfectly realistic results would achieve 50%). This experiment shows that our adversarial examples are almost indistinguishable from natural images. Please see section 4.3 for more details. \n", "We thank the reviewer for the constructive suggestions! We have updated the method Opt to C&W throughout the paper.\n", "This work explores a new direction on generating adversarial examples. However, I would like to share my concerns about the transferability and time efficiency of the spatial transformation method. As you referred in the paper \"we will focus on the white box setting...\", whether this method can be applied into black box attacks. And it seems like too expensive to solve this optimal problem by L-BFGS. Could you reveal further details about the time consumption of generating adversarial examples and compared results with other methods?", "I found this work very interesting and the paper is neatly written.\n\nIt would be good if the authors comment on the differences between their approach and prior work that previously constructed adversarial examples with spatial transformations (a.k.a. geometric transformations). In particular, these types of adversarial examples have first been considered in a 2015 paper (https://arxiv.org/abs/1507.06535) published in BMVC. Also, there is a master thesis which addresses a very similar setting to this paper (i.e., using a flow field): https://infoscience.epfl.ch/record/230235 (e.g. 
pages 32-34).\n\nThe paper would be stronger if such a comparison were provided, IMHO.", "The paper provides a new type of adversarial attack which is different from previous works.\nHowever, the optimization-based attack results (CW attacks) shown in Table 3 are very suspicious considering the fact that the attacks are performed under the white-box attack assumption.\nThere are a lot of papers showing that iterative/optimization-based attacks are much stronger than the FGSM method for networks trained with standard training or adversarial training.\nCW L_infinity attacks are known to be sensitive to the hyper parameters (tau, learning rate, constant c).\nAnd those hyper parameters have to be optimally chosen per \"each\" example.\nThe authors should provide a convincing explanation of why they got such poor results for the CW attack (even lower attack rates compared to FGSM for model C).\nUnless they provide a reasonable explanation for that, the paper remains suspicious and claims a wrong conclusion based on an unfair comparison.", "The main reason that stAdv is needed, or the main goal of the paper, is to provide a new way to think about adversarial examples: where an attacker can move pixels by some amount, instead of adding or subtracting pixel values. Investigating this unexplored side of adversarial examples is worthwhile, even if it would involve attacks that are strictly weaker, although that is not the case here. In this paper, we focused on showing the differences between spatially transformed adversarial examples and additive adversarial examples in terms of attack robustness analysis and attention visualization.\n\nHigh attack success rate and visually realistic adversarial examples are goals of lower priority.\n[1a] We thank the commenter for their enthusiasm about this new attack and for bringing up the desire for additional success rate comparisons with additive adversarial examples and on more defenses. One result we didn’t include in the paper, since it is not the main focus as we mention above, is that we can also achieve 100% attack success rate on white-box attacks on adversarially trained models, like other optimization attacks. We can of course add these results in our updated version.\n\nThe Opt attack is C&W’s attack (last sentence in the first paragraph of Intro) and we will clarify this in our updated version. \n\n[1b] We are definitely interested in further testing our attack method against other state-of-the-art defenses and thank you for the suggestions. One of the two papers mentioned is a concurrent ICLR 2018 submission. The other one is an arXiv paper published in mid-September. It is impossible to directly apply these defenses in our submission. Moreover, the second reference has shown that Cao’s defense can already be attacked with an ensemble optimization method. We are watching for better defense methods, but to the best of our knowledge, currently the most efficient methods are the adversarial training based methods we tested. \n\n[2] As for the visual quality the commenter mentions, yes, we did intend to show that the adversarial examples generated this way are more realistic. We will conduct a user experiment, which the commenter also suggested. In addition, supporting the claim that the examples are realistic will create useful evidence that the L_p norm is not a good measurement. The vision community has long since noticed this weakness of the L_p norm, but no better ones have been provided. 
Here we actually show that with a relatively high L_p norm, we can still generate perceptually realistic adversarial examples, which raises an open research direction to propose better distance measurements between adversarial examples and normal instances.\n", "I cannot see that the paper: https://arxiv.org/abs/1709.05583 is a state-of-the-art defense (or better than adversarial training). Only the CW attack is considered in that paper (e.g., no FGSM); the experiments are not enough to support this argument.\n\nMeanwhile, this method is more like adding random noise around each clean image to obtain multiple images for classification (ensemble classification). I cannot see how this method can be robust to FGSM with a high value of sigma.", "I agree with the previous comments that this paper proposes a new method to generate adversarial examples. However, why this new method is needed is not well supported. \n\n1. Is your goal to have a higher attack success rate? \n \n 1a. The evaluations did not really show that the new method has higher attack success rates than the state-of-the-art attacks. In particular, what exactly is the Opt attack? The attack proposed by Carlini and Wagner has a very high attack success rate (close to 100%) for adversarially trained neural networks. You can refer to these two papers:\n\nhttps://openreview.net/pdf?id=BkpiPMbA-\nhttps://arxiv.org/abs/1709.05583\n\n 1b. The paper did not evaluate their attacks against state-of-the-art defense methods. Adversarial training is not a state-of-the-art defense. State-of-the-art defenses leverage the neighborhood around an instance to predict its label, instead of the single instance alone. It would be interesting to show whether the new attacks are effective against a state-of-the-art defense, e.g., https://arxiv.org/abs/1709.05583\n\n\n 2. Is your goal to generate more visually realistic adversarial examples? If this is the goal of the paper, I think the authors should provide user studies to evaluate whether their adversarial examples are more visually realistic than existing adversarial examples. Showing several examples is not sufficient. \n", "You are right, the \\tau here is used to control the visibility-fooling tradeoff. \nWe apply grid search to tune \\tau, and the corresponding results are shown in the appendix (https://www.dropbox.com/sh/pl7sbecks6ja5g0/AACVdlRg96heBkICOWl1IQm4a?dl=0).", "Is it possible to achieve a visibility-fooling tradeoff by changing τ in Equation (2)? The weight of L_{flow} should control the visibility of the attack.", "We thank the commenter for the suggestions. We are glad that the commenter finds the idea interesting and thinks it opens a potential direction. 
We agree that it is hard to evaluate the perceptual similarity of adversarial examples and that L_p may not be the best and is not suitable for our proposed method.\n\nWe emphasize that the perturbations are almost invisible for CIFAR and ImageNet datasets (we cannot tell the difference in the examples in our paper).We believe these results are more important than the results on MNIST, where the differences are visible because these wiggly/sketchy distortions occur more naturally in handwritten digits.\n\nBased on the suggestion, we have gotten the permission from the Chair to add an anonymized link (https://www.dropbox.com/sh/pl7sbecks6ja5g0/AACVdlRg96heBkICOWl1IQm4a?dl=0) here for ten more image examples on MNIST, CIFAR, and ImageNet in high-resolution figures.\nWe will also add more examples in the appendix of our updated version.", "Thanks for the interesting paper -- it opens a new axis for adversarial attacks.\nWhile the idea is interesting, the tradeoff between visibility and fooling effectiveness is not entirely clear.\nThe potential greatest concern that I have about this paper is that the perturbations are way too visible.\n\nFor claiming an adversarial perturbation to be superior to the others, not only fooling rates (or ``attack success rates'') but also perceptual similarity to the original image should be compared. Since there is no standard measure for the latter, people have resorted to Lp distances for additive perturbations. One cannot do the same in this paper because the perturbations are not additive, as the authors have described: \"for stAdv, we cannot use Lp norm to bound the distance as translating a image by one pixel may introduce large Lp penalty.\"\n\nThe paper still compares the attack success rate comparisons against previous attacks in table 2. The authors have used bare-eye observation to set the operating point: \"We instead constrain the spatial transformation flow and show that our adversarial examples have high perceptual quality in Figures 2, 3, and 4.\" But this is hardly convincing, since the perturbations seem to be _always visible_ if one looks closely (fractal patterns along edges), often more visible than previous additive attacks. The claim \"geometric changes are small and locally smooth\" is hard to buy.\n\nAlso the number of visual examples is too stingy for ImageNet (only 2). Given their importance, there should be at least 10 examples per dataset. Also, visualisations in this paper are too low resolution in general and so many image artifacts may be overlooked. \n\nI would suggest the following for making a more convincing case for the paper: \nVisually compare the _minimal_ perturbation needed for a successful attack, on MNIST, CIFAR, and ImageNet, with at least 10 examples in _high resolution_ figures. " ]
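Editor's note to make the mechanism discussed throughout this thread concrete: the spatial-transformation attack the reviews describe amounts to optimizing a per-pixel flow field under an adversarial loss plus a smoothness penalty weighted by tau. The sketch below shows one plausible form of that optimization loop; it is illustrative only, and the exact loss form, optimizer settings, and targeted-attack convention are assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def st_adv_attack(model, x, target, tau=0.05, steps=200, lr=5e-3):
    """Sketch of a spatially transformed attack on a batch of images x of shape (N, C, H, W)."""
    n, _, h, w = x.shape
    # Identity sampling grid in [-1, 1] x [-1, 1], the coordinate convention used by grid_sample.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).repeat(n, 1, 1, 1)
    flow = torch.zeros_like(base_grid, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        # Differentiable warp of the input by the current flow field.
        x_adv = F.grid_sample(x, base_grid + flow, align_corners=True)
        logits = model(x_adv)
        target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
        other_max = logits.masked_fill(F.one_hot(target, logits.shape[1]).bool(), float("-inf")).max(dim=1).values
        # Margin-style adversarial loss: push the target logit above all other logits.
        adv_loss = torch.clamp(other_max - target_logit, min=0).sum()
        # Flow loss: total-variation-style penalty that keeps the deformation locally smooth.
        flow_loss = (flow[:, 1:, :, :] - flow[:, :-1, :, :]).abs().sum() + \
                    (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().sum()
        loss = adv_loss + tau * flow_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.grid_sample(x, base_grid + flow, align_corners=True).detach()
```

Because the warp is differentiable with respect to the flow, the same loop can serve targeted and untargeted variants by changing only the adversarial term.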
[ -1, -1, 7, -1, 9, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJ5UfkeSM", "ryO_-j5NM", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "ryqaGWfbf", "HJTI0MJMG", "S13te9vzf", "iclr_2018_HyydRMZC-", "ry_xOQ5ef", "S1uJLjCxz", "SJCbAbugf", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "ryZVaVr1M", "ryZVaVr1M", "iclr_2018_HyydRMZC-", "SyqTo1-JG", "Sk3Q5d9AZ", "S1E7VLNCW", "iclr_2018_HyydRMZC-" ]
iclr_2018_ryBnUWb0b
Predicting Floor-Level for 911 Calls with Neural Networks and Smartphone Sensor Data
In cities with tall buildings, emergency responders need an accurate floor level location to find 911 callers quickly. We introduce a system to estimate a victim's floor level via their mobile device's sensor data in a two-step process. First, we train a neural network to determine when a smartphone enters or exits a building via GPS signal changes. Second, we use a barometer equipped smartphone to measure the change in barometric pressure from the entrance of the building to the victim's indoor location. Unlike impractical previous approaches, our system is the first that does not require the use of beacons, prior knowledge of the building infrastructure, or knowledge of user behavior. We demonstrate real-world feasibility through 63 experiments across five different tall buildings throughout New York City where our system predicted the correct floor level with 100% accuracy.
accepted-poster-papers
Reviewers agree that the paper is well done and addresses an interesting problem, but uses fairly standard ML techniques. The authors have responded to rebuttals with careful revisions, and improved results.
train
[ "ry1E-75eG", "B11TNj_gM", "ryca0nYef", "S1MIIQpXM", "SyX3mXp7f", "ryex-7aXz", "S1xJ7MTmM", "Bk4vzMTQG", "S1mOIY2mf", "rJp2Dt3XG", "B1kuDK27M", "B1gGvY3Qf", "BkGbWw9Xf", "H1zsR8cmz", "SJw9Jr4-G", "r1gRb9SWG", "Hkesgr4Zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "Update: Based on the discussions and the revisions, I have improved my rating. However I still feel like the novelty is somewhat limited, hence the recommendation.\n\n======================\n\nThe paper introduces a system to estimate a floor-level via their mobile device's sensor data using an LSTM to determine when a smartphone enters or exits a building, then using the change in barometric pressure from the entrance of the building to indoor location. Overall the methodology is a fairly simple application of existing methods to a problem, and there remain some methodological issues (see below).\n\nGeneral Comments\n- The claim that the bmp280 device is in most smartphones today doesn’t seem to be backed up by the “comScore” reference (a simple ranking of manufacturers). Please provide the original source for this information.\n- Almost all exciting results based on RNNs are achieved with LSTMs, so calling an RNN with LSTM hidden units a new name IOLSTM seems rather strange - this is simply an LSTM.\n- There exist models for modelling multiple levels of abstraction, such as the contextual LSTM of [1]. This would be much more satisfying that the two level approach taken here, would likely perform better, would replace the need for the clustering method, and would solve issues such as the user being on the roof. The only caveat is that it may require an encoding of the building (through a one-hot encoding) to ensure that the relationship between the floor height and barometric pressure is learnt. For unseen buildings a background class could be used, the estimators as used before, or aggregation of the other buildings by turning the whole vector on.\n- It’s not clear if a bias of 1 was added to the forget gate of the LSTM or not. This has been shown to improve results [2].\n- Overall the whole pipeline feels very ad-hoc, with many hand-tuned parameters. Notwithstanding the network architecture, here I’m referring to the window for the barometric pressure, the Jaccard distance threshold, the binary mask lengths, and the time window for selecting p0.\n- Are there plans to release the data and/or the code for the experiments? Currently the results would be impossible to reproduce.\n- The typo of accuracy given by the authors is somewhat worrying, given that the result is repeated several times in the paper.\n\nTypographical Issues\n- Page 1: ”floor-level accuracy” back ticks\n- Page 4: Figure 4.1→Figure 1; Nawarathne et al Nawarathne et al.→Nawarathne et al.\n- Page 6: ”carpet to carpet” back ticks\n- Table 2: What does -4+ mean?\n- References. The references should have capitalisation where appropriate.For example, Iodetector→IODetector, wi-fi→Wi-Fi, apple→Apple, iphone→iPhone, i→I etc.\n\n[1] Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and LarryHeck. Contextual LSTM (CLSTM) models for large scale NLP tasks. arXivpreprint arXiv:1602.06291, 2016.\n[2] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. InProceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350,2015", "The paper proposes a two-step method to determine which floor a mobile phone is on inside a tall building. \nAn LSTM RNN classifier analyzes the changes/fading in GPS signals to determine whether a user has entered a building. 
Using the entrance point's barometer reading as a reference, the method calculates the relative floor the user has moved to using a well known relationship between heights and barometric readings.\n\nThe paper builds on a simple but useful idea and is able to develop it into a basic method for the goal. The method has minimal dependence on prior knowledge and is thus expected to have wide applicability, and is found to be sufficiently successful on data collected from a real world context. The authors present some additional explorations on the cases when the method may run into complications.\n\nThe paper could use some reorganization. The ideas are presented often out of order and are repeated in cycles, with some critical details that are needed to understand the method revealed only in the later cycles. Most importantly, it should be stated upfront that the outdoor-indoor transition is determined using the loss of GPS signals. Instead, the paper elaborates much on the neural net model but delays until the middle of p.4 to state this critical fact. However once this fact is stated, it is obvious that the neural net model is not the only solution.\n\nThe RNN model for Indoor/Outdoor determination is compared to several baseline classifiers. However these are not the right methods to compare to -- at least, it is not clear how you set up the vector input to these non-auto-regressive classifiers. You need to compare your model to a time series method that includes auto-regressive terms, or other state space methods like Markov models or HMMs.\n\nOther questions:\n\np.2, Which channel's RSSI is the one included in the data sample per second?\n\np.4, k=3, what is k?\n\nDo you assume that the entrance is always at the lowest floor? What about basements or higher floor entrances? Also, you may continue to see good GPS signals in elevators that are mounted outside a building, and by the time they fade out, you can be on any floor reached by those elevators.\n\nHow does each choice of your training parameters affect the performance? e.g. number of epoches, batch size, learning rate. What are the other architectures considered? What did you learn about which architecture works and which does not? Why?\n\nAs soon as you start to use clustering to help in floor estimation, you are exploiting prior knowledge about previous visits to the building. This goes somewhat against the starting assumption and claim.", "The authors motivate the problem of floor level estimation and tackle it with a RNN. The results are good. The models the authors compare to are well chosen. As the paper foremost provides application (and combination) of existing methods it would be benefitial to know something about the limitations of their approach and about the observed prequesits. ", "Based on this data, and these results, the line between both models is certainly more blurry. What is clear is that the neural network models do outperform the other models. We've changed some of the wording to highlight this point and not make it a strictly LSTM vs others approach but instead a neural network vs others approach. Although given more complicated examples, we think the LSTM would perform better on the IO task. 
However, generating more data is very time-consuming so it makes the overall problem difficult to model.\n\nBut we believe the point you mentioned with hierarchical LSTMs is extremely relevant in this context because it allows a foundation on which to build future work to model the full problem end-to-end with a model based on LSTM architectures and the hierarchical approaches mentioned. We added this point in the future direction and certainly think it's feasible but it'll likely require more data and model design. \n\n However, we're still fine-tuning the LSTM to see why there was a 1% drop.", "Thanks for adding this. It starts to look like there's nothing to choose between the LSTM and Feedforward network ...", "The table has been updated. The algorithm did fairly well on this task when using each of the classifiers with the exception of the HMM. The difference between classifiers on this task would likely come through when the possibility of acquiring a GPS lock during a trial comes up such as a glass elevator on the outside of the building. In this case, the LSTM would likely produce less false positives as indicated by the increased accuracy in the IO classification task. Fewer false-positives mean that the algorithm would likely identify the correct anchor barometric pressure point at the entrance to the building instead of a stairwell or entering the building from the glass elevator.", "Yes, LSTM only. Generating others now", "For the floor prediction task the result given is using the LSTM right (I don't think it's actually specified)? Do you have results for the baselines for this? ", "Thank you once again for your feedback during these last few weeks. We've gone ahead and completed the following: \n1. Added the HMM baseline\n2. Reran all the models and updated the results. The LSTM and feedforward model performed the same on the test set. We've reworded the results and method page to reflect this.\n3. By increasing the classifier accuracy we improved the floor-prediction task to 100% with no margin of error on the floor predictions.\n4. We tried the hierarchical LSTM approach as suggested but did not get a model to work in the few weeks we experimented with it. It looks promising, but it'll need more experimentation. We included this approach in future works section.\n5. We released all the code at this repository: https://github.com/blindpaper01/paper_FMhXSlwRYpUtuchTv/ \n\nAlthough the code is mostly organized, works and is commented it'll be polished up once it needs to be released to the broader community. The Sensory app was not released yet to preserve anonymity. \n\n6. Fixed typos (some of the numbering typos are actually from the ICLR auto-formatting file).\n\nPlease let us know if there's anything else you'd like us to clarify.\nThank you so much for your feedback once again!\n", "Dear reviewer,\n\nWe've released a main update listed above. Please let us know if there's anything we can help clarify! \n\nThank you once again for your feedback!", "Hello. We've added the HMM baseline. We apologize for the delay, we wanted to make sure we set the HMM baseline as rigorous as possible.\n\nThe code is also available for your review. \n\nThank you once again for your feedback!\n", "Hello. We've clarified these issues in the primary post above. Please let us know if we've addressed your concerns. \n\nThank you once again for your valuable feedback\n", "Hi. 
Once again, thank you for your feedback.\n\nre HMM:\nYes, currently finalizing the HMM baseline right now and will be adding to the paper by today. \n\nre Model accuracy:\nThese are the results from the latest hyperparameter optimization. We're verifying these today once the tests complete. Although the model accuracy dropped for the indoor/outdoor classification task, it increased to 100% with no margin of error for the floor prediction task. The other baselines don't achieve near the same result on the floor prediction task. However, we're running a final optimization today to ensure we have the best results given the hyperparameter search.\n\nWe can add those model results to the floor prediction task for clarification.", "Can you explain why in the table 1 in the revision from 29th October the validation and test accuracy of the LSTM are 0.949 and 0.911 and in the most recent version they have dropped to 0.935 and 0.898 (worse than the baselines)?\n\nAlso I agree with the statement by reviewer 2:\n\n\"The RNN model for Indoor/Outdoor determination is compared to several baseline classifiers. However these are not the right methods to compare to -- at least, it is not clear how you set up the vector input to these non-auto-regressive classifiers. You need to compare your model to a time series method that includes auto-regressive terms, or other state space methods like Markov models or HMMs.\"\n\nIt seems like no changes have been made to address this.", "\nThank you so much for your valuable feedback! I want to preface the breakdown below by letting you know that we added time-distributed dropout which helped our model's accuracy. The new accuracy is 100% with no margin of error in the floor number.\n\n1. As of June 2017 the market share of phones in the US is 44.9% Apple and 29.1% Samsung [1]. 74% are iPhone 6 or newer [2]. The iPhone 6 has a barometer [3]. Models after the 6 still continue to have a barometer. \nFor the Samsung phones, the Galaxy s5 is the most popular [4], and has a barometer [5].\n\n\n[1] https://www.prnewswire.com/news-releases/comscore-reports-june-2017-us-smartphone-subscriber-market-share-300498296.html\n[2] https://s3.amazonaws.com/open-source-william-falcon/911/2017_US_Cross_Platform_Future_in_Focus.pdf\n[3] https://support.apple.com/kb/sp705?locale=en_US\n[4] https://deviceatlas.com/blog/most-popular-smartphones-2016\n[5] https://news.samsung.com/global/10-sensors-of-galaxy-s5-heart-rate-finger-scanner-and-more\n\n2. Makes sense, we separated it for the non deep learning audience trying to understand it. However, happy to update everything to say LSTM.\n3. Thanks for this great suggestion. We had experimented with end-to-end models but decided against it. We did have a seq2seq model that attempted to turn the sequence of readings into a sequence of meter offsets. It did not fully work, but we're still experimenting with it. This model does not however get rid of the clustering step. \n\nAn additional benefit of separating this step from the rest of the model is that it can be used as a stand-alone indoor/outdoor classifier. \n\nI'll address your concerns one at a time:\n a. In which task would it perform better? The indoor-outdoor classification task or the floor prediction task?\n c. What about this model would solve the issue of the user being on the roof?\n d. Just to make sure I understand, the one-hot encoding suggestion aims to learn a mapping between the floor height and the barometric pressure which in turn removes the need for clustering?\n e. 
This sounds like an interesting approach, but seems to fall outside of the constraint of having a self-contained model which did not need prior knowledge. Generating a one-hot encoding for every building in the world without a central repository of building plans makes this intractable.\n\n4. We used the bias (tensorflow LSTM cell). https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell\n5. Happy to add explanations for why the \"ad-hoc\" parameters were chosen:\n a. Jaccard window, binary mask lengths, and window length were chosen via grid search.\n b. Will add those details to the paper.\n\n6. Yes! All the data + code will be made public after reviews. However, if you feel strongly about having it before, we can make it available sooner through an anonymous repository. In addition, we're planning on releasing a basic iOS app which you'll be able to download from the app store to run the model on your phone and see how it works on any arbitrary building for you.\n\n7. Yes, many typos. Apologize for that. We did a last minute typo review too close to the deadline and missed those issues. This is in fact going to change now that we've increased the model accuracy to 100% with no floor margin of error.\n\nWe're updating the paper now and will submit a revised version in the coming weeks", "Thank you for your feedback! We're working on adding your suggestions and will post an update in the next few weeks.\n\nWanted to let you know we've improved the results from 91% to 100% by adjusting our regularization mechanism in the LSTM. We'll make the appropriate changes to the paper.\n\n\"The paper could use some reorganization\"\n1. Agreed and the updated draft will have:\n - Cleaner organization\n - Upfront clarification about the GPS signal\n - Shortened discussion about the neural net model\n\n\"The RNN model for Indoor/Outdoor determination is compared to several baseline classifiers.\"\n2. The problem is reduced to classification by creating a fixed window of width k (in our case, k=3) where the middle point is what we're trying to classify as indoors/outdoors. \n - Happy to add the HMM comparison.\n - Happy to add a time series comparison.\n\n\"p.2, Which channel's RSSI is the one included in the data sample per second?\n\"\n3. We get the RSSI strength as proxied by the iPhone status bar. Unfortunately, the API to access the details of that signal is private. Therefore, we don't have that detailed information. However, happy to add clarification about how exactly we're getting that signal (also available in the sensory app code).\n\n\n4. k is the window size. Will clarify this. \n\n\"Do you assume that the entrance is always at the lowest floor? What about basements or higher floor entrances? \"\n\n5. We actually don't assume the entrance is on the lower floors. In fact, one of the buildings that we test in has entrances 4 stories appart. This is where the clustering method shines. As soon as the user enters the building through one of those lower entrances, the floor-level indexes will update because it will detect another cluster.\n\n\n\"Also, you may continue to see good GPS signals in elevators that are mounted outside a building, and by the time they fade out, you can be on any floor reached by those elevators.\"\n6. Yup, this is true. Unfortunately this method does heavily rely on the indoor/outdoor classifier. \n - We'll add a brief discussion to highlight this issue.\n\n\n\"How does each choice of your training parameters affect the performance? e.g. 
number of epoches, batch size, learning rate. What are the other architectures considered? What did you learn about which architecture works and which does not? Why?\n\"\n7. We can add a more thorough description about this and provide training logs in the code that give visibility into the parameters for each experiment and the results.\n - The window choice (k) actually might be the most critical hyperparameter (next to the learning rate). The general pattern is that a longer window did not help much. \n - The fully connected network actually does surprisingly well, but the RNN generalizes slightly better. A 1-layer RNN did not provide much modeling power. It was the multi-layer model that added the needed complexity to capture these relationships. We also tried bi-directional but it failed to perform well. \n\n\"As soon as you start to use clustering to help in floor estimation, you are exploiting prior knowledge about previous visits to the building. This goes somewhat against the starting assumption and claim.\n\"\n8. Fair point. We provide a prior for each situation that will get you pretty close to the correct floor-level. However, it's impossible to get more accurate without building plans, beacons, or some sort of learning. We consider the clustering method more of a learning approach: it updates the estimated floor heights as either the same user or other users walk in that building. In the case where the implementer of the system (i.e., a company) only wants to use a single user's information and keep it 100% on their device, the clustering system will still work using that user's repeated visits. In the case where a central database might aggregate this data, the clusters for each building will develop a lot faster and converge on the true distribution of floor heights in a building.", "Thank you for your valuable feedback!\nIn Appendix A, Section B we provide a lengthy discussion about potential pitfalls of our system in a real-world scenario and offer potential solutions.\n\nWas there something in addition to this that you'd like to see?" ]
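Editor's note for readers less familiar with the second stage discussed in this thread: the conversion from a change in barometric pressure to a floor offset is usually done with the standard international barometric formula. The sketch below is purely illustrative; the 2.5 m nominal floor height, the fixed entrance floor index, and the use of a single entrance reading p0 as the reference are assumptions for the example, not the paper's exact constants or its clustering-based floor-height estimates.

```python
def pressure_to_height_m(p_hpa, p0_hpa):
    """Height above the reference pressure p0, via the international barometric formula."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def estimate_floor(p_indoor_hpa, p_entrance_hpa, floor_height_m=2.5, entrance_floor=1):
    """Relative floor estimate from the pressure change since entering the building."""
    delta_h = pressure_to_height_m(p_indoor_hpa, p_entrance_hpa)
    return entrance_floor + round(delta_h / floor_height_m)

# Example: a reading about 4 hPa below the entrance reading corresponds to
# roughly 33 m of ascent, i.e. about 13 floors above the entrance.
print(estimate_floor(p_indoor_hpa=1009.0, p_entrance_hpa=1013.0))  # -> 14
```

In practice the reference p0 drifts with weather, which is one reason the discussion above emphasizes anchoring it at the detected indoor-outdoor transition.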
[ 6, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryBnUWb0b", "iclr_2018_ryBnUWb0b", "iclr_2018_ryBnUWb0b", "SyX3mXp7f", "ryex-7aXz", "Bk4vzMTQG", "Bk4vzMTQG", "S1mOIY2mf", "iclr_2018_ryBnUWb0b", "ryca0nYef", "B11TNj_gM", "H1zsR8cmz", "H1zsR8cmz", "iclr_2018_ryBnUWb0b", "ry1E-75eG", "B11TNj_gM", "ryca0nYef" ]
iclr_2018_SJLlmG-AZ
Understanding image motion with group representations
Motion is an important signal for agents in dynamic environments, but learning to represent motion from unlabeled video is a difficult and underconstrained problem. We propose a model of motion based on elementary group properties of transformations and use it to train a representation of image motion. While most methods of estimating motion are based on pixel-level constraints, we use these group properties to constrain the abstract representation of motion itself. We demonstrate that a deep neural network trained using this method captures motion in both synthetic 2D sequences and real-world sequences of vehicle motion, without requiring any labels. Networks trained to respect these constraints implicitly identify the image characteristic of motion in different sequence types. In the context of vehicle motion, this method extracts information useful for localization, tracking, and odometry. Our results demonstrate that this representation is useful for learning motion in the general setting where explicit labels are difficult to obtain.
accepted-poster-papers
An interesting model for an interesting problem, but perhaps of limited applicability - it doesn't achieve state-of-the-art results on practical tasks. The paper has other limitations, though the authors have addressed some in rebuttals.
test
[ "S18F244xz", "SkG-iVtlz", "rJp8iHslM", "BkoQQPTXG", "BJokPHp7z", "rkZIUS6QM", "B1KiUB6mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose to learn the rigid motion group (translation and rotation) from a latent representation of image sequences without the need for explicit labels.\nWithin their data driven approach they pose minimal assumptions on the model, requiring the group properties (associativity, invertibility, identity) to be fulfilled.\nTheir model comprises CNN elements to generate a latent representation in motion space and LSTM elements to compose these representations through time.\nThey experimentally demonstrate their method on sequences of MINST digits and the KITTI dataset.\n\nPros:\n- interesting concept of combining algebraic structure with a data driven method\n- clear idea development and well written\n- transparent model with enough information for re-implementation\n- honest pointers to scenarios where the method might not work well\n\nCons:\n- the method is only intrinsically evaluated (Tables 2 and 3), but not compared with results from other motion estimation methods", "Paper proposes an approach for learning video motion features in an unsupervised manner. A number of constraints are used to optimize the neural network that consists of CNN + RNN (LSTM). Constraints stem from group structure of sequences and include associativity and inevitability. For example, forward-backward motions should cancel each other out and motions should be additive. Optimized network is illustrated to produce features that can be used to regress odometry. \n\nOverall the approach is interesting from the conceptual point of view, however, experimental validation is very preliminary. This makes it difficult to asses the significance and viability of the approach. In particular, the lack of direct comparison, makes it difficult to asses whether the proposed group constraints are competitive with brightness constancy (or similar) constraints used to learn motion in an unsupervised manner in other papers. \n\nIt is true that proposed model may be able to learn less local motion information, but it is not clear if this is what happens in practice. In order to put the findings in perspective authors should compare to unsupervised optical flow approach (e.g., unsupervised optical flow produced by one of the proposed CNN networks and used to predict odometer on KITTI for fair comparison). Without a comparison of this form the paper is incomplete and the findings are difficult to put in the context of state-of-the-art.\n\nAlso, saying that learned features can predict odometry “better than chance” (Section 4.2 and Table 2) seems like a pretty low bar for a generic feature representation. ", "The paper presents a method that given a sequence of frames, estimates a corresponding motion embedding to be the hidden state of an RNN (over convolutional features) at the last frame of the sequence. The parameters of the motion embedding are trained to preserve properties of associativity and invertibility of motion, where the frame sequences have been recomposed (from video frames) in various way to create pairs of frame sequences with those -automatically obtained- properties. This means, the motion embedding is essentially trained without any human annotations.\nExperimentally, the paper shows that in synthetic moving MNIST frame sequences motion embedding discovers different patterns of motion, while it ignores image appearance (i.e., the digit label). The paper also shows that linear regressor trained in KITTI on top of the unsupervised motion embedding to estimate camera motion performs better than chance. 
\nQ to the authors: what labelled data were used to train the linear regressor in the KITTI experiment? \nEmpirically, it appears that supervision by preserving group transformations may not be immensely valuable for learning motion representations. \n\n\n\nPros\n1)The neural architecture for motion embedding computation appears reasonable\n2)The paper tackles an interesting problem\n\nCons\n1)For a big part of the introduction the paper refers to the problem of \"learning motion\" or \"understanding motion\" without being specific about what it means by that. \n2)The empirical results are not convincing of the strength of imposing group transformations for self-supervised learning of motion embeddings.\n3)The KITTI experiment is not well explained as it is not clear how this regressor was trained to predict egomotion out of the motion embedding.\n", "- Added a paragraph to the introduction expanding the explanation of motion and its relationship to image transformations.\n- Added a paragraph to the introduction explaining how we operationalize the goal of understanding motion with a model.\n- Added a paragraph to section 4.2 of the experiments section describing the Flow+PCA method and interpreting results.\n\n- Expanded Table 2 with the results on the Flow+PCA baseline.\n- Revised Table 3 caption for clarity.\n- Modified figure 4 image and caption to better explain the experiment and interpretation. \n\n- Added a supplemental section with a more extensive description of the Flow+PCA experiments along with interpretation and comparison to the group-based method. Three new figures added (Figures 6, 7, and 8).", "Thank you for your review and comments. We have added a comparison to a self-supervised optical flow method to better contextualize our method. See the response to AnonReviewer3 for more details (under the heading Compare to unsupervised optical flow).", "Thank you for your helpful comments and suggestions. We have added a comparison to a recent self-supervised optical flow approach for the KITTI visual odometry experiments as suggested and updated the text accordingly. See below for responses to specific comments.\n\n-Compare to unsupervised optical flow-\nTo put our method in context, we have included comparisons to a recent method for self-supervised optical flow estimation (Yu et al 2016). The output of this method is a dense optical flow field. In order to regress from this flow field to the camera motion parameters, we downsample the flow fields and run PCA over the full training set of fields. We then linearly regress from the flow field PCs to the camera motion parameters using least squares. Flow fields are computed at a resolution of 320x96, and PCA is computed on downsampled flow fields of resolution 160x48. The results from this method on KITTI are now included in the paper in table 2 and figure 6 (in the supplement). \n\nThe full optical flow method outperforms our method. We note that egomotion estimation benefits greatly from maintaining information about spatial position (which a flow field does, but our method does not). KITTI is characterized by stereotyped depth and is reasonably modelled as rigid, and under these circumstances camera translation and rotation can be estimated from a full flow field nearly linearly. The good performance of self-supervised flow + PCA here highlights the clear advantage of domain-restricted models and learning rules in a setting where those domain restrictions are appropriate. 
Our learning rule and model do not make these more restrictive assumptions but still performs reasonably in this setting.\n\nTo further contextualize our method, we also show the flow results as a function of the number of flow PCs included. As shown in figure 6 (in the supplement), our method outperforms the flow method up to four flow PCs, and outperforms estimates of x-dimension rotation up to around ten flow PCs. These results bolster our claim of learning a reasonable representation of motion with minimal domain assumptions.\n\n-Not clear it learns less local motion representation-\nThank you for drawing attention to this point. Unlike standard models of motion, our model is designed so that it can capture nonlocal, nonrigid motion in principle. By contrast, standard models are designed to capture only local motion (optical flow) or global motion with rigid structure (egomotion). In practice, it appears that our learning rule does not succeed at capturing all of the nonlocal structure that the model can support. However, the KITTI results show that our trained model can be used to linearly regress reasonable estimates of camera translation and rotation, suggesting that the representation is nonlocal to some extent. The model and learning rule proposed here should serve as a baseline for future, more powerful learning rules that are still able to model nonrigid, nonlocal motion.\n\n-\"Better than chance\" as a low bar.-\nOur results show the feasibility and limits of what can be learned about motion using a model that makes as few assumptions as possible about image motion and by using a minimal learning rule to impose these constraints. As we mention, our method isn't competitive with state of the art results. To improve the interpretability of this method, we have included a systematic comparison of our method with a self-supervised optical flow method as a function of the amount of flow information included in the regression (table 2 and figure 6 (supplemental)).", "Thank you for your helpful comments. We have modified the paper to better explain the points you mention, and we hope this clarifies the text. See below for responses to specific comments.\n\n-What labelled data for linear regressor, how was KITTI trained? experiment not well explained-\nFor the regression experiments, we use the sequences of the KITTI visual odometry benchmark, which include annotations of camera translation and rotation. To test our model on this dataset, we linearly regress from the learned representation to the camera motion between an image pair. We do not fine-tune our model with labelled camera motion, but perform linear regression using least squares on the learned representations. The representations themselves are trained only on KITTI tracking, a different subset of KITTI than we use for least squares. Results are shown on the full set of KITTI visual odometry sequences. We also use least squares to regress from optical flow PCA components in the updated experiments (we discuss this in more depth in the response to reviewer 3, under the heading Compare to unsupervised optical flow).\n \nWe have also expanded figure 4 to explain more clearly the design of the KITTI interpolation experiments. The purpose of this experiment is to test whether our method is sensitive to deviations from the motion subspace - i.e. 
whether it is sensitive to the difference between realistic and subtly unrealistic image sequences.\n\n-\"learning motion\", \"understanding motion\"-\nThank you for pointing out the ambiguity in these terms. By understanding motion, we mean learning a model that characterizes image transformations due to structure-preserving changes in time. Not all changes in image sequences reflect motion (e.g. other changes are due to cuts in a video, etc.), and here we are concerned with characterizing the subspace of image transformations due specifically to motion. We have added a clarification of these terms in the updated manuscript.\n\nMotion has several properties that we use to operationalize to what extent a learned representation characterizes the motion subspace. (1) A representation that characterizes motion can be read out to estimate the motion in the scene. In particular, it can be used to regress metric properties of motion, such as camera translation and rotation. We test this prediction by regressing to camera translation and rotation on KITTI and by tSNE clustering on MNIST digits, which reveals clustering by translation. (2) The model should also represent the same motion identically regardless of the image content. For example, a motion of a digit moving at one pixel per frame to the right should be represented the same whether the digit is a \"5\" or a \"3\". We demonstrate this with the tSNE clustering results in the paper. (3) Image sequences produced by natural motion should be represented differently than image sequences not produced by natural motion. That is, a representation that characterizes motion can distinguish realistic motion from unrealistic motion. We demonstrate this property in our interpolation experiments.\n\n-Strength of group representations-\nWe agree that stronger, domain-specific learning rules can be obtained by incorporating more constraints for a specific context. The smoothness constraint used in optical flow is one such rule in the context of locally rigid motion. Here we show that the more general rules based on groups can lead to representations useful for motion. \n\nThe group-based learning rules are complementary to more domain-specific learning rules, but they are also applicable in settings where domain-specific rules like smoothness or brightness constancy are not appropriate. We hope that our results will be useful to future work designing learning rules incorporating both more generic and more domain-specific inductive biases for learning motion and other kinds of image transformations.\n" ]
[ 7, 5, 4, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJLlmG-AZ", "iclr_2018_SJLlmG-AZ", "iclr_2018_SJLlmG-AZ", "iclr_2018_SJLlmG-AZ", "S18F244xz", "SkG-iVtlz", "rJp8iHslM" ]
iclr_2018_r1HhRfWRZ
Learning Awareness Models
We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent's own body. That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals. Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. We show that active data collection by maximizing the entropy of predictions about the body---touch sensors, proprioception and vestibular information---leads to learning of dynamic models that show superior performance when used for control. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world. Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.
accepted-poster-papers
Since this seems interesting, I suggest accepting this paper at the conference. However, there are still some serious issues with the paper, including missing references.
test
[ "BJb0-vDxz", "BkpHBrUNz", "ryAPav5lz", "Skozh1abf", "rkarB4izM", "BkpoxZszM", "H1qYeZjfz", "HJxOWU5MM", "r1MFgUcfz", "SkzIyI5zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper proposes an architecture for internal model learning of a robotic system and applies it to a simulated and a real robotic hand. The model allows making relatively long-term predictions with uncertainties. The models are used to perform model predictive control to achieve informative actions. It is shown that the hidden state of the learned models contains relevant information about the objects the hand was interacting with. \n\nThe paper reads well. The method is sufficiently well explained and the results are presented in an illustrative and informative way. \nupdate: See critique in my comment below.\nI have a few minor points:\n\n- Sec 2: you may consider to cite the work on maximising predictive information as intrinsic motivation:\nG. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.\n- Fig 2: bottom: add labels to axis, and maybe mention that same color code as above\n- Sec 4 par 3: .... intentionally not autoregressive: w.r.t. to what? to the observations? \n- Sec 7.1: how is the optimization for the MPC performed? Which algorithm did you use and long does the optimization take? \n in first Eq: should f not be sampled from GMMpdf, so replace = with \\sim\n\nTypos:\n- Sec1 par2: This pattern has has ...\n- Sec 2 par2: statistics ofthe\n- Sec 4 line2: prefix of an episode , where (space before ,)\n \n", "I was somehow overly enthusiastic when giving my initial score because I really like the research direction and also found the technique interesting of using the Renyi entropies etc.\nHowever, when looking at the paper again, I share some of the concerns of the other reviewers, mostly related to the presentation and controls.\n\nI am quite surprised to see that:\n1) the explanation of how the MPC (as given above) did not go into paper. The authors changed the explanation of the MPC but did not write that they did it with a gradient descent using ADAM and projection etc. The paper should contain enough information to reproduce it!\n2) none of the suggested literature (also by the other reviewers) made it into the paper\n3) The authors did no attempt to improve their presentation, e.g. of Figure 4, which is indeed far from ideal (as noted by the other reviewers). Why not picking a few sensors with qualitatively different predicted certainty and show those enlarged. It would also be interesting to see what is the overall prediction performance of the model. \n4) no simple control experiment was done after the critic, e.g. using some random policy/grasps and compare how much more informative the \"aware\" actions are.\n\nI still think it is an important contribution. I would nevertheless urge the authors to fix as many of the above-mentioned problems as possible in the camera-ready version.\n\nDetails: \n7.1 last sentence: enforce _slew_ contraints", "Summary:\nThe paper describes a system which creates an internal representation of the scene given observations, being this internal representation advantageous over raw sensory input for object classification and control. The internal representation comes from a recurrent network (more specifically, a sequence2sequence net) trained to maximize the likelihood of the observations from training\n\nPositive aspects:\nThe authors suggest an interesting hypothesis: an internal representation of the world which is useful for control could be obtained just by forcing the agent to be able to predict the outcome of its actions in the world. 
This hypothesis would enable robots to train it in a self-supervised manner, which would be extremely valuable.\n\nNegative aspects:\nAlthough the premise of the paper is interesting, its execution is not ideal. The formulation of the problem is unclear and difficult to follow, with a number of important terms left undefined. Moreover, the experimental task is too simplistic; from the results, it's not clear whether the representation is anything more than a trivial accumulation of sensory input.\n\n- Lack of clarity:\n-- what exactly is the \"generic cost\" C in section 7.1?\n-- why are both f and z parameters of C? f is directly a function of z. Given that the form of C is not explained, it seems like f could be directly computed as part of C.\n-- what is the relation between actions a in section 7.1 and u in section 4?\n-- How is the minimization problem over u_{1:T} solved?\n-- Are the authors sure that they gather information through \"maximizing uncertainty\" (section 7.1)? This sounds profoundly counterintuitive. Maximizing the uncertainty in the world state should result in minimum information about the world's state. I would assume this is a serious typo, but cannot confirm given that the relation between the minimized cost C and the Renyi entropy H is not explicitly stated. \n-- When the authors state that \"The learner trains the model by maximum likelihood\" in section 7.1, do they refer to the prediction model or the control model? It would seem that it is the control model, but the objective being \"the same as in section 6\" points in the opposite direction.\n-- What is the method for classifying and/or regressing given the features and internal representation? This is important because, if the method was a recurrent net with memory, the differences between the two representations would probably be minimal.\n\n- Simplistic experimental task:\nMy main takeaway from the experiments is that having a recurrent network processing the sensory input provides some \"memory\" to the system which reduces uncertainty when sensory data is ambiguous. This is visible from the fact that the performance of both systems is comparable at the beginning, but degrades for sensory input when the hand is open. This could be achieved in many simple ways, like modeling the classification/regression problem directly with an LSTM for example. Simpler modalities of providing a memory to the system should be used as a baseline.\n\n\nConclusion:\nAlthough the idea of learning an internal representation of the world by being able to predict its state from observations is interesting, the presented paper is a) too simplistic in its experimental evaluation and b) too unclear about its implementation. Consequently, I believe the authors should improve these aspects before the article is valuable to the community.\n\n", "The authors explore how sequence models that look at proprioceptive signals from a simulated or real-world robotic hand can be used to decode properties of objects (which are not directly observed), or produce entropy maximizing or minimizing motions.\n\nThe overall idea presented in the paper is quite nice: proprioception-based models that inject actions and encoder/pressure observations can be used to measure physical properties of objects that are not directly observed, and can also be used to create information gathering (or avoiding) behaviors. There is some related work that the authors do not cite that is highly relevant here. A few in particular come to mind:\n\nYu, Tan, Liu, Turk. 
Preparing for the Unknown: uses a sequence model to estimate physical properties of a robot (rather than unobserved objects)\n\nFu, Levine, Abbeel. One-Shot Learning of Manipulation Skills: trains a similar proprioception-only model and uses it for object manipulation, similar idea that object properties can be induced from proprioception\n\nBut in general the citations to relevant robotic manipulation work are pretty sparse.\n\nThe biggest issue with the paper though is with the results. There are no comparisons or reasonable baselines of any kind, and the reported results are a bit hard to judge. As far as I can understand, there are no quantitative results in simulation at all, and the real-world results are not good, indicating something like 15 degrees of error in predicting the pose of a single object. That doesn't seem especially good, though it's also very hard to tell without a baseline.\n\nOverall, this seems like a good workshop paper, but probably substantial additional experimental work is needed in order to evaluate the practical usefulness of this method. I would however strongly encourage the authors to pursue this research further: it seems very promising, and I think that, with more rigorous evaluation and comparisons, it could be quite a nice paper!\n\nOne point about style: I found the somewhat lofty claims in the introduction a bit off-putting. It's great to discuss the greater \"vision\" behind the work, but this paper suffers from a bit too much high-level vision and not enough effort put into explaining what the method actually does.", "I appreciate the additional details provided in the response, and the additional videos. However, I don't think that the authors have really addressed my main concern: the difficulty of judging the quality of the results. The additional results in the videos are purely qualitative, and it's impossible to tell what works and what doesn't. In the paper, there are three results figures: Figure 4 is essentially impossible to understand, it's a wall seemingly random plots. Figure 2 appears to show that, on some time steps, directly regressing from sensors does as well as the proposed model, but the proposed model tends to do well on more time steps. This is not surprising -- if the sensors at the \"best\" time step are sufficient to infer the object, then a recurrent model that tries to track and predict future sensory readings will probably have hidden state that correlates with history, making it slightly better for inferring the information about the objects. How hard or easy this task is, or how significant this result is, is impossible to determine here. In the real world results, I'm not convinced by the 15 degree error. There is again no serious baseline, and the lower bound baseline (a feedforward memoryless network) seems to do about equally well most of the time. Simply using a recurrent model would provide at least an upper bound baseline, but realistically, if this task is really too hard (or too easy?), perhaps it's just not the right task for evaluating the method.\n\nThere are some additional model-based control results provided in the videos. They look very interesting and promising. But these too are hard to interpret without a serious quantitative evaluation. It seems that sometimes the hand figures out that closing fingers correlates with higher force -- that makes sense, and it suggests that model-based RL works. 
There are other recent papers on model-based RL too, including using neural network models (including many papers you don't cite).\n\nRegarding the two prior works: it doesn't seem like they are cited in the updated draft. I understand these papers are not the same, but I disagree with the authors that they are irrelevant to cite. They aren't. Specifically:\n\nYu et al. should really be compared to. In fact, I think you might actually be doing this in the videos (the LSTM baseline), but since this information is not in the paper (and neither is the citation), it's impossible for me to tell. Yes, Yu et al. is doing something different, and I get that your paper uses \"less information\" in some sense. I would argue it doesn't, it's just a peculiarly circuitous way to learn to identify objects, but I do think your viewpoint on this is reasonable, if debatable. However, there still needs to be an honest effort at a comparative evaluation and proper overview of the relevant literature.\n\nFu et al. (which I assume is what you call \"Lu et al\") should be cited and discussed, as should other recent model-based RL work. You say \"there have been many success stories of model-free RL with deep neural representations, the same cannot be said about model-based RL\" -- well, here is a paper doing model-based RL that predates yours, and while it is doing different tasks, clearly the idea is related. There are other papers that do model-based RL too, many of which are not cited or compared to. Simply asserting that they are not \"successful\" is not going to cut it, the burden is on you to compare and/or discuss, instead of pretending that they don't exist.\n\nTo be clear: I do think that there is something really interesting about this paper. I think what that something is is an incremental but valuable improvement in training predictive models for model-based RL. I would really like to give it a higher score, but in its present state, this just doesn't seem like a paper I would expect to see published in an academic conference, but rather a workshop paper. The rigorous evaluation, comparison to prior work, and a serious attempt to summarize prior work in this area is missing. I think this is unfortunate, because I do suspect things actually work reasonably well in some cases, and if the authors scope their claims properly, acknowledge what works and what doesn't, actually provide rigorous quantitative evaluations, and properly position their paper in regard to prior work, I think it would be a good paper.", "Videos 1 (https://www.youtube.com/watch?v=BogIU66kfpo) and 2 (https://www.youtube.com/watch?v=jyAJEcFybpI) show how we can successfully predict (blue) 132 hand measurements (red) up to 500 steps ahead! The learned online, probabilistic predictive models are very effective at long-term prediction, and not just predicting the next observation.\n\nIn Video 3 (https://www.youtube.com/watch?v=fCm7iQdFXCs) we show the results of regressing from the internal states of the hand model --- which was only trained to predict the 132 hand signals, ie proprioception, vestibular info and touch sensors --- to the shape of the object that was touched (in this video, our model is called PreCoNN). It is crucial to emphasize that what we are aiming to demonstrate here is unsupervised learning, not supervised learning. That is, we train the body model with body labels (proprioception, touch and vestibular info) only. 
We then want to show that the learned representations of the body model can be used to predict properties of objects in the world. That is by modelling the body only, we want to show that we can acquire persistent knowledge about the shape of objects in the world. The motivation for this being that the world is very complicated and ever changing, so it might be easier to focus on modelling the body.\n\nWe argue that the learned representations are persistent because even when the hand is not touching the object, the model is still aware of the shape of the object. To show that this is true, that is that information about object shape is present in the body model, we use a simple MLP (crucially with not time dependence) to predict shape. We only do this as a diagnostic and not because this is the final goal. Again the end goal is not to predict shape from touch, but rather to show that a model trained only on body measurements can encode information about shapes of objects in the world from recent interactions. We simply want to show this information is there. \n\nIf we had object shape labels, we could simply train supervised models (eg feedforward NNs or LSTMs). If we do this we get the other videos (RawNN and RawLSTM) as shown in Video 3. Clearly LSTM which is trained in a supervised manner does better. NN doesn’t model dynamics and so it struggles when the hand stops touching the object. However, what is cool about our result is that PreCoNN was pre-trained without access to object shape labels. There was a initial phase of unsupervised learning to obtain all the features, these are then fixed and subsequently a simple MLP is sufficient to predict the shape of the object from the state of the forward model. That is by modelling the body only, we can learn good features for doing predictions about the world.\n\nIn Video 4 (https://www.youtube.com/watch?v=qiez5Bziyp8), we show the behaviour of the hand using the training object of maximizing the Renyi entropy. Note the hand seeks to touch the objects and manipulate them! \n\nVideos 5-8 show that the learned model can be used with novel control objectives, respectively maximizing the Renyi entropy of the fingertips only (https://www.youtube.com/watch?v=3Hh5WQ5HeSs) with the fingertips learning to touch each other, maximising pressure on the fingertips (https://www.youtube.com/watch?v=TCiQZfpR1zc) with the fingertips seeking contact, maximizing pressure for the fingers and palm sensors (https://www.youtube.com/watch?v=mbBpfWDA6B4), and minimizing Renyi entropy for all sensors (https://www.youtube.com/watch?v=bHmzG6mxTh0). In the last video, the hand avoids touch the object as one would expect because in this case it has no uncertainty in predicting what it feels. All these results are consistent with each other.\n\nFinally, Video 9 (https://www.youtube.com/watch?v=ojTSIb6OT9w) shows the data collection part of our Shadow hand experiment.\n\nAll videos collected into a single playlist: https://www.youtube.com/playlist?list=PLwtvFFFFBVYL5GzMxq40_fjNUQD91vw4O", "We thank the three reviewers. Their comments are not only valid but also very helpful. They will no doubt help us improve the presentation of this work considerably.\n\nWe will address the reviewers’ questions in detail individually. To address the general concern about baselines and results, we have experimented with several variants of our model. 
While time prohibits us from offering full quantitative ablations at this stage, we have produced some videos (included as a comment in this thread) to provide a more clear illustration of what we have accomplished using one of our models. (We agree that in the paper we could do a lot better to communicate our results.)\n \nIn summary, we have shown that while learning to explore by maximising Renyi entropy of body predictions, it is possible to simultaneously learn dynamic, long-term, predictive neural network models of body measurements, including proprioceptive signals, vestibular information and touch sensors. We have shown that the learned body models can be used to solve other control tasks. While there have been many success stories of model-free RL with deep neural representations, the same cannot be said about model-based RL. In this sense, this paper puts forward a rare example of learning to control while (simultaneously) learning neural dynamic models effectively. It however goes beyond this to provide evidence of the value of embodiment in AI, in particular that it is possible to learn persistent, holistic, dynamic representations of the world by simply learning to predict body signals. We believe this position could be helpful to advance research in AI.", "\"Yu, Tan, Liu, Turk. Preparing for the Unknown...\"\n\"Fu, Levine, Abbeel. One-Shot Learning of Manipulation Skills...\"\n\nThank you for these very relevant references. There are similarities, but important differences. The paper of Yu and colleagues uses labels of the world properties (mu in their notation) to pre-learn the model in a supervised way. We on the other hand aim to show that properties of the world come to be represented in a model that is only trained with body labels. While relevant the two papers are markedly different. The paper of Lu and colleagues, and in fact may of their subsequent works including guided policy search, use classical control ideas and iterate between fitting trajectories and improving control policies. We have shown that we can learn more complex models and exploration policies jointly. The works are related, but with many differences, and we feel it will be promising to explore this connection further. Thank you for pointing out this connection.\n\n\"But in general the citations to relevant robotic manipulation work are pretty sparse.\"\n\nWe agree. We will address this.\n\n\"The biggest issue with the paper though is with the results.\"\n\nTo demonstrate what we are referring to as “awareness” in this paper, we do include baselines for all of our experiments that perform the same tasks using the raw sensor state instead of the hidden state. Please see also the general response and videos, which include plots and comparisons for reference.\n\nIn the pose-prediction experiment on the Shadow hand, our baseline model (in Figure 5) is called “sensors” and tries to predict the pose of the object from only sensory information (without the hidden state). This baseline also does not surpass the ~15 degrees of error, indicating that the sensor readings only contain coarse-grained information about the object and that anything better than ~15 degrees is not possible with the current setup. Our approach outperforms this baseline in most cases and shows that the hidden state of the predictive models have learned a useful representation. 
We will revise our paper and make the baselines more clearly labeled.\n\n\"I found the somewhat lofty claims in the introduction a bit off-putting.\"\n\nWe agree. We've modified the intro to remove some of the more lofty claims.", "\"Although the premise of the paper is interesting, its execution is not ideal. ... it's not clear whether the representation is anything more than trivial accumulation of sensory input\"\n\nWe hope the general reply and videos help with this.\n\n\"what exactly is the \"generic cost\" C in section 7.1?\"\n\"why are both f and z parameters of C? f is directly a function of z. Given that the form of C is not explained, seems like f could be directly computing as part of C.\"\n\nIn the MPC presentation in section 7.1 we intentionally left the objective generically as C and agree that our current presentation is confusing. For example, to maximize the entropy in the nominal trajectory for exploration, the objective C is the (negated) Renyi entropy of the model predictions, which directly uses the GMM PDF f.\n\nYou are correct that the PDF f and hidden state z need not necessarily be parameters of C. However in some cases it adds notational convenience; the PDF f is useful for when the goal of the objective is to reach a desired state (such as the maximum entropy, or to maximize the fingertip pressure), and the hidden state z is useful for when the goal of the objective extracts other information from the hidden state (such as the type of object, or the height of the object)\n\nWe have clarified this portion in the paper.\n\n\"what is the relation between actions a in section 7.1 and u in section 4?\"\n\nMinimizing the objective in 7.1 over a obtains u. We agree this is probably not the best notation and we've updated the description in 7.1 to use u and u^star instead.\n\n\"How is the minimization problem of u_{1:T} solved?\"\n\nThis problem is solved using Adam with warm starts from the previous step, see our response to R1 for a full description.\n\n\"Are the authors sure that they perform gathering information through \"maximizing uncertainty\"...\"\n\nWe perform information gathering through maximizing (and not minimizing) uncertainty. To understand why this is the case it is important to be clear about exactly which uncertainty is being maximized, and what we are trying to gather information about.\n\nIt is true that in some settings (e.g. A Bayesian exploration-exploitation approach for optimal online sensing and planning with a visually guided mobile robot by R Martinez-Cantin, N de Freitas, E Brochu, J Castellanos, and A Doucet in Autonomous Robots 27 (2), 93-103, and the many references therein) one might choose to minimize uncertainty of the internal belief state, so as to gather information. This also arises naturally in Bayesian experimental design (eg the highly cited work of K Chaloner). \n\nHowever, here we are concerned with the uncertainty in the predictions of the hand signals (touch sensors, vestibular info, and proprioception). By seeking to maximize this uncertainty, the hand is driven to try behaviours where it is highly uncertain about what sensations it will experience (eg what its pressure sensors will feel). \n\nTo further emphasize this consistency, we show the outcome of instead acting to minimize the predicted entropy in Video 8 (Figure 3(c) in the paper). Under this objective the hand pulls itself away from the block. 
\n\n\"When the authors state that \"The learner trains the model by maximum likelihood\" in section 7.1, do they refer to the prediction model or the control model?\"\n\nThe only model in this paper is the predictive model. Control is implemented as planning in the predictive model using MPC. The learner trains the predictive model, and the actors plan trajectories using an objective that depends on the predictive model, but there is no separate learned control model involved.\n\n\"What is the method for classifying and/or regressing given the features and internal representation?\"\n\nThe diagnostic models are MLPs that look at a single time step of the predictive model state. Your supposition is correct that if we use a recurrent net as the diagnostic model then there is no advantage to including the predictive model features. It would be quite surprising if this were not the case.\n\nThe purpose of the diagnostic model ties back to information partitioning. The role of the diagnostic is to show that although we do not have an explicit representation of any external state in the predictive model, we nonetheless obtain such a representation implicitly (since if this were not the case the diagnostic task would fail).\n\n\"Simplistic experimental task:...\"\n\nThe motivation behind our setup is that collecting the data to train the \"direct\" model can be difficult or impossible in a real life setting. The most complex part of the apparatus for the shadow hand experiment (apart from the hand itself) is the mechanism for measuring the angle of the object being grasped.\n\n", "\"- Sec 2: you may consider to cite the work on maximising predictive information as intrinsic motivation:\nG. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.\"\n\nThank you for the relevant reference.\n\n\"- Sec 4 par 3: .... intentionally not autoregressive: w.r.t. to what? to the observations? \"\n\nIntentionally not autoregressive with respect to time. We have clarified this in the paper.\n\n\"- Sec 7.1: how is the optimization for the MPC performed? Which algorithm did you use and long does the optimization take?\"\n\nWe take the very naive approach of optimizing the MPC objective for a fixed number of steps by differentiating the cost function with respect to the actions and taking (projected) gradient steps with Adam. We project the actions into the constraint set at each step. We initialize the nominal action sequence with a burn in of 1000 Adam steps (which is fairly time consuming) and we take 10 additional optimization steps after executing each action and observing a response, warm started from the previous solution. This is acceptably fast for experimentation (2-5 steps/second after burn in) but is substantially slower than real time, primarily due to the cost of evaluating the model.\n\n\n\" in first Eq: should f not be sampled from GMMpdf, so replace = with \\sim\"\n\nf is the Gaussian PDF defined by the parameters, not a sample from the PDF. We don't need to sample from the predictive distribution in order to compute the control objective. One of the reasons we chose the Renyi entropy as the objective is that it is easy to compute analytically for the Mixture of Gaussians predictions that we make at each step.\n" ]
[ 7, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1HhRfWRZ", "SkzIyI5zf", "iclr_2018_r1HhRfWRZ", "iclr_2018_r1HhRfWRZ", "HJxOWU5MM", "H1qYeZjfz", "iclr_2018_r1HhRfWRZ", "Skozh1abf", "ryAPav5lz", "BJb0-vDxz" ]
iclr_2018_SyzKd1bCW
Backpropagation through the Void: Optimizing control variates for black-box gradient estimation
Gradient-based optimization is the foundation of deep learning and reinforcement learning. Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-variance, unbiased gradient estimators for black-box functions of random variables, based on gradients of a learned function. These estimators can be jointly trained with model parameters or policies, and are applicable in both discrete and continuous settings. We give unbiased, adaptive analogs of state-of-the-art reinforcement learning methods such as advantage actor-critic. We also demonstrate this framework for training discrete latent-variable models.
accepted-poster-papers
This is an interesting and well-written paper introducing two unbiased gradient estimators for optimizing expectations of black box functions. LAX can handle functions of both continuous and discrete random variables, while RELAX is specialized to functions of discrete variables and can be seen as a version of the recently introduced REBAR with its concrete-relaxation-based control variate replaced by (or augmented with) a free-form function. The experimental section of the paper is adequate but not particularly strong. If Q-prop is the most similar existing RL approach, as is stated in the paper, why not include it as a baseline? It would also be good to see how RELAX performs at optimizing discrete VAEs using just the free-form control variate (instead of combining it with the REBAR control variate).
train
[ "HyFP5nE1M", "S1JPkCzlG", "S1a0antgG", "B1Gdj6qQM", "r148oh9QG", "Bk9nMTL7f", "rkSpZaImf", "SJLiWTImf", "H1Cjea8mG", "r1U0in8XG", "rkaDiETZf", "BylvIk5lG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "This paper introduces LAX/RELAX, a method to reduce the variance of the REINFORCE gradient estimator. The method builds on and is directly inspired by REBAR. Similarly to REBAR, RELAX is an unbiased estimator, and the idea is to introduce a control variate that leverages the reparameterization gradient. In contrast to REBAR, RELAX learns a free-from control variate, which allows for low-variance gradient estimates for both discrete and continuous random variables. The method is evaluated on a toy experiment, as well as the discrete VAE and reinforcement learning. It effectively reduces the variance of state-of-the-art methods (namely, REBAR and actor-critic).\n\nOverall, I enjoyed reading the paper. I think it is a neat idea that can be of interest for researchers in the field. The paper is clearly explained, and I found the experiments convincing. I have minor comments only.\n\n+ Is there a good way to initialize c_phi prior to optimization? Given that c_phi must be a proxy for f(), maybe you can take advantage of this observation to find a good initialization for phi?\n\n+ I was confused with the Bernoulli example in Appendix B. Consider the case theta=0.5. Then, b=H(z) takes value 1 if z>0, and 0 otherwise. Thus, p(z|b,theta) should assign mass zero to values z>0 when b=0, which does not seem to be the case with the proposed sampling scheme in page 11, since v*theta=0.5*v, which gives values in [0,0.5]. And similarly for the case b=1.\n\n+ Why is the method called LAX? What does it stand for?\n\n+ In Section 3.3, it is unclear to me why rho!=phi. Given that c_phi(z)=f(sigma_lambda(z))+r_rho(z), with lambda being a temperature parameter, why isn't rho renamed as phi? (the first term doesn't seem to have any parameters). In general, this section was a little bit unclear if you are not familiar with the REBAR method; consider adding more details.\n\n+ Consider adding a brief review of the REBAR estimator in the Background section for those readers who are less familiar with this approach.\n\n+ In the abstract, consider adding two of the main ideas that the estimator relies on: control variates and reparameterization gradients. This would probably be more clear than \"based on gradients of a learned function.\"\n\n+ In the first paragraph of Section 3, the sentence \"f is not differentiable or not computable\" may be misleading, because it is unclear what \"not computable\" means (one may think that it cannot be evaluated). Consider replacing with \"not analytically computable.\"\n\n+ In Section 3.3, it reads \"differentiable function of discrete random variables,\" which does not make sense.\n\n+ Before Eq. 11, it reads \"where epsilon_t does not depend on theta\". I think it should be the distribution over epsilon_t what doesn't depend on theta.\n\n+ In Section 6.1, it was unclear to me why t=.499 is a more challenging setting.\n\n+ The header of Section 6.3.1 should be removed, as Section 6.3 is short.\n\n+ In Section 6.3.1, there is a broken reference to a figure.\n\n+ Please avoid contractions (doesn't, we'll, it's, etc.)\n\n+ There were some other typos; please read carefully the paper and double-check the writing. In particular, I found some missing commas, some proper nouns that are not capitalized in Section 5, and others (e.g., \"an learned,\" \"gradient decent\").", "This paper suggests a new approach to performing gradient descent for blackbox optimization or training discrete latent variable models. 
The paper gives a very clear account of existing gradient estimators and finds a way to combine them so as to construct and optimize a differentiable surrogate function. The resulting new gradient estimator is then studied both theoretically and empirically. The empirical study shows the benefits of the new estimator for training discrete variational autoencoders and for performing deep reinforcement learning.\n\nTo me, the main strengths of the paper is the very clear account of existing gradient estimators (among other things it helped me understand obscurities of the Q-prop paper) and a nice conceptual idea. The empirical study itself is more limited and the paper suffers from a few mistakes and missing information, but to me the good points are enough to warrant publication of the paper in a good conference like ICLR.\n\nBelow are my comments for the authors.\n\n---------------------------------\nGeneral, conceptual comments:\n\nWhen reading (6), it is clear that the framework performs regression of $c_\\phi$ towards the unknown $f$ simultaneously with optimization over $c_\\phi$.\nTaking this perspective, I would be glad to see how the regression part performs with respect to standard least square regression,\ni.e. just using $||f(b)-c_\\phi(b)||^2$ as loss function. You may compare the speed of convergence of $c_\\phi$ towards $f$ using (6) and the least squared error.\nYou may also investigate the role of this regression part into the global g_LAX optimization by studying the evolution of the components of (6).\n\nRelated to the above comment, in Algo. 1, you mention \"f(.)\" as given to the algo. Actually, the algo does not know f itself, otherwise it would not be blackbox optimization. So you may mean different things. In a batch setting, you may give a batch of [x,f(x) (,cost(x)?)] points to the algo. You more probably mean here that you have an \"oracle\" that, given some x, tells you f(x) on demand. But the way you are sampling x is not specified clearly.\n\nThis becomes more striking when you move to reinforcement learning problems, which is my main interest. The RL algorithm itself is not much specified. Does it use a replay buffer (probably not)? Is it on-policy or off-policy (probably on-policy)? What about the exploration policy? I want to know more... Probably you just replace (10) with (11) in A2C, but this is not clearly specified.\n\nIn Section 4, can you explain why, in the RL case, you must introduce stochasticity to the inputs? Is this related to the exploration issue (see above)?\n\nLast sentence of conclusion: you are too allusive about the relationship between your learned control variate and the Q-function. I don't get it, and I want to know more...\n\n-----------------------------------\nLocal comments:\n\nBackpropagation through the void: I don't understand why this title. I'm not a native english speaker, I'm probably missing a reference to something, I would be glad to get it.\n\nFigure 1 right. Caption states variance, but it is log variance. Why does it oscillate so much with RELAX?\n\nBeginning of 3.1: you may state more clearly that optimizing $c_\\phi$ the way you do it will also \"minimize\" the variance, and explain better why (\"we require the gradient of the variance of our gradient estimator\"...). It took me a while to get it.\n\nIn 3.1.1 a weighting based on $\\d/\\d\\theta log p(b)$ => shouldn't you write $... 
log p(b|\\theta)$ as before?\n\nFigure 2 is mentioned in p.3, it should appear much sooner than p6.\n\nIn Figure 2, there is nothing about the REINFORCE PART. Why?\n\nIn 3.4 you alternate sums over an infinite horizon and sums over T time steps. You should stick to the T horizon case, as you mention the case T=1 later.\n\np6 Related work\n\nThe link to the work of Salimans 2017 is far from obvious, I would be glad to know more...\n\nQ-prop (Haarnoja et al.,2017): this is not the adequate reference to Q-prop, it should be (Gu et al. 2016), you have it correct later ;)\n\nFigure 3: why do you stop after so few epochs? I wondered how expensive is the computation of your estimator, but since in the RL case you go up to 50 millions (or 4 millions?), it's probably not the issue. I would be glad to see another horizontal lowest validation error for your RELAX estimator (so you need to run more epochs).\n\"ELBO\" should be explained here (it is only explained in the appendices).\n\n6.2, Table 1: Best obtained training objective: what does this mean? Should it be small or large? You need to explain better. How much is the modest improvement (rather give relative improvement in the text?)? To me, you should not defer Table 3 to an appendix (nor Table 4).\n\nFigure 4: Any idea why A2C oscillates so much on inverted pendulum? Any idea why variance starts to decrease after 500 episodes using RELAX? Isn't related to the combination of regression and optimization, as suggested above?\n\nAbout Double Inverted Pendulum, Appendix E3 mentions 50 million frames, but the figure shows 4 millions steps. Where is the truth?\n\nWhy do you give steps for the reward, and episodes for log-variance? The caption mentions \"variance (log-scale)\", but saying \"log-variance\" would be more adequate.\n\np9: the optimal control variate: what is this exactly? How do you compare a control variate over another? This may be explained in Section 2.\n\nGAE (Kimura, 2000). I'm glad you refer to former work (there is a very annoying tendency those days to refer only to very recent papers from a small set of people who do not correctly refer themselves to previous work), but you may nevertheless refer to John Schulman's paper about GAEs anyways... ;)\n\nAppendix E.1 could be reorganized, with a common hat and then E.1.1 for one layer model(s?) and E.1.2 for the two layer model(s?)\n\nA sensitivity analysis wrt to your hyper-parameters would be welcome, this is true for all empirical studies.\n\nIn E2, is the output layer linear? You just say it is not ReLU...\n\nThe networks used in E2 are very small (a standard would be 300 and 400 neurons in hidden layers). Do you have a constraint on this?\n\n\"As our control variate does not have the same interpretation as the value function of A2C, it was not directly clear how to add reward bootstrapping and other variance reduction techniques common in RL into our model. We leave the task of incorporating these and other variance reduction techniques to future work.\"\nFirst, this is important, so if this is true I would move this to the main text (not in appendix).\nBut also, it seems to me that the first sentence of E3 contradicts this, so where is the truth?\n\n{0.01,0.003,0.001} I don't believe you just tried these values. 
Most probably, you played with other values before deciding to perform grid search on these, right?\nThe same for 25 in E3.\n\nGlobally, you experimental part is rather weak, we would expect a stronger methodology, more experiments also with more difficult benchmarks (half-cheetah and the whole gym zoo ;)), more detailed analyses of the results, but to me the value of your paper is more didactical and conceptual than experimental, which I really appreciate, so I will support your paper despite these weaknesses.\n\nGood luck! :)\n\n---------------------------------------\nTypos:\n\np5\nmonte-carlo => Monte(-)Carlo (no - later...)\ntaylor => Taylor\nyou should always capitalize Section, equation, table, figure, appendix, ...\n\ngradient decent => descent (twice)\n\np11: probabalistic\n\np15 ELU(Djork-... => missing space\n\n", "The paper considers the problem of choosing the parameters of distribution to maximize an expectation over that distribution. This setting has attracted a huge interest in ML communities (that is related to learning policy in RL as well as variational inference with hidden variables). The paper provides a framework for such optimization, by interestingly combining three standard ways. \n\nGiven Tucker et al, its contribution is somehow incremental, but I think it is an interesting idea to use neural networks for control variate to handle the case where f is unknown. \n\nThe main issue of this paper seems to be in limited experimental results; they only showed three quite simple experiments (I guess they need to focus one setting; RL or VAE). Moreover, it would be good to actually show if the variation of g_hat is much smaller than other standard methods.\n \nThere is missing reference at page 8.\n\n", "Dear Leon, you make some interesting connections!\n\n1) Yes, our estimator does look a lot like a doubly-robust estimator. In fact, estimating gradients through discrete random variables seems a lot like counterfactual modeling, where we happen to know all the confounders. The motivation seems a bit different, though, in that the doubly-robust estimator is about correcting for model misspecification and removing bias, where we start with an unbiased estimator. But the overall idea is indeed similar. Also, the fact that these different weighting schemes both give unbiased estimates raises the question: what is the family of weighting schemes that we could be looking at?\n\n2) I see what you mean - the special case where c = f recovers the reparameterization trick, and the weighted estimates cancel out. One thread I'd like to think about further is, can we do even better? We actually have results on toy problems where LAX achieves lower variance than the reparameterization trick. Of course, there are sometimes many possible reparameterizations, each with different variance.\n\n3) I think the \"Generalized Reparameterization Gradient\" paper looked at questions very similar to this one. Also, one future direction we're thinking about is learning the reparameterization as well as the surrogate.\n\n\nThanks again for the interesting points.\n", "Thank you for your comment. A sentence has been added to section 7 to mention the application these works could have to our method. ", "Dear reviewer 2,\n\nThank you for your kind words and detailed feedback. Some small changes have been made to address your comments. I will address your comments in the order they were written. 
Your original comments will appear in [brackets] and my responses will follow.\n\n[+ Is there a good way to initialize c_phi prior to optimization? Given that c_phi must be a proxy for f(), maybe you can take advantage of this observation to find a good initialization for phi?]\n\nGood question. This was touched upon in section 3.3 where we use the concrete relaxation to initialize our control variate and then learn an offset parameterized by a neural network. However this approach is only applicable when f is known. We also did not experiment with different structures for the control variate beyond different neural network architectures. This is an interesting idea which we leave to further work to explore. \n\n\n[+ I was confused with the Bernoulli example in Appendix B. Consider the case theta=0.5. Then, b=H(z) takes value 1 if z>0, and 0 otherwise. Thus, p(z|b,theta) should assign mass zero to values z>0 when b=0, which does not seem to be the case with the proposed sampling scheme in page 11, since v*theta=0.5*v, which gives values in [0,0.5]. And similarly for the case b=1.]\n\nThis issue was due to a typo in Appendix B where theta was swapped for (1 - theta). This has been fixed in the new version. \n\n\n[+ Why is the method called LAX? What does it stand for?]\n\nWe first coined “RELAX” as an alternative to REBAR that learned the continuous “relax”-ation. We then developed LAX, and since it was a simpler version of relax, we chose a simpler name. We realize that these aren’t particularly descriptive names, and welcome any suggestions for naming these estimators.\n\n[+ In Section 3.3, it is unclear to me why rho!=phi. Given that c_phi(z)=f(sigma_lambda(z))+r_rho(z), with lambda being a temperature parameter, why isn't rho renamed as phi? (the first term doesn't seem to have any parameters). In general, this section was a little bit unclear if you are not familiar with the REBAR method; consider adding more details.]\n\nThe paper has been updated to specify that phi = {rho, lambda}\n\n\n[+ Consider adding a brief review of the REBAR estimator in the Background section for those readers who are less familiar with this approach.]\n\nThe paper mentions that REBAR can be viewed as a special case of the RELAX method where the concrete relaxation is used as the control variate. For brevity’s sake, a full explanation of REBAR was left out.\n\n\n[+ In the abstract, consider adding two of the main ideas that the estimator relies on: control variates and reparameterization gradients. This would probably be more clear than \"based on gradients of a learned function.\"]\n\nThe abstract has been slightly changed to mention that the method involves control variates.\n\n\n[+ In the first paragraph of Section 3, the sentence \"f is not differentiable or not computable\" may be misleading, because it is unclear what \"not computable\" means (one may think that it cannot be evaluated). Consider replacing with \"not analytically computable.\"]\n\nGood point. ”not computable” has been removed from that sentence. \n\n\n[+ In Section 3.3, it reads \"differentiable function of discrete random variables,\" which does not make sense.]\n\nBy this, it was meant that the function f is differentiable, but we may be evaluating it only on a discrete input. An example of such a function would be f(x) = x^2, where x = {0, 1}. Here f is differentiable when its domain is the real numbers but we are evaluating it restricted to {0, 1}. This wording was removed to avoid confusion.\n\n\n[+ Before Eq. 
11, it reads \"where epsilon_t does not depend on theta\". I think it should be the distribution over epsilon_t what doesn't depend on theta.]\n\nThis section has been reworded to make this more clear\n\n[+ In Section 6.1, it was unclear to me why t=.499 is a more challenging setting.]\n\nThe closer that t gets to .5 means that the values of f(0) and f(1) get closer together. This means a Monte Carlo estimator of the gradient will require more samples to converge to the correct value.\n\n\n[+ The header of Section 6.3.1 should be removed, as Section 6.3 is short.]\n\nSection 6.3.1 has been removed and consolidated into 6.3\n\n\n[+ In Section 6.3.1, there is a broken reference to a figure]\n\nGood catch. The broken reference has been fixed in section 6.3\n\n[+ Please avoid contractions (doesn't, we'll, it's, etc.)]\n\nContractions have been removed.\n\n[+ There were some other typos; please read carefully the paper and double-check the writing. In particular, I found some missing commas, some proper nouns that are not capitalized in Section 5, and others (e.g., \"an learned,\" \"gradient decent\").]\n\nThese typos have been fixed. Thank you for your attention to detail, it has improved the text considerably.\n", "[About Double Inverted Pendulum, Appendix E3 mentions 50 million frames, but the figure shows 4 millions steps. Where is the truth?]\n\nThis was due to a typo in the appendix. The figure is correct- it was run for 5 million steps, and we corrected the appendix to reflect this. \n\n\n\n[Why do you give steps for the reward, and episodes for log-variance? The caption mentions \"variance (log-scale)\", but saying \"log-variance\" would be more adequate.]\n\nThe variance was estimated at every episode since we needed to run a full episode to compute the policy gradient. After every training episode, 100 episodes were run to generate samples from our gradient estimator which were used to estimate the variance of the estimator. We run the algorithms for a fixed number of steps to be consistent with previous work. \n\n\n\n[p9: the optimal control variate: what is this exactly? How do you compare a control variate over another? This may be explained in Section 2.]\n\nThe optimal control variate is the control variate which produces a gradient estimator with the lowest possible variance. \n\n\n\n[GAE (Kimura, 2000). I'm glad you refer to former work (there is a very annoying tendency those days to refer only to very recent papers from a small set of people who do not correctly refer themselves to previous work), but you may nevertheless refer to John Schulman's paper about GAEs anyways... ;)]\n\nThanks for the pointer, this reference was added.\n\n\n[Appendix E.1 could be reorganized, with a common hat and then E.1.1 for one layer model(s?) and E.1.2 for the two layer model(s?)]\n\nThe appendix was reorganized as per your suggestions. \n\n\n\n[A sensitivity analysis wrt to your hyper-parameters would be welcome, this is true for all empirical studies.]\n\nIn general, we did not find the algorithm to be very sensitive to hyperparameters other than the learning rate. We agree that adding a sensitivity analysis would be an improvement. We have added this to the experimental details in the Appendix along with a sentence which presents the best performing hyperparameters.\n\n\n[In E2, is the output layer linear? 
You just say it is not ReLU…]\n\nThe Appendix was changed to note that the output layers were linear.\n\n\n[The networks used in E2 are very small (a standard would be 300 and 400 neurons in hidden layers). Do you have a constraint on this?]\n\nThere was no hard constraint on network size. These networks were chosen because they worked well with the baseline A2C algorithm, and is a standard choice in the literature, and the OpenAI baselines.\n\n\n[\"As our control variate does not have the same interpretation as the value function of A2C, it was not directly clear how to add reward bootstrapping and other variance reduction techniques common in RL into our model. We leave the task of incorporating these and other variance reduction techniques to future work.\"\nFirst, this is important, so if this is true I would move this to the main text (not in appendix).\nBut also, it seems to me that the first sentence of E3 contradicts this, so where is the truth?]\n\n\nWe moved this section to the main text, and clarified why we don’t use reward bootstrapping for discrete, but use it for continuous experiments. We do so with the continuous control RL tasks by structuring the control variate as C = V(s) + c(a, s), where V was trained as the value function in A2C. \n\n\n[{0.01,0.003,0.001} I don't believe you just tried these values. Most probably, you played with other values before deciding to perform grid search on these, right?\nThe same for 25 in E3.]\n\nFor all the experiments, the hyperparameter values used for grid search mentioned in the appendices (E1, E2, E3) were the only ones tried. No other values were tried because of computational constraints. These values were not chosen because they gave us better result, but because we thought they would make a comprehensive grid.\n\n\n[Globally, you experimental part is rather weak, we would expect a stronger methodology, more experiments also with more difficult benchmarks (half-cheetah and the whole gym zoo ;)), more detailed analyses of the results, but to me the value of your paper is more didactical and conceptual than experimental, which I really appreciate, so I will support your paper despite these weaknesses.]\n\nYes, we believe that further experimentation is warranted, but feel that our current results demonstrate the effectiveness of our method. Thank you for your support. \n\nTypos:\n\nWe have fixed all of the typos that you noticed. Thanks.", "Local Comments:\n\n[Backpropagation through the void: I don't understand why this title. I'm not a native english speaker, I'm probably missing a reference to something, I would be glad to get it.]\n\nThe title alludes to the scope of the method. Normally we refer to backprop “through” something. If we use our method to estimate gradients through an unknown or non-existent computation graph, we could say we are backpropagating through “the void”. However, I would guess that you are not the only one confused by the title.\n\n[Figure 1 right. Caption states variance, but it is log variance. Why does it oscillate so much with RELAX?]\n\nThe caption has been changed to log-variance. We guess that the oscillation is due to interactions that arise as the control variate is trained. Since the parameters of the distribution p are constantly changing, the control variate is consistently a step behind p, which may account for these oscillations. 
We are aware of this phenomenon but have left it for further work to analyze in more detail.\n\n[Beginning of 3.1: you may state more clearly that optimizing $c_\\phi$ the way you do it will also \"minimize\" the variance, and explain better why (\"we require the gradient of the variance of our gradient estimator\"...). It took me a while to get it.]\n\nWe believe it is clear that by minimizing a Monte Carlo estimate of the variance, we will be minimizing the variance of the estimator. The first sentence of the second paragraph in section 3.1 has been slightly changed to avoid confusion. The title of the section has also been changed to be more clear.\n\n\n\n[In 3.1.1 a weighting based on $\\d/\\d\\theta log p(b)$ => shouldn't you write $... log p(b|\\theta)$ as before? ]\n\nThe paper has been updated to include the dependence on theta in the distribution. \n\n\n[Figure 2 is mentioned in p.3, it should appear much sooner than p6.]\n\nDue to length restrictions, it was difficult to place this figure elsewhere in the paper. \n\n\n\n[In Figure 2, there is nothing about the REINFORCE PART. Why?]\n\nWe wanted to focus on comparing the learned control variates of REBAR and RELAX.\n\n\n[In 3.4 you alternate sums over an infinite horizon and sums over T time steps. You should stick to the T horizon case, as you mention the case T=1 later.]\n\nGood point, we changed our notation to use a finite horizon of T.\n\n\n[p6 Related work]\n\nCan you clarify what you mean here?\n\n\n[The link to the work of Salimans 2017 is far from obvious, I would be glad to know more…]\n\nWhen applied to RL, our algorithm is an extension of policy-gradient optimization. We felt it reasonable to refer to alternative approaches such as that of Salimans et al. to inform the reader of concurrent, but orthogonal work.\n\n\n\n[Q-prop (Haarnoja et al.,2017): this is not the adequate reference to Q-prop, it should be (Gu et al. 2016), you have it correct later ;)]\n\nThanks, we corrected this.\n\n[Figure 3: why do you stop after so few epochs? I wondered how expensive is the computation of your estimator, but since in the RL case you go up to 50 millions (or 4 millions?), it's probably not the issue. I would be glad to see another horizontal lowest validation error for your RELAX estimator (so you need to run more epochs).\n\"ELBO\" should be explained here (it is only explained in the appendices).]\n\nIn our VAE experiments, we were interested in comparing with the baseline of REBAR. For that reason, we followed their experimental setup which, runs for 2 million iterations. We added a note that ELBO is shorthand for evidence lower-bound. \n\n\n[6.2, Table 1: Best obtained training objective: what does this mean? Should it be small or large? You need to explain better. How much is the modest improvement (rather give relative improvement in the text?)? To me, you should not defer Table 3 to an appendix (nor Table 4).]\n\nGood points. We change “best objective”to “highest obtained ELBO” to indicate that higher ELBOs are better. We couldn’t move the tables up because of space constraints.\n\n[Figure 4: Any idea why A2C oscillates so much on inverted pendulum? Any idea why variance starts to decrease after 500 episodes using RELAX? Isn't related to the combination of regression and optimization, as suggested above?]\n\nRegarding the inverted pendulum, we believe this is due to the larger variance of the baseline gradient estimator. 
Regarding the variance on the double pendulum, we do not fully understand this phenomenon, thus we do not make any claims about it. \n\n(continued in next comment)\n", "Dear reviewer 1,\n\nThank you for your detailed comments and positive reception of the work. I will address your comments in the order in which you wrote them. For clarity, I have placed your comments in [brackets] and my responses will follow in plain text.\n\nGlobal Comments:\n\n[When reading (6), it is clear that the framework performs regression of $c_\\phi$ towards the unknown $f$ simultaneously with optimization over $c_\\phi$.\nTaking this perspective, I would be glad to see how the regression part performs with respect to standard least square regression,\ni.e. just using $||f(b)-c_\\phi(b)||^2$ as loss function. You may compare the speed of convergence of $c_\\phi$ towards $f$ using (6) and the least squared error.\nYou may also investigate the role of this regression part into the global g_LAX optimization by studying the evolution of the components of (6).]\n\nWe did not experiment with the L2 loss directly. I do believe this should be explored in further work as the L2 loss has less computational overhead than the Monte Carlo variance estimate that was used in this work. This was tested in the concurrently submitted “Sample-efficient Policy Optimization with Stein Control Variate“ and in that work, the Monte Carlo variance estimate was found to produce better results in some settings. We ourselves are curious about the relative performance of minimizing the L2 loss vs the variance, but will probably leave that for further work.\n\n[Related to the above comment, in Algo. 1, you mention \"f(.)\" as given to the algo. Actually, the algo does not know f itself, otherwise it would not be blackbox optimization. So you may mean different things. In a batch setting, you may give a batch of [x,f(x) (,cost(x)?)] points to the algo. You more probably mean here that you have an \"oracle\" that, given some x, tells you f(x) on demand. But the way you are sampling x is not specified clearly.] \n\nYou are correct in your understanding the algorithm. For clarity of notation, we have decided to keep the algorithm box as it is.\n\n[This becomes more striking when you move to reinforcement learning problems, which is my main interest. The RL algorithm itself is not much specified. Does it use a replay buffer (probably not)? Is it on-policy or off-policy (probably on-policy)? What about the exploration policy? I want to know more... Probably you just replace (10) with (11) in A2C, but this is not clearly specified.]\n\nWe do not use a reply-buffer, our method is on-policy. We simply replace (10) with (11) as you mentioned. Since we mention the similarities between our algorithm and A2C, we have decided to not elaborate further. Exploration is encouraged due to entropy regularization which is commonly used in policy-gradient methods. A note has been added to the experimental details section in the Appendix to explain this. \n\n[In Section 4, can you explain why, in the RL case, you must introduce stochasticity to the inputs? Is this related to the exploration issue (see above)?]\n\nWe were trying to explain that none of these methods can be used without modification to optimize parameters of a deterministic discrete function, since there would be no exploration. In the RL case, stochasticity is introduced by using a stochastic policy and exploration is encouraged due to entropy regularization. 
For brevity and clarity, this paragraph has been removed.\n\n[Last sentence of conclusion: you are too allusive about the relationship between your learned control variate and the Q-function. I don't get it, and I want to know more…]\n\nHere we are simply mentioning the potential of training the control variate using off policy data as is done in the Q-prop paper. We believe further theoretical work must be done to better understand the relationship between the optimal control variate and the optimal Q-function, so we do not make any claims about this relationship. \n\n(continued in next comment)\n", "Dear Reviewer 3,\n\nThank you for your overall positive review of the work. To respond to your specific criticisms:\n\n1) “Given Tucker et al, its contribution is somehow incremental”. We did build closely on Tucker et. al., but we generalized their method, and expanded its scope substantially. REBAR was only applicable to known computation graphs with discrete random variables, making it inapplicable to reinforcement learning or continuous random variables.\n\n2) “ they only showed three quite simple experiments (I guess they need to focus one setting; RL or VAE)”. Our first experiment was deliberately simple, so we could visualize the learned control variate. Regarding our VAE experiments, achieving state of the art on discrete variational autoencoders may constitute a “simple” experiment, but it’s not clear that this is a problem. We included the RL experiments to demonstrate the breadth of problems to which our method can be applied, since this is one of the main advantages of RELAX over REBAR.\n\n3) “it would be good to actually show if the variation of g_hat is much smaller than other standard methods”. Figures 1 and 4 both have plots labeled ‘log variance’, showing exactly this.\n", "Other recent work on generalizing reparameterization gradients seems to be missing:\nRuiz et al., The Generalized Reparameterization Gradient, NIPS, 2016.\nNaesseth et al., Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms, AISTATS, 2017.\n", "First of all, this is a very nicely written paper on an important topic. \n\nI only write this comment because I would like to hear the comments of the author(s) about the following remarks. \n\n 1) To me equation (7) expresses the doubly robust estimation of the gradient in question. This is a bit different from the usual robust estimation because the importance sampling weights, dlog(P(b,theta))/dtheta are signed, but this remains fundamentally similar.\n\n 2) One very interesting aspect of the paper might therefore be a clarification of the power of the reparametrization trick. It now looks like doubly robust estimation of the gradient using the knowledge that the predictor function is exact (that makes it simply robust in fact ;-) When it is not exact, one needs to correct with a pair of weighted estimates.\n\n 3) How much of the power of the reparametrization trick comes from the fact that the reparametrization is assumed exact, as opposed to needing to be corrected by a difference of weighted estimates? \n\nMany thanks." ]
[ 8, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyzKd1bCW", "iclr_2018_SyzKd1bCW", "iclr_2018_SyzKd1bCW", "BylvIk5lG", "rkaDiETZf", "HyFP5nE1M", "SJLiWTImf", "H1Cjea8mG", "S1JPkCzlG", "S1a0antgG", "iclr_2018_SyzKd1bCW", "iclr_2018_SyzKd1bCW" ]
iclr_2018_rylSzl-R-
On Unifying Deep Generative Models
Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of the classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables the transfer of techniques across research lines in a principled way. For example, we apply the importance weighting method from the VAE literature for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show the generality and effectiveness of the transferred techniques.
accepted-poster-papers
This is a thought-provoking paper that places GANs and VAEs in a single framework and, motivated by this perspective, proposes several novel extensions to them. The reviewers made several good suggestions for improving the paper and the authors are expected to make the revisions they promised. The current title of the paper is too general and should be changed to something more directly descriptive of the contents.
train
[ "BkONJetlM", "SJAtVYteG", "SJIHn0tlz", "BJm4dObMz", "SJOCPubMz", "rknHPu-zM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Update 1/11/18:\n\nI'm happy with the comments from the authors. I think the explanation of non-saturating vs saturating objective is nice, and I've increased the score.\n\nNote though: I absolutely expect a revision at camera-ready if the paper gets accepted (we did not get one).\n\nOriginal review:\nThe paper is overall a good contribution. The motivation / insights are interesting, the theory is correct, and the experiments support their claims.\n\nI’m not sure I agree that this is “unifying” GANs and VAEs, rather it places them within the same graphical model perspective. This is very interesting and a valuable way of looking at things, but I don’t see this as reshaping how we think of or use GANs. Maybe a little less hype, a little more connection to other perspectives would be best. In particular, I’d hope the authors would talk a little more about f-GAN, as the variational lower-bound shown in this work is definitely related, though this work uniquely connects the GAN lower bound with VAE by introducing the intractable “posterior”, q(x | y).\n\nDetailed comments:\nP1: I see f-GAN as helping link adversarial learning with traditional likelihood-based methods, notably as a dual-formulation of the same problem. It seems like there should be some mention of this.\n\nP2:\nwhat does this mean: “generated samples from the generative model are not leveraged for model learning”. The wording is maybe a little confusing.\n\nP5:\nSo here I think the connection to f-GAN is even clearer, but it isn’t stated explicitly in the paper: the discriminator defines a lower-bound for a divergence (in this case, the JSD), so it’s natural that there is an alternate formulation in terms of the posterior (as it is called in this work). As f-GAN is fairly well-known, not making this connection here I think isolates this work in a critical way that makes it seem that similar observations haven’t been made.\n\nP6:\n\"which blocks out fake samples from contributing to learning”: this is an interesting way of thinking about this. One potential issue with VAEs / other MLE-based methods (such as teacher-forcing) is that it requires the model to stay “close” to the real data, while GANs do not have such a restriction. Would you care to comment on this?\n\nP8:\nI think both the Hjelm (BGAN) and Che (MaliGAN) are using these weights to address credit assignment with discrete data, but BGAN doesn’t use a MLE generator, as is claimed in this work. \n\nGeneral experimental comments:\nGenerally it looks like IWGAN and AA-VAE do as is claimed: IWGANs have better mode coverage (higher inception scores), while AA-VAEs have better likelihoods given that we’re using the generated samples as well as real data. This last one is a nice result, as it’s a general issue with RNNs (teacher forcing) and is why we need things like scheduled sampling to train on the free-running phase. Do you have any comments on this?\n\nIt would have been nice to show that this works on harder datasets (CelebA, LSUN, ImageNet).", "The authors develops a framework interpreting GAN algorithms as performing a form of variational inference on a generative model reconstructing an indicator variable of whether a sample is from the true of generative data distributions. Starting from the ‘non-saturated’ GAN loss the key result (lemma 1) shows that GANs minimizes the KL divergence between the generator(inference) distribution and a posterior distribution implicitly defined by the discriminator. 
I found the paper IWGAN and especially the AAVAE experiments quite interesting. However the paper is also very dense and quite hard to follow at times - In general I think the paper would benefit from moving some content (like the wake-sleep part of the paper) to the appendix and concentrating more on the key results and a few more experiments as detailed in the comments / questions below.\n\nQ1) What would happen if the KL-divergence minimizing loss proposed by Huszar (see e.g http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/) was used instead of the “non-saturated” GAN loss - would the residial JSD terms in Lemma 1 cancel out then?\n\nQ2) In Lemma 1 the negative JSD term looks a bit nasty to me e.g. in addition to KL divergence the GAN loss also maximises the JSD between the data and generative distributions. This JSD term acts in a somewhat opposite direction of the KL-divergence that we are interested in minimizing. Can the authors provide some more detailed comments / analysis on these two somewhat opposed terms - I find this quite important to include given the opposed direction of the JSD versus the KL term and that the JSD is ignored in e.g. section 4.1? secondly did the authors do any experiments on the the relative sizes of these two terms? I imagine it would be possible to perform some low-dimensional toy experiments where both terms were tractable to compute numerically?\n\nQ3) I think the paper could benefit from some intuition / discussion of the posterior term q^r(x|y) in lemma 1 composed on the prior p_theta0(x) and discriminator q^r(y|x). The terms drops out nicely in math however i had a bit of a hard time wrapping my head around what minimizing the KL-divergence between this term and the inference distribution p(xIy). I know this is a kind of open ended question but i think it would greatly aid the reader in understanding the paper if more ‘guidance’ is provided instead of just writing “..by definition this is the posterior.’\n\nQ4) In a similar vein to the above. It would be nice with some more discussion / definitions of the terms in Lemma 2. e.g what does “Here most of the components have exact correspondences (and the same definitions) in GANs and InfoGAN (see Table 1)” mean? \n\nQ5) The authors state that there is ‘strong connections’ between VAEs and GANs. I agree that both (after some assumptions) both minimize a KL-divergence (table 1) however to me it is not obvious how strong this relation is. Could the authors provide some discussion / thoughts on this topic?\n\nOverall i like this work but also feel that some aspects could be improved: My main concern is that a lot of the analysis hinges on the JSD term being insignificant, but the authors to my knowledge does but provide any prof / indications that this is actually true. Secondly I think the paper would greatly benefit from concentration on fewer topics (e.g. maybe drop the RW topic as it feels a bit like an appendix) and instead provide a more throughout discussion of the theory (lemma 1 + lemma 2) as well as some more experiments wrt JSD term.\n", "The paper provides a symmetric modeling perspective (\"generation\" and \"inference\" are just different naming, the underlying techniques can be exchanged) to unify existing deep generative models, particularly VAEs and GANs. 
Someone had to formally do this, and the paper did a good job in describing the new view (by borrowing the notations from adversarial domain adaptation), and demonstrating its benefits (by exchanging the techniques in different research lines). The connection to weak-sleep algorithm is also interesting. Overall this is a good paper and I have little to add to it.\n\nOne of the major conclusions is GANs and VAEs minimize the KL Divergence in opposite directions, thus are exposed to different issues, overspreading or missing modes. This has been noted and alleviated in [1].\n\nIs it possible to revise the title of the paper to specifically reflect the proposed idea? Other papers have attempted to unify GAN and VAE from different perspectives [1,2].\n\n[1] Symmetric variational autoencoder and connections to adversarial learning. arXiv:1709.01846\n[2] Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv:1701.04722, 2017.\n\n\nMinor: In Fig. 1, consider to make “(d)” bold to be consistent with other terms. ", "By “unifying” we meant this work proposes a unified statistical view of VAEs and GANs. We will revise the title to avoid confusion. \n\nP1 & P5:\nWe have discussed f-GAN (Nowozin et al., 2016) in the related work section. f-GAN and most previous works that analyze GANs are based on the `saturated` objective of GANs, i.e., min_G log(1-D(G(z))). In particular, f-GAN and a few other works showed that with this objective, GANs involve *minimizing a variational lower bound* of some f-divergence (Nowozin et al., 2016) or mutual information between x and y (the real/fake indicator) (Huszar et al., 2016; Li et al., 2016). \n\nIn contrast, our work is based on the `non-saturated` objective of the original GAN, i.e., max_G log D(G(z)). The two objectives have the same fixed point solution, but the `non-saturated` one avoids the vanishing gradient issue of the `saturated` one, and is more widely-used in practice. However, very few formal analysis (e.g., Arjovsky & Bottou, 2017) has been done on the `non-saturated` objective. Our results in Lemma.1 is a generalization of the previous theorem in (Arjovsky & Bottou, 2017) by allowing non-optimal discriminators in the analysis (please see the last paragraph of P5 for more details). \n\nWe will make these clearer in the revised version.\n\nP2 & P6:\nThe sentence “generated samples from … learning” in P2 meant the same as “blocks out fake samples from contributing to learning” in P6 as the reviewer noted. We will polish the statement. Thanks for pointing it out.\n\nBy the analysis in the paper and common empirical observations, GANs (which involve min KL(Q||P) ) suffer from mode missing issue. That is, the learned generation distribution tends to concentrate to few large modes of the real data distribution. In contrast, VAEs and other MLE-based methods (which involve min KL(P||Q) ) suffer from the issue of covering all data modes as well as small-density regions in-between. In this sense, GANs in practice are more “restricted” to stay close to data modes, and generate samples that are generally less diverse and more plausible.\n\nP8:\nBGAN for discrete data rephrases generator G as conditional distribution g(x|z), and evaluates the explicit conditional likelihood g(x|z) for training. BGAN for continuous data does not have such parameterization. We will update the statements to fix the issue. 
Thanks for pointing this out.\n\nExperiments:\nIt can be very interesting to apply the techniques in AA-VAE to augment the MLE training of RNNs, which has not been explored in this paper. A related line of research is adversarial training of RNNs which applies a discriminator on the RNN samples. To our knowledge, such approaches suffer from optimization difficulty due to, e.g., the discrete nature of samples (e.g., text samples). In contrast, AA-VAE avoids the issue as generated samples are used in the same way as real data examples by maximizing the “likelihood” of good samples selected by the discriminator. We are happy to explore more in this direction in the future.\n\nThis paper focuses mainly on establishing formal connections between GANs, VAEs, and other deep generative models through new formulations of them. Technique transfer between research lines, e.g., IWGAN and AA-VAE, serves to showcase the benefit of the unified statistical view. We will validate the new techniques on harder datasets as suggested, and show the results soon.", "Q1) Huszar proposed to optimize a loss that combines the `saturated` and `non-saturated` losses. We have cited and briefly discussed this work (Sønderby et al., 2017) in the related work section. As with most previous studies, the analysis of the combined loss in the blog and (Sønderby et al., 2017) is based on the assumption that the discriminator is near optimal. With this assumption, Lemma.1 is simplified to Eq.(8), and the residual JSD term cancels out with the combined loss. However, as discussed in the paper, Lemma.1 in general case (Eq.6) does not rely on the optimality assumptions of the discriminator which are usually unwarranted in practice. Thus, Lemma.1 can be seen as a generalization of previous results (Sønderby et al., 2017; Arjovsky and Bottou, 2017) to account for broader situations. Also, the JSD term does not cancel out even with the combined loss.\n\nQ2) We will update the paper to add more analysis of the JSD term. In particular, from the derivation of Lemma.1 in section C of the supplements, we can show the relative sizes of the KL and JSD term follow: JSD <= KL. Specifically, if we denote the RHS of Eq.(20) as -E_p(y) [ KL - KL_1 ], then from Eq.(20) we have KL_1 <= KL. From Eqs.(22) and (23), we further have JSD <= KL_1. We therefore have JSD <= KL_1 <= KL. That is, the JSD is upper-bounded by the KL, and intuitively, if the KL is sufficiently minimized, the magnitude of JSD will also decrease.\n\nNote that we did not mean that the JSD term is negligible. Indeed, most conclusions in the paper have taken into account the JSD. For example, JSD is *symmetric* (rather than insignificant) and will not affect the mode missing behavior of GANs endowed by the asymmetry of the KL. We have also noticed in the paper that the gradients of the JSD and the KLD cancel out when discriminator gives random guesses (e.g., when p_g=p_data). In the derivations of IWGAN in sections 4.1 and G, inspired from the JSD term in Lemma.1, we also subtracted away the 2nd term of RHS of Eq.(38) which equals the JSD when k=1. The approximation is necessary for computational tractability.\n\nQ3) Figure.2 and the second point (“Training dynamics”) under Lemma.1 give an intuitive illustration of the posterior distribution q^r(x|y). Intuitively, the posterior distribution is a mixture of p_data(x) and p_{g_\\theta0}(x) with the mixing weights induced from the discriminator distribution q^r(y|x). 
Figure.2 illustrates how minimizing the KL divergence between the inference distribution and the posterior can push p_g towards p_data, and how mode missing can happen. \n\nQ4) We will add more definitions and explanations of the terms in Lemma.2. The key terms are listed in Table.1 to allow side-by-side comparison with the corresponding terms in GANs and InfoGANs. For example, the distribution q_\\eta(z|x,y) in Lemma.2 precisely corresponds to the distribution q_\\eta(z|x,y) in InfoGAN (defined in the text of Eq.(9)). We will make these clearer in the revised version. Thanks for the suggestion.\n\nQ5) By “strong” we meant the connections reveal multiple new perspectives of GANs and VAEs as well as a broad class of their variants. Most of the discussions are presented in section 3.4. For example, the reformulation of GANs links the adversarial approach to the classic Bayesian variational inference (VI) algorithm, which further opens up the opportunities of transferring the large volume of extensions of VI to the adversarial approach for improvement (e.g., the proposed IWGAN in the paper). Section 3.4 provides four examples of such new perspectives inspired by the new formulations and connections, each of which in turn leads to either an existing research direction or new broad discussions on deep generative modeling (e.g., section A). We hope this work can inspire even more insights and discussions on, e.g., formal relations of adversarial approaches and Bayesian methods, etc. ", "Thanks for the valuable and encouraging comments.\n\n- Our work is indeed a couple of months earlier than [1], and is discussed in [1]. The work of [1] focuses on alleviating the asymmetry of the KL Divergence minimized by VAEs. It discusses the connection of the new symmetric VAE variant and GANs, but does not reveal that GANs involve minimizing a KL Divergence in an opposite direction, nor focus on the underlying connections between original VAEs and GANs as we do. \n\nIn section 3.4 point 2) and section F of the supplements, we discussed some existing work on alleviating the mode overspreading issue of VAEs by augmenting original VAE objective with GANs related objectives. The work of [1] falls into this category (though in [1] the symmetric VAE is motivated purely from VAEs perspective). We will include the discussion of [1] in the revised version.\n\n- Our work aims at developing a unified statistical view of VAEs and GANs through new formulations of them. The unified view provides a tool to analyze existing deep generative model research, and naturally enables technique transfer between research lines. This is different from other work [1,2] which combines VAE and GAN objectives to form a new model/algorithm instance. We acknowledge that a clearer, specific title can alleviate confusions. Thanks for the suggestion." ]
[ 7, 6, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rylSzl-R-", "iclr_2018_rylSzl-R-", "iclr_2018_rylSzl-R-", "BkONJetlM", "SJAtVYteG", "SJIHn0tlz" ]
iclr_2018_HyZoi-WRb
Debiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference
The importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on K samples we show the bias to be of order O(1/K). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI), a family of bias-reduced estimators reducing the bias to O(K^{-(m+1)}) for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. We also report first results on applying JVI to learning variational autoencoders. Our implementation is available at https://github.com/Microsoft/jackknife-variational-inference
accepted-poster-papers
The authors analyze the IWAE bound as an estimator of the marginal log-likelihood and show how to reduce its bias by using the jackknife. They then evaluate the effect of using the resulting estimator (JVI) for training and evaluating VAEs on MNIST. This is an interesting and well-written paper. It could be improved by including a convincing explanation of the relatively poor performance of the JVI-trained, JVI-evaluated models.
test
[ "HyUn5KYxz", "B11KATDMf", "SkuC4bqez", "S1MVzM5ez", "ryjOLrWzG", "B1xfLS-zz", "H1gTrB-GG", "ry-rrBWMz", "H164Zf9eG", "B1xZQ0DCZ", "rkSCR0HCW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "[After author feedback]\nI think this is an interesting paper and recommend acceptance. My remaining main comments are described in the response to author feedback below.\n\n[Original review]\nThe authors introduce jackknife variational inference (JVI), a method for debiasing Monte Carlo objectives such as the importance weighted auto-encoder. Starting by studying the bias of the IWAE bound for approximating log-marginal likelihood, the authors propose to make use of debiasing techniques to improve the approximation. For the binarized MNIST the authors show improved approximations given the same number of samples from the auxiliary distribution q(z|x).\n\nJVI seems to be an interesting extension of, and perspective on, the IWAE bound (and other Monte Carlo objectives). Some questions and comments:\n\n* The Cremer et al. (2017) paper contains some errors when interpreting the IWAE bound as a standard ELBO with a more flexible variational approximation distribution. For example eq. (1) in their paper does not correspond to an actual distribution, it is not properly normalized. This makes the connection in their section 2.1 unclear. I would suggest citing the following paper instead for this connection and the relation to importance sampling (IS):\nNaesseth, Linderman, Ranganath, Blei, \"Variational Sequential Monte Carlo\", 2017.\n\n* Regarding the analysis of the IWAE bound the paper by Rainforth et al. (2017) mentioned in the comments seems very relevant. Also, because of the strong connection between IWAE and IS detailed in the Naesseth et al. (2017) paper it is possible to make use of a standard Taylor approximation/delta methods to derive Prop. 1 and Prop. 2, see e.g. Robert & Casella, \"Monte Carlo Statistical Methods\" or Liu's \"Monte Carlo Strategies for Scientific Computing\".\n\n* It could be worth mentioning that the JVI objective function is now no longer (I think?) a lower bound to the log-evidence.\n\n* Could the surprising issue (IWAE-learned, JV1-evaluated being better than JV1-learned, JV1-evaluated) in Table 1 be because of different local optima?\n\n* Also, we can easily get unbiased estimates of the evidence p(x) using IS and optimize this objective wrt to model parameters. The proposal parameters can be optimized to minimize variance, how do you think this compares to the proposed method?\n\nMinor comments:\n* p(x) -> p_\\theta(x)\n* In the last paragraph of section 1 it seems like you claim that the expressiveness of p_\\theta(x|z) is a limitation of VAE. It was a bit unclear to me what was actually a general limitation of maximum likelihood versus the approximation based on VAEs.\n* Last paragraph of section 1, \"strong bound\" -> \"tight bound\"\n* Last paragraph of section 2, citation missing for DVI", "* IWAE bound\nIn your Section 2: \"The bounds LK seem quite different from LE, but recently Cremer et al. (2017) and Naesseth et al. (2017) showed that an exact correspondence exists: any LK can be converted into the standard form LE by defining a modified distribution qIW(z|x) through an importance sampling construction.\"\n\nUsing the modified distribution qIW(z|x) (denoted by qEW(z|x) in the Cremer et al. (2017) paper) in a standard form bound (LE in your notation) leads to a tighter ELBO than the IWAE ELBO. This is first shown in Naesseth et al. (2017), Theorem 1. Cremer et al. (2017) (updated paper) includes a special case of this result in Section 5.3, but this is attributed to the first author of Naesseth et al. (2017). 
It is good to also include a citation to Cremer et al. because they focus on importance sampling.\n\n* IS \nI believe another main obstacle to optimize the IS-based cost functions I mention is that the variance of the stochastic gradients might be prohibitive, even if we do not need to subsample data. The reason I mentioned this procedure is that I think the motivation behind JVI, which is currently debiasing the IWAE log-marginal likelihood bound, could be further strengthened by including a brief discussion of this zero-bias version and why it isn't practical.", "The authors analyze the bias and variance of the IWAE bound from Burda et al. (2015), and with explicit formulas up to vanishing polynomial terms and intractable moments. This leads them to derive a jacknife approach to estimate the moments as a way to debias the IWAE for finite importance weighted samples. They apply it for training and also as an evaluation method to assess the marginal likelihood at test time.\n\nThe paper is well-written and offers an interesting combination of ideas motivated from statistical analysis. Following classical results from the debiasing literature, they show a jacknife approach has reduced bias (unknown for variance). In practice, this involves an enumerated subset of calculations leading to a linear cost with respect to the number of samples which I'm inclined to agree is not too expensive.\n\nThe experiments are unfortunately limited to binarized MNIST. Also, all benchmarks measure lower bound estimates with respect to importance samples, when it's more accurate to measure with respect to runtime. This would be far more convincing as a way to explain how that constant to the linear-time affects computation in practice. The same would be useful to compare the estimate of the marginal likelihood over training runtime. Also, I wasn't sure if the JVI estimator still produced a lower bound to make the comparisons. It would be useful if the authors could clarify these details.", "This paper provides an interesting analysis of the importance sampled estimate of the LL bound and proposes to use Jackknife to correct for the bias. The experiments show that the proposed method works for model evaluation and that computing the correction is archivable at a reasonable computational cost. It also contains an insightful analysis.\n\n", "Thank you for your review.\n\nSmall effective sample size:\nIndeed the effective sample size (ESS) for IWAE can be very small because the log-weight distribution has high variance. The intuition that this would lead to very similar gradients is wrong however: consider JVI-1, which is a weighted average of leave-one-out IWAE objectives. If the ESS is close to one, then one of the leave-one-out IWAE objectives will not contain the dominant sample but be the IWAE on the remaining samples. The JVI gradients are weighted averages of IWAE gradients and will effectively downweight dominant samples. JVI-2 and higher-order variants have an even stronger effect because they leave out more than one sample.\n\nShared scale in Figure 3:\nWe agree this could be useful for comparing results across training objectives; the main purpose of Figure 3 is to demonstrate that within any regime higher order JVI estimates reduce bias and because the scales are quite a bit different between training objectives we opted to use space efficiently to that end.\n\nReporting bounds versus debiased LL estimates:\nWe do not have a comprehensive answer to this point and both views have merit:\n1. 
Reporting bounds across models provides an estimate of the model performance that is conservative/safe against deficiencies in tuning the inference procedure.\n2. Reporting more accurate LL estimates has the advantage of being a more accurate assessment of the model performance free of model-specific biases; for example, an ELBO may be bad in case the encoder is bad, despite the\ngenerative model being of good quality. Also, in many cases the LL directly transfers into a natural metrics such as bits-per-pixel needed for compression.\n", "Thank you for your review.\n\nIs JVI a lower bound?:\nNo, it is not. We added a clarification to the beginning of the JVI section.\n\nEvaluation with respect to runtime:\nFigure 2 shows that on a GPU, in most cases, we observe linear scaling behaviour between the number K of samples and runtime. Therefore, any of our experiments with respect to K can be seen as also holding with respect to runtime.\nA practical issue is that runtime is more difficult to consistently assess; we did this for Figure 2 on a single-user GPU workstation, but for the other experiments, in our multi-user GPU cluster system we cannot ensure consistent timings over many training runs so instead we report the number of samples.\n", "Thank you for your comments.\n\nRegarding the errors in Cremer et al. and the connection to Naesseth et al.:\nin their latest version, https://arxiv.org/pdf/1704.02916.pdf, Section 5.2 the expected q distribution is proven to be properly normalized.\nWe will add the Naesseth et al. paper as additional reference, however, the Cremer et al. work directly considers the IWAE, whereas Naesseth considers sequential Monte Carlo, so we believe the Cremer et al. paper to be the more\nappropriate reference.\n\nRelation to Rainforth et al.:\nwe were not aware of this work and indeed their analysis is more general and includes our Proposition 1 and 2 as special cases. We point this out in our revised version.\nRegarding the delta method, our proposition 1/2 are derived using the delta method for moments, as we write in the proofs to Proposition 1 and 2 in appendix A. As reference we give Christopher Small's specialized book on the topic of asymptotic expansions in statistics (which we can recommend highly).\n\nJVI no longer a bound:\ngood point, we now make this a lot clearer in Section 5 of the revised paper.\n\nJVI-trained, JVI-evaluated worse due to local minima:\nWe have multiple hypotheses why this could be the case.\n1. In line with a hypothesis that was recently put forth in another paper by Rainforth et al., http://bayesiandeeplearning.org/2017/papers/55.pdf, which is that the encoder network becomes more challenging to learn. (You can see this by considering a perfect log-marginal likelihood objective; in that case the\nencoder gradient would vanish completely.) To test this hypothesis we have investigated using two training objectives, where we use a regular ELBO for the encoder and a JVI or IWAE objective for the decoder. This does lead to better encoders but initial results are not conclusively showing a benefit of JVI.\n2. JVI estimators are no longer bounds and perhaps the optimization moves the decoder into parameters which systematically amplify positive bias. 
We do not know a simple way to test this hypothesis.\n\nImportance sampling:\nusing an importance-sampling evidence approximation and controlling the variance (or moment-matching) of this approximation is an interesting idea and in fact pursuing this exact idea led to the current paper.\n The connection is as follows: for training we need log-marginal likelihoods to decompose additively over independent instances, and just as IWAE is biased, so will be importance sampling estimates (and annealed importance sampling, AIS, estimates). For tractable sample sizes the bias is quite large, e.g. 2--4 nats for MNIST. This made us consider debiasing corrections for IS objectives and led us to consider debiasing the IWAE.\n Pursuing the IS route would be interesting for future work; the exact variance-optimal proposal is the true posterior p(z|x), but a sample version of minizing the variance can be used, see e.g. (Rubinstein and Kroese, \"Simulation and the Monte Carlo Method\", second edition, Section 5.6.2). Alternatively one can update the encoder using the regular ELBO criterion as we tried recently for our JVI objectives.\n\n\nExpressiveness of p_\\theta(x|z):\nthis is a general point for any latent variable model that we need to make sure that the observation model is expressive enough to model the statistics of the data. It is not specific to VAEs.\n", "We now read the Rainforth et al. paper and agree with your assessment.\nWe added an appropriate discussion before our analysis in Section 3 of the\nrevised submission.\n", "Hello, \n\nInteresting analysis. But I’m not particularly surprised that JVI during training does not result in better models compared to IWAE. We often observe very small effective sampling sizes when training big models with only few of the normalized importance weights is close to 1. It seems JVI would result in very similar gradients under these circumstances. I think it could strengthen the paper if this would be investigated and discussed.\n\nWouldn’t a common scale for the LL plots in Figure 3 make it easier to read? \n\nDo you think it would be preferable if the community continued to report biased bounds instead of JVI estimated LLs? This provides a natural partial protection against overestimating due to low effective sampling sizes, doesn’t it? ", "Thank you for pointing to this related work, it seems clearly relevant; we will read it and will add a citation and potentially a more detailed discussion of the relation to the next version of our submission.", "Hey\n\nI just wanted to draw your attention to the recent preprint https://arxiv.org/pdf/1709.06181.pdf which includes a very similar set of results to your analysis of the IWAE bound in section 3. They consider convergence bounds for a more general class of problems by using bias-variance decompositions and consider the IWAE (in section 6.5) as a particular example which leads to a result which is effectively equivalent to your Propositions 1 and 2 up to a constant factor. To see this, note that your result in corresponds to the case of N=1, M=K, and that the C^2_0 ς^4_1 / 4M^2 + O(1/M^3) terms in their bound constitute the biased squared with the variance comprising of all the other terms in their bound (see proof of Theorem 3). 
They thus show the same key high-level result that the bias and variance are both O(1/K).\n\nGiven how recent this related work is and that it is only a preprint with a predominantly different focus, I don't think this detracts too much from your current submission, but you may wish to revise the paper to acknowledge that this very similar result was previously independently derived and to highlight the differences of your results from theirs." ]
[ 7, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyZoi-WRb", "H1gTrB-GG", "iclr_2018_HyZoi-WRb", "iclr_2018_HyZoi-WRb", "H164Zf9eG", "SkuC4bqez", "HyUn5KYxz", "rkSCR0HCW", "iclr_2018_HyZoi-WRb", "rkSCR0HCW", "iclr_2018_HyZoi-WRb" ]
iclr_2018_rkrC3GbRW
Learning a Generative Model for Validity in Complex Discrete Structures
Deep generative models have been successfully used to learn representations for high-dimensional discrete spaces by representing discrete objects as sequences and employing powerful sequence-based deep models. Unfortunately, these sequence-based models often produce invalid sequences: sequences which do not represent any underlying discrete structure; invalid sequences hinder the utility of such models. As a step towards solving this problem, we propose to learn a deep recurrent validator model, which can estimate whether a partial sequence can function as the beginning of a full, valid sequence. This validator provides insight as to how individual sequence elements influence the validity of the overall sequence, and can be used to constrain sequence based models to generate valid sequences — and thus faithfully model discrete objects. Our approach is inspired by reinforcement learning, where an oracle which can evaluate validity of complete sequences provides a sparse reward signal. We demonstrate its effectiveness as a generative model of Python 3 source code for mathematical expressions, and in improving the ability of a variational autoencoder trained on SMILES strings to decode valid molecular structures.
accepted-poster-papers
Viewing the problem of determining the validity of high-dimensional discrete sequences as a sequential decision problem, the authors propose learning a Q function that indicates whether the current sequence prefix can lead to a valid sequence. The paper is fairly well written and contains several interesting ideas. The experimental results appear promising but would be considerably more informative if more baselines were included. In particular, it would be good to compare the proposed approach (both conceptually and empirically) to learning a generative model of sequences. Also, given that your method is based on learning a Q function, you need to explain its exact relationship to classic Q-learning, which would also make for a good baseline.
val
[ "SJzxBpKeM", "r1bjT3VgM", "B1odDD8gM", "B1frSrpXf", "BkYQBH67f", "SyvGSB6Xz", "SJHWHrT7z", "SypyHBT7z", "HJ4T4Spmz", "H1HcVrTmM", "H1iONBaXG", "Hy_SEST7G", "S1C74HpXf", "H1U0mSp7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "SUMMARY:\nThis work is about learning the validity of a sequences in specific application domains like SMILES strings for chemical compounds. In particular, the main emphasis is on predicting if a prefix sequence could possibly be extended to a complete valid sequence. In other words, one tries to predict if there exists a valid suffix sequence, and based on these predictions, the goal is to train a generative model that always produces valid sequences. In the proposed reinforcement learning setting, a neural network models the probability that a certain action (adding a symbol) will result in a valid full sequence. For training the network, a large set of (validity-)labelled sequences would be needed. To overcome this problem, the authors introduce an active learning strategy, where the information gain is re-expressed as the conditional mutual information between the the label y and the network weights w, and this mutual information is maximized in a greedy sequential manner. \nEVALUATION:\nCLARITY & NOVELTY: In principle, the paper is easy to read. Unfortunately, however, for the reader is is not easy to find out what the authors consider their most relevant contribution. Every single part of the model seems to be quite standard (basically a network that predicts the probability of a valid sequence and an information-gain based active learning strategy) - so is the specific application to SMILES strings what makes the difference here? Or is is the specific greedy approximation to the mutual information criterion in the active learning part? Or is it the way how you augment the dataset? All these aspects might be interesting, but somehow I am missing a coherent picture.\nSIGNIFICANCE: it is not entirely clear to me if the proposed \"pruning\" strategy for the completion of prefix sequences can indeed be generally applied to sequence modelling problems, because in more general domains it might be very difficult to come up with reasonable validity estimates for prefixes that are significantly shorter than the whole sequence. I am not so familiar with SMILES strings -- but could it be that the experimental success reported here is mainly a result of the very specific structure of valid SMILES strings? But then, what can be learned for general sequence validation problems?\n \nUPDATE: Honestly, outside the scope of SMILES strings, I still have some concerns regarding reasonable validity estimates for prefixes that are significantly shorter than the whole sequence... \n\n", "The authors use a recurrent neural network to build generative models of sequences in domains where the vast majority of sequences is invalid. The basic idea, outlined in Eq. 2, is moderately straightforward: at each step, use an approximation of the Q function for subsequences of the appropriate length to pick a valid extension. There are numerous details to get right. The writing is mostly clear, and the examples are moderately convincing. I wish the paper had more detailed arguments and discussions.\n\nI question the appropriateness of Eq. 2 as a target. A correctly learned model will put positive weight on valid sequences, but it may be an arbitrarily slow way to generate diverse sequences, depending on the domain. For instance, imagine a domain of binary strings where the valid sequences are the all 1 sequence, or any sequence beginning with a 0. Half the generated sequences would be all 1's in this situation, right? 
And it's easy to construct further examples that are much worse than this?\n\nThe use of Bayesian active learning to generate the training set feels like an elegant idea. However, I wish there were more clarity about what was ad hoc and what wasn't. For instance, I think the use of dropout to get q is suspect (see for instance https://arxiv.org/abs/1711.02989), and I'd prefer a little more detail on statements like \"The nonlinearity of g(·) means that our Monte\nCarlo approximation is biased, but still consistent.\" Do we have any way of quantifying the bias? Is the statement about K=16 being reasonable a statement about bias, variance, or both?\n\nFor Python strings: \n- Should we view the fact that high values of tau give a validity of 1.0 as indicative that the domain's constraints are fairly easy to learn?\n- \"The use of a Boltzmann policy allows us to tune the temperature parameter to identify policies\nwhich hit high levels of accuracy for any learned Q-function approximation.\" This is only true to the extent the domain is sufficiently \"easy\" right? Is the argument that even in very hard domains, you might get this by just having an RNN which memorized a single valid sequence (assuming at least one could be found)?\n- What's the best explanation for *why* the active model has much higher diversity? I understand that the active model is picking examples that tell us more about the uncertainty in w, but it's not obvious to me that means higher diversity. Do we think this is a universal property of domains?\n- The highest temperature active model is exploring about half of valid sequences (modulo the non-tightness of the bound)? Have you tried gaining some insight by generating thousands of valid sequences manually and seeing which ones the model is rejecting?\n- The coverage bound is used only for for Python expressions, right? Why not just randomly sample a few thousand positives and use that to get a better estimate of coverage? Since you can sample from the true positive set, it seems that your argument from the appendix about the validation set being \"too similar to the training set\" doesn't apply?\n- It would be better to see a comparison to a strong non-NN baseline. For instance, I could easily make a PCFG over Python math expressions, and use rejection sampling to get rid of those that aren't exactly length 25, etc.?\n\nI question how easy the Python strings example is. In particular, it might be that it's quite an easy example (compared to the SMILES) example. For SMILES, it seems like the Bayesian active learning technique is not by itself sufficient to create a good model? It is interesting that in the solubility domain the active model outperforms, but it would be nice to see more discussion / explanation.\n\nMinor note: The incidence of valid strings in the Python expressions domain is (I believe) > 1/5000, although I guess 1 in 10,000 is still the right order of magnitude.\n\nIf I could score between \"marginal accept\" and \"accept\" I would. ", "Overall: Authors casted discrete structure generation as a planning task and they used Q-learning + RNNs to solve for an optimal policy to generate valid sequences. 
They used RNN for sequential state representation and Q-learning for encoding expected value of sub-actions across trajectory - constraining each step's action to valid subsequences that could reach a final sequence with positive reward (valid whole sequences).\n\nEvaluation: The approach centers around fitting a Q function with an oracle that validates sub-sequences. The Q function is supported by a sequence model for state representation. Though the approach seems novel and well crafted, the experiments and results can't inform me which part of the modeling was critical to the results, e.g. was it the (1) LSTM, (2) Q-function fitting? Are there other simpler baseline approaches to compare against the proposed method? Was RL really necessary for the planning task? The lack of a baseline approach for comparison makes it hard to judge both results on Python Expressions and SMILES. The Python table gives me a sense that the active learning training data generation approach provides competitive validity scores with increased discrete space coverage. However the SMILES data set is a little mixed for active vs passive - authors should try to shed some light on that as well.\n\nIn conclusion, the approach seems novel and seems to fit well with the RL planning framework. But the lack of baseline results makes it hard to judge the significance of the work.", "While the proposed target distribution is not uniform over $\\mathcal{X}_+$, it has the following advantages:\n\n(a) It functions as an indicator of validity, giving zero probability mass to invalid sequences.\n(b) It can be combined with generative models trained on real-world data which do not generate uniform samples. The proposed method can then be used to eliminate, at each step during the sequence generation in such models, those next actions (characters) that will lead to invalid sequences, improving the validity of the sequences generated.\n(c) It is invariant to changes in the training data distribution (active learning strategy).\n(d) It can handle padded sequences with no extra effort.\n(e) It is numerically stable -- with the output at each step being in the range [0, 1], rather than perhaps the ratio of sequences with that prefix that can lead to valid sequences, which tends to 0 with increasing sequence length and has typical scale that varies with step t.\n\nWe also considered and tested as target a distribution that would be uniform over all sequences if trained with data distributed uniformly from $\\mathcal{X}$. We found however that: (a) this only held for fixed length sequences and was not appropriate for padded sequences; (b) the requirement of uniform data from $\\mathcal{X}$ prevented us from using active learning or any already existing data; and (c) the resulting method suffered from severe numerical/optimisation issues.\n", "The paper https://arxiv.org/abs/1711.02989 refers to variational Gaussian dropout. We use Bernoulli dropout, which is a theoretically-grounded way of obtaining uncertainty estimates in neural networks. This method has already been used to obtain uncertainty estimates in Bayesian neural networks in several previous works:\n\nhttps://arxiv.org/abs/1506.02142\nhttps://arxiv.org/abs/1512.05287\nhttps://arxiv.org/abs/1703.02910\n", "We have investigated the quality of the biased Monte Carlo information gain estimator. For active learning, the bias would only matter if it affects the relative ordering of different choices. The bias here preserves ordering. 
That is, if info_gain(x_1) > info_gain(x_2) then E[ info_gain_MC(x_1)] > E[info_gain_MC(x_2)]. K=16 was a statement regarding variance – note that some variance isn’t much of an issue for us, after all we are intentionally ‘injecting’ noise at the Boltzmann sampling stage, in order to obtain diverse samples.", "We have updated and extended our SMILES experiments. We now provide comparisons of our work with a state-of-art context-free grammar based approach [Kusner et al. (2017)].\n\nFor python expressions, the active model sees a lot more valid sequences during training, and thus gets better at modelling a large range of those. The passive one doesn’t see as many examples of valid sequences, and so doesn’t learn their general properties as well. Looking at the generated data, both methods struggle with correlated changes like brackets. One possible fix is to use variable length sequences to learn the usage of brackets from shorter sequences, which can then be generalised to longer sequences.\n\nAbout the claim regarding Boltzmann sampling being used to generate high validity samples at low enough temperatures, indeed, we mean that it can just generate the same one valid sequence with no sequence diversity. Tau at 1.0 validity could perhaps give an indication of the difficulty of the problem domain.\n", "In our new SMILES experiments, instead of the active learning we propose a data augmentation strategy which generates informative negative samples. Table 4 shows that this strategy allows us to outperform previous state-of-the-art results [Kusner et al. (2017)].\n\nThe reason for not using our active learning strategy with SMILES is because, while it does learn to discover strings that are technically valid, they are not chemically-realistic. This is why our initial active learning results in the SMILES domain were mixed. By instead augmenting an existing set of realistic molecules, we are able to more efficiently explore the space of realistic SMILES strings.\n", "Our main contribution is the formulation of the problem as learning a Q function. To learn this function, however, we need informative data. For Python strings, where no positive data is available, we propose an active learning strategy to learn efficiently. For SMILES, where existing positive data is available, we propose a data augmentation strategy which allows us to obtain informative negative samples. We chose to describe our Q function with a recurrent neural network (LSTM), but any other similar model (GRU) could have been used as well.\n", "We have updated and extended our SMILES experiments. We now provide comparisons of our work with a state-of-art context-free grammar based approach [Kusner et al. (2017)]. This more clearly demonstrates the significance of our contribution.\n", "The reason for not using our active learning strategy with SMILES is because while it does learn to discover strings that are technically valid, they are not chemically-realistic. This is why our initial active learning results in the SMILES domain were mixed. By instead augmenting an existing set of realistic molecules, we are able to more efficiently explore the space of realistic SMILES strings.\n\nIn our new SMILES experiments, instead of the active learning we propose a data augmentation strategy which generates informative negative samples. Table 4 shows that this strategy allows us to outperform previous state-of-the-art results [Kusner et al. 
(2017)].\n", "The proposed approach is applicable to any sequence validity problem in which our Q function is learnable from data. We considered the problems of learning the validity of python expressions and SMILES sequences because these are relatively simple problems that are also useful in practice and challenging for existing methods.", "Solving the validity learning problem in arbitrary domains can at worst be highly intractable. We believe practical solutions are only available when the validity rules are simple enough to be learned from data. Our approach is able to learn validity models in two very different domains, Python expressions and SMILES strings, demonstrating its capacity for generalization. \n\nThe proposed active learning method is expected to be more beneficial in domains where shorter sequences demonstrate rules of validity that also apply to longer strings — that is, the nature of the governing validity rules does not change a great deal as sequences get longer.", "Our main contribution is the formulation of the problem as learning a Q function. To learn this function, however, we need informative data. For Python strings, where no positive data is available, we propose an active learning strategy to learn efficiently. For SMILES, where existing positive data is available, we propose a data augmentation strategy which allows us to obtain informative negative samples. We chose to describe our Q function with a recurrent neural network (LSTM), but any other similar model (GRU) could have been used as well.\n\nTo further demonstrate the the importance of our contribution, we’ve updated the SMILES experiments to include a comparison with previous work on validity of samples from VAE prior – a challenging domain where benchmarks exist. Here, our model sets the new state-of-the-art. \n" ]
[ 6, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkrC3GbRW", "iclr_2018_rkrC3GbRW", "iclr_2018_rkrC3GbRW", "r1bjT3VgM", "r1bjT3VgM", "r1bjT3VgM", "r1bjT3VgM", "r1bjT3VgM", "B1odDD8gM", "B1odDD8gM", "B1odDD8gM", "SJzxBpKeM", "SJzxBpKeM", "SJzxBpKeM" ]
iclr_2018_rkTS8lZAb
Boundary Seeking GANs
Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not work for discrete data. We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator. The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs). We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN) bedrooms, and Imagenet without conditioning.
accepted-poster-papers
Training GANs to generate discrete data is a hard problem. This paper introduces a principled approach to it that uses importance sampling to estimate the gradient of the generator. The quantitative results, though minimal, appear promising and the generated samples look fine. The writing is clear, if unnecessarily heavy on mathematical notation.
train
[ "BJZMpg8Nf", "rJegU3tef", "r1in0YH4M", "r12KCYBVz", "H1FAQ9Flz", "H1lP_k5ez", "Hywj4vLXM", "BJkW_vUXM", "r1u6rv87f", "H1jZBvUXM", "ryQ3XBXzz", "HypaYt_Zf", "Hk-oKKdbz", "S1-90FdWz", "HJuHqFuWG" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "I updated my review and increased my score to 7.", "Thanks for the feedback and for clarifying the 1) algorithm and the assumptions in the multivariate case 2) comparison to RL based methods 3) connection to estimating importance sampling weights using GAN discriminator.\n\nI think the paper contribution is now more clear and strengthened with additional convincing experiments and I am increasing my score to 7.\n\nThe paper would still benefit from doing the experiment with importance weights by pixel , rather then a global one as done in the paper now. I encourage the authors to still do the experiment, see if there is any benefit.\n\n\n\n==== Original Review =====\nSummary of the paper:\n\nThe paper presents a method based on importance sampling and reinforcement learning to learn discrete generators in the GAN framework. The GAN uses an f-divergence cost function for training the discriminator. The generator is trained to minimize the KL distance between the discrete generator q_{\\theta}(x|z), and the importance weight discrete real distribution estimator w(x|z)q(\\theta|z). where w(x|z) is estimated in turn using the discriminator. \nThe methodology is also extended to the continuous case. Experiments are conducted on quantized image generation, and text generation.\n\nQuality:\n\nthe paper is overall well written and supported with reasonable experiments.\n\nClarity:\n\nThe paper has a lot of typos that make sometimes the paper harder to follow:\n- page (2) Eq 3 max , min should be min, max if we want to keep working with f-divergence\n- Definition 2.1 \\mathbb{Q}_{\\theta} --> \\mathbb{Q}\n- page 5 the definition of \\tilde{w}(x^{(m})) in the normalization it is missing \\tilde{w}\n- Equation (10) \\nabla_{\\theta}\\log(x|z) --> \\nabla_{\\theta}\\log(x^{(m)}|z)\n- In algorithm 1, again missing indices in the update of theta --> \\nabla_{\\theta}\\log(x^{(m|n)}|z^{n})\n \nOriginality:\n\nThe main ingredients of the paper are well known and already used in the literature (Reinforce for discrete GAN with Disc as a reward for e.g GAN for image captioning Dai et al). The perspective from importance sampling coming from f-divergence for discrete GAN has some novelty although the foundations of this work relate also to previous work:\n- Estimating ratios using the discriminator is well known for e.g learning implicit models , Mohamed et al \n- The relation of importance sampling to reinforce is also well known\" On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient,\" Tang and Abbeel.\n\nGeneral Review:\n\n- when the generator is producing only *one* discrete distribution the theory is presented in Section 2.3. When we move to experiments, for image generation for example, we need to have a generator that produces a distribution by pixel. It would be important for 1) understanding the work 2) the reproducibility of the work to parallel algorithm 1 and have it *in the paper*, for this 'multi discrete distribution ' generation case. If we have N pixels \\log(p(x_1,...x_N|z))= \\Pi_i g_{\\theta}(x_i|z) (this should be mentioned in the paper if it is the case ), it would be instructive to comment on the assumptions on independence/conditional dependence of this model, also to state clearly how the generator is updated in this case and what are importance sampling weights. \n\n- Would it make sense in this N pixel discrete case generation to have also the discriminator produce N probabilities of real and fake as in PixelGAN in Isola et al? 
then see in this case what are the importance sampling weights this would parallel the instantaneous reward in RL?\n\n\n\n \n", "We would like to kindly remind the reviewer of our revision. The complete revision list is provided in the main thread (titled, \"Revision available\").", "We would like to kindly remind the reviewer of our revision. The complete revision list is provided in the main thread (titled, \"Revision available\").", "Thank you for the feedback, and I have read the revision.\n\nI would say the revised version has more convincing experimental results (although I'm not sure about the NLP part). The authors have also addressed my concerns on variance reduction, although it's still mysterious to me that the density ratio estimation method seems to work very well even at the begining stage.\n\nAlso developing GAN approaches for discrete variables is an important and unsolved problem.\n\nConsidering all of the above, I would like to raise the rating to 7, but lower my confidence to 3 (as I'm not an expert for NLP which is the main task for discrete generative models).\n\n==== original review ====\n\nThank you for an interesting read.\n\nMy understanding of the paper is that:\n\n1. the paper proposes a density-ratio estimator via the f-gan approach;\n2. the paper proposes a training criterion that matches the generator's distribution to a self-normalised importance sampling (SIS) estimation of the data distribution;\n3. in order to reduce the variance of the REINFORCE gradient, the paper seeks out to do matching between conditionals instead.\n\nThere are a few things that I expect to see explanations, which are not included in the current version:\n\n1. Can you justify your variance reduction technique either empirically or experimentally? Because your method requires sampling multiple x for a single given z, then in the same wall-clock time I should be able to obtain more samples for the vanilla version eq (8). How do they compare?\n\n2. Why your density ratio estimation methods work in high dimensions, even when at the beginning p and q are so different?\n\n3. It's better to include some quantitative metrics for the image and NLP experiments rather than just showing the readers images and sentences!\n\n4. Over-optimising generators is like solving a max-min problem instead. You showed your method is more robust in this case, can you explain it from the objective you use, e.g. the convex/concavity of your approach in general?\n\nTypo: eq (3) should be min max I believe?\n\nBTW I'm not an expert of NLP so I won't say anything about the quality of the NLP experiment.", "The paper introduces a new method for training GANs with discrete data. To this end, the output of the discriminator is interpreted as importance weight and REINFORCE-like updates are used to train the generator.\n\nDespite making interesting connections between different ideas in GAN training, I found the paper to be disorganized and hard to read. My main concern is the fact that the paper does not make any comparison with other methods for handling of discrete data in GANs. In particular, (Gulrajani et al.’17) show that it is possible to train Wasserstein GANs without sampling one-hot vectors for discrete variables during the training. 
Is there a reason to use REINFORCE-like updates when such a direct approach works?\n\nMinor: \ncomplex conjugate => convex conjugate ", "Dear reviewers, we have submitted a revision to our paper, \"Boundary Seeking GANs\" (we mistakenly submitted it twice, so please ignore one of the last two revisions when you build the diff pdf). We have done our best to address your comments, and hope you find the changes to be positive. Here is a rough summary of overall changes made to the paper:\n\n1) We have included samples from continuous BGAN trained on the full 1000-label 2012 Imagenet dataset, without conditioning on the label (e.g., see \"Conditional Image Synthesis With Auxiliary Classifier GANs\").\n2) We have added a section to the Appendix, 7.1, which quantitatively compares BGAN against WGAN-GP on discrete MNIST and which shows BGAN outperforming WGAN-GP on all metrics estimated, including Wasserstein distance, when comparing the *actual discrete distributions*.\n3) We have moved Theorem 2 to the Appendix, Section 7.2, as part of a larger section on the variance-reducing method. In this section is now an experiment that quantitatively validates our variance-reducing method (eq 10 vs eq 8).\n4) We have added a paragraph to the related works on the GAN likelihood ratio connection as well as boosted the paragraph on using REINFORCE.\n5) We removed the paragraph regarding the connection to classifiers in order to help clean up the paper.\n6) We have made various other minor edits following comments from the reviewers.\n\nThank you,\n - the authors\n\n[Update, Jan 5, 2018]\nWe have submitted one more revision. We had found a few more minor typos as well as made some additional small alterations to the text. Finally, there were some errors in the values reported in Section 7.1, which have been fixed. This does not change our conclusions in any way.", "Following your concerns, we have added a section to the Appendix that introduces an experiment comparing the estimated GAN-distance during training across models trained by the variance-reducing method (eq 10) and models trained by estimating beta (eqs 7, 8) using Monte Carlo. From these experiments, we are able to make the following conclusions: a) more samples from the conditional in training achieves lower GAN-divergence (2 JSD - 2log4) and b) eq 10 consistently achieves lower GAN-divergence than using MC estimate of beta from eq 8. Next, we added your insight regarding the convex/concavity of using the square error loss in the continuous case (see next to last paragraph, page 6, starting with “This objective can be seen”). \n\nNext, rather than using Inception score, which has not been used for quantitative experiments with discrete data, we trained new discriminators with higher capacities to estimate the Wasserstein distance or f-divergences between the MNIST training set and the discrete generated samples (keeping the generators fixed). Our results show that BGAN outperforms WGAN-GP consistently across *all metrics*, including the Wasserstein distance. Though we cannot say with absolute certainty, it is very likely that this is because, while WGAN-GP is able to generate samples that visually resemble the target dataset, using the softmax outputs hurts the generator’s ability to model a truly discrete distribution. 
Please refer to the Appendix, section 7.1 for details.\n\nWe attempted some experiments to illuminate why BGAN works using importance sampling, especially at the beginning of training when we know the distribution overlap will be small. However, at this time, we do not have any results that would paint a particularly clear picture to the reader. For instance, when looking at the effective sample size over the first epoch, we found this quantity fluctuates rapidly at the beginning of training, until converging to a reasonable value (about 90%). We expect that the effective sample size might be highly variable in the beginning of training as the unnormalized weights are very low, but not uniform. However, this is still speculation, and we believe that more time and care is needed to make any conclusions in the text.", "We have added citations on other works that have explored the connection between the discriminator output and likelihood ratios, adding a paragraph in the Related Works section dedicated to this (see “On estimating likelihood ratios from the discriminator\"). We have also strengthened references and comparisons to other methods for training GANs with REINFORCE (see “GAN for discrete variables\"). Finally, we have clarified that when the data distribution is multivariate (such as with pixels), we assume the observed variables are independent conditioned on Z (see the next to last paragraph of page 5, starting with “Algorithm 1” as well as Algorithm 1).", "In order to better and more definitively answer your question regarding why to use BGAN as well as address your concerns of a relative lack of comparison to other methods, we have included an additional quantitative experiment comparing BGAN to WGAN-GP with discrete MNIST, a summary of which is in the main text with the full details provided in the Appendix, section 7.1. Rather than using Inception score, which has not been used for quantitative experiments with discrete data, we trained new discriminators with higher capacities to estimate the Wasserstein distance or f-divergences between the MNIST training set and the discrete generated samples (keeping the generators fixed). Our results show that BGAN outperforms WGAN-GP consistently across *all metrics*, including the Wasserstein distance. Though we cannot say with absolute certainty, it is very likely that this is because, while WGAN-GP is able to generate samples that visually resemble the target dataset, using the softmax outputs hurts the generator’s ability to model a truly discrete distribution. Please refer to the Appendix, section 7.1 for details.", "Thank you for your feedback.\n\nJust wanted to say that for GAN papers I would like to see more insights. If you can say more about your method, e.g. the variance reduction technique (how it relates to e.g. Rao-Blackwellization) and the generator over-training, then that would be nice, and I will consider it.\n\nAlso, quantitative results will be important. I agree there's no rigorous assessment metric here (especially that I am not familiar with NLP), but for the natural image experiments, maybe inception score and other recently proposed metrics like FID can be helpful.\n\nLet me know if you have your revision ready.\n\n", "You’re welcome: we enjoyed writing it.\n1) Yes! We should have shown this, at least in the appendix. We can definitely justify the variance reduction technique empirically. 
Would you like to see an experiment where we compare the estimated likelihood ratio, using equations 8 vs 10 for training and a simple dataset like MNIST, in the main text or the appendix? We can provide results with varying sample sizes and compare against wall-clock time.\n\n2) This is a very interesting and important question; as we are effectively using importance sampling, if p and q are very different, the unnormalized importance weights will be effectively 0. So why does this work in our case? The hypothesis is that because the pixel values themselves overlap (as they are discrete), that this is enough to create some variation in the importance weights, which is magnified by the normalization. At first, this could encourage the generator to produce pixels that the discriminator likes to see, effectively bringing the two distributions closer together. This should also apply in the high-dimensional setting as even a small overlap will be magnified by the normalization process. We are currently formulating an experiment we believe will help educate the reader a little better about what is going on here, and will include it in the revision.\n\n3) We agree: and we’re quite unhappy that there currently aren’t any reasonable metrics for evaluating GANs. What sorts of metrics would you like to see? Likelihood estimates are not considered a good metric for performance of GANs (as they aren’t optimized on the likelihood directly), and we’ve found that methods for estimating likelihoods for discrete GANs using AIS (Wu et al 2016) completely failed for us (using their code). We can provide inception scores for MNIST across different f-divergences (and include WGAN + WGAN-GP for some reference). Alternatively, we can use an estimate of beta, which converges to the likelihood ratio as the discriminator improves, and provide a comparison across different f-divergences (at least for MNIST).\n\n4) Yes! This is a very nice observation: using the square distance from the decision boundary transforms a max problem over a convex function to one over a concave function. If you don’t mind, we would like to add this insight to the section addressing stability of the generator, as it is a very nice way of thinking of what is going on.\n\nAnd yes: it should be min max; thank you", "You’re absolutely right: the observations made in our paper are not completely unknown to the community. Another paper we failed to mention in our work is Tran et al “Hierarchical Implicit Models and Likelihood-Free Variational Inference”, which also uses the estimate of the likelihood ratio in learning. Our primary contribution is connecting all of these ideas and present a principled method for training GANs on discrete data. The methods that are doing reinforce are actually doing the right thing: using the discriminator output as the score corresponds to using the sigmoid of the log ratio estimate as the reward signal. However, to our knowledge, none of those works made the actual connection to the likelihood ratio estimate in their motivation / formulation. Most of them focus instead on the difficulties associated with language modeling (roll-out policies, actor-critic, MC search, etc), and our family of policy gradients are compatible with many of these.\n\nWe think it would be worth adding a paragraph to the related works sections summarizing other works that have made similar observations, like the ones you mention. 
Do you think this would be a positive addition to the paper for the revision?\n\nAs far as the movement from one to the multi discrete distributions: yes, you are correct. We are indeed assuming conditional independence in the generator output variables (conditioned on z). We should have mentioned that in the text and the algorithm and will make the changes you suggest in the revision.\n\nThe generator is trained by applying the global importance weight derived from the discriminator (we will clarify this in the text) uniformly across all of the pixels. This raises several questions about doing correct credit assignment across the pixels. Using PixelGAN or PatchGAN would be one way to extend our method to do so. Each of the patch or pixel discriminators in this case would be estimating the likelihood ratios for the respective pixel distributions, which can be used to provide variable importance weights across the whole image. One could even construct a hierarchy of these things, so that you had importance weights on both the local and the global scale. We considered these as extensions of our method, but decided that they were out of scope of this particular work.\n\nYes, it should be min max (and thank you for pointing out some other typos, we really appreciate it).", "Just a global comment to modify the individual comments below:\n\nWe recently got very good results on training on full ImageNet 2012, without conditioning on the label (e.g., not using conditional GANs or auxiliary classifier GANs), using our method. To our knowledge, these are the best ImageNet results compared to those available in the literature. However, while the images are very high-quality, diverse, and show no evidence of mode collapse, most resemble deformations of real things (though the background is usually very good). Would you consider this a positive addition to the paper?", "First, we actually *did* compare to another method, namely WGAN-GP with the softmax relaxation in the adversarial classifier experiment. We didn’t include WGAN-GP in the table because, as stated in the text, “Our efforts to train WGAN using gradient penalty failed completely”. So effectively, the error rate was 90% (chance), despite trying *very hard* to get it to work and using a wide variety of regularization hyper-parameters, learning rates, and training ratios. We even consulted the original authors, but they never confirmed whether it worked for them. So if WGAN-GP with continuous relaxation doesn’t work in this simple setting, why would you trust it with something more complicated?\n\nOther than WGAN-GP and ours, no other method has been shown to “work” (e.g., Gumbel softmax). Some other works, e.g., Li et al, use the discriminator output to do REINFORCE, which is just a scaled version of our REINFORCE version of BGAN, namely sigma log w (we will add this insight to the revision). So the REINFORCE-based estimators we compare to in the classification experiment are doing *essentially the same thing*. But none of the other works that do this establish why this is a good idea: the motivation is pure RL.\n\nWhat sort of comparison would you like to see beyond this? If it’s simple enough and would help educate your decision, we would be more than happy to add it to the revision in a timely manner. Perhaps we can provide inception scores for MNIST across different f-divergences and include WGAN + WGAN-GP for some reference?\n\nIn some sense though, the answer to your question depends on your personal values and background as a researcher. 
There are a great number of researchers who find WGAN-GP unsatisfying / problematic (including us) because it’s training a model to match the softmax probabilities with one-hot vectors (apples to oranges). Some of the problems with this are covered in some detail in the now (in)famous blog post on followup work for NLP (Adversarial generation of natural language):\nhttps://medium.com/@yoav.goldberg/an-adversarial-review-of-adversarial-generation-of-natural-language-409ac3378bd7\nTo summarize this, WGAN-GP using the softmax outputs directly is a sort of an unprincipled and “easy” way to get around the back-prop through discrete sampling issue. Some people have shown it “works” (generates some sensible text) with language, but the results are so far so bad compared to MLE that no NLP person takes them seriously. The approach brings up many questions about why it *should* even learn the true distribution, but the authors never even bother to attempt to answer these question reasonably.\n\nBesides, WGAN has biased gradients (see Cramer GAN), so there’s that.\n\nIn summary to answer your question: if you want a method with little theoretical backing but is “easy” in the sense that you can plug it into your standard deep learning library and just train using back-prop, then BGAN is not for you. However, if you want a *principled* method that provably converges but requires a little more effort to code, then BGAN provides a strong set of tools for solving your problem in a sensible way.\n\nBut we implore you: please reconsider the significance of our work w.r.t. the larger research effort to do adversarial learning with discrete variables. We didn’t write this paper to show that we had “solved” NLP or to even cover the fundamental flaws associated with WGAN-GP with continuous relaxation. We wanted to offer a solution to a fundamentally hard and unsolved problem, and we were excited about the connections to other works we found along the way and felt it was worth sharing.\n\nAs far as the organization: perhaps you felt the paper was a little TL;DR (nearly 11 pages)? If so, we apologize about the length: we could have done the usual tricks to bring the paper length down: half the size of the figures, remove the headers such as “definition” removed the theorem/proof structure. But the choices we made we felt would improve the clarity of the piece (and indeed one of the reviewers applauds the quality, and both of the other reviewers clearly understand the paper well and had no issues with the quality). But it is easy enough to misjudge what an average reader might find is clear. What would make the paper easier to read, in your opinion? Should we move some of the definitions and proofs to the appendix?" ]
[ -1, 7, -1, -1, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "r12KCYBVz", "iclr_2018_rkTS8lZAb", "H1lP_k5ez", "rJegU3tef", "iclr_2018_rkTS8lZAb", "iclr_2018_rkTS8lZAb", "iclr_2018_rkTS8lZAb", "H1FAQ9Flz", "rJegU3tef", "H1lP_k5ez", "HypaYt_Zf", "H1FAQ9Flz", "rJegU3tef", "iclr_2018_rkTS8lZAb", "H1lP_k5ez" ]
iclr_2018_Hk0wHx-RW
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
accepted-poster-papers
Observing that in contrast to classical information bottleneck, the deep variational information bottleneck (DVIB) model is not invariant to monotonic transformations of input and output marginals, the authors show how to incorporate this invariance along with sparsity in DVIB using the copula transform. The revised version of the paper addressed some of the reviewer concerns about clarity as well as the strength of the experimental section, but the authors are encouraged to improve these aspects of the paper further.
train
[ "ByJ3JKkBG", "S1ZdyY1SG", "H13MWgq4M", "rJYSUovgG", "ByQLos_xM", "ByR8Gr5gf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x.\n\nWe verified this for beta transformation in Experiment 1. We observe that the impact of our method is most pronounced when different variables are transformed in possibly different ways (i.e. when they are subject to diverse transformations with various scales).\n\n\n\n\nA direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.\n\nWe mention the implicit copula transformation learned by neural networks in Section 3.3 for completeness as an alternative to the default explicit approach, but we would like to point out that the explicit approach is a preferred choice in practice.\nIn the same section (in the revised paper), we elaborate on the few situations where the implicit copula might be advantageous, such as when there is a necessity of implicit tie breaking between data points. We also explain why the explicit copula is usually more advantageous. One circumvents the problem of devising an architecture capable of learning the marginal cdf, thus simplifying the neural network. Perhaps more importantly, the implicit approach does not scale well with dimensionality of the data, since the networks used for approximating the marginal cdf have to be trained independently for every dimension.\n", "We would like to thank the reviewer for the additional review. We respond to the questions and issues raised in the review below.\n\n\n\n\nWhile Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the “implicit form” are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details?\n\nThis seems to be a misunderstanding. The $f_\\beta$ transformation stands for an abstract, general transformation of the input data. In our model, it is implemented by the copula transformation (explicit or implicit) and the encoder network. $f_\\beta$ thus does not emulate the explicit transformation, and is not confined to representing the (implicit or explicit) copula transformation. The copula transformation, not necessarily implemented as a neural network, is a part of $f_\\beta$.\nThe purpose of introducing $f_\\beta$ is to explain the difference of the model with and without the extra copula transformation and why applying the transformation translates to sparsity not observed in the “regular” sparse Gaussian information bottleneck.\nWe elaborate on the difference between the implicit and explicit copula in the answer to the last question.\n\n\n\n\nThere are also many missing details in the experimental section: how were the number of “active” components selected ?\n\nThe only parameter of our model is $\\lambda$. As described in Section 3.4, by continuously increasing $\\lambda$, one decreases sparsity defined by the number of active neurons. 
Thus, one can adjust the number of active components by continuously varying $\\lambda$ (curves in Figures 2, 4, 6 with increasing numbers of active components correspond to increasing $\\lambda$).\nThe number of active components is chosen differently in different experiments. In Experiments 1, 6, 7 $\\lambda$, and thus the number of active components, is varied over a large interval. In Experiment 3, $\\lambda$ is also varied, and subsequently chosen so that the dimensionality of latent spaces in the two compared models is the same.\n\n\n\n\nWhich versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly\n\nAs we mentioned in the rebuttal, throughout the paper as well as for the experiments, the explicit copula transformation defined in Eq. (6) is used. The explicit transformation is also the default choice of the form of the copula transformation.\n\n\n\n\nI would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.\n\nPCA whitening, in contrast to the copula transformation, does not disentangle marginal distributions from the dependence structure captured by the copula. It also does not restore the invariance properties of the model we identified as motivation. It does not lead to a boost in information curves such as in Figure 2; we can add the appropriate experiment to our manuscript.\n\n\n\n\nI do not think their [experiments’] scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration.\n[…]\nthe representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper.\n\nWe would like to stress that imposing sparsity on the latent representation is an important aspect of our model. It is in general difficult to quantify latent representations. Our model yields significantly sparser representations even when the information curves are closer.\nOur model shows its full strength when a multiview analysis is involved, especially with data where multiple variables have different and rescaled distributions. Datasets constructed such that marginals (or simply labels, such as in the MNIST dataset) are uniform distributed do not pose enough challenge, since the output space is too easy to reconstruct even without the copula transformation.\nAs for dataset size, we would like to point out that finding meaningful sparse representations is more challenging for smaller datasets with higher dimensionality, therefore we think that the datasets we used do show the most relevant properties of the copula DIB.\n", "This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. 
This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset.\n\n(significance) This is a promising idea. This paper builds on the information theoretic perspective of representation learning, and makes progress towards characterizing what makes for a good representation. Invariance to transforms of the marginal distributions is clearly a useful property, and the proposed method seems effective in this regard.\nUnfortunately, I do not believe the paper is ready for publication as it stands, as it suffers from lack of clarity and the experimentation is limited in scope.\n\n(clarity) While Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the “implicit form” are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? There are also many missing details in the experimental section: how were the number of “active” components selected ? Which versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly. I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.\n\n(quality) The experiments are interesting and seem well executed. Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. Similarly, the representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. 
A direct comparison of the explicit and implicit forms of the algorithms would also make for a stronger paper in my opinion.\n\nPros:\n* Theoretically well motivated\n* Promising results on synthetic task\n* Potential for impact\nCons:\n* Paper suffers from lack of clarity (method and experimental section)\n* Lack of ablative / introspective experiments\n* Weak empirical results (small or toy datasets only).", "This paper presents a sparse latent representation learning algorithm based on an information theoretic objective formulated through meta-Gaussian information bottleneck and solved via variational auto-encoder stochastic optimization. The authors suggest Gaussianifying the data using a copula transformation and further adopt a diagonal determinant approximation with justification of minimizing an upper bound of mutual information. Experiments include both artificial data and real data. \n\nThe paper is unclear in some places and the writing gets confusing. For example, it is unclear whether and when explicit or implicit transforms are used for x and y in the experiments, and the discussion at the end of Section 3.3 also sounds confusing. It would be more helpful if the author can make those points more clear and offer some guidance about the choices between explicit and implicit transform in practice. Moreover, what is the form of f_beta and how is beta optimized? In the first equation on page 5, is tilde y involved? How to choose lambda?\n\nIf MI is invariant to monotone transformations and information curves are determined by MIs, why “transformations basically makes information curve arbitrary”? Can you elaborate? \n\nAlthough the experimental results demonstrate that the proposed approach with copula transformation yields higher information curves, more compact representation and better reconstruction quality, it would be more significant if the author can show whether these would necessarily lead to any improvements on other goals such as classification accuracy or robustness under adversarial attacks. \n\nMinor comments: \n\n- What is the meaning of the dashed lines and the solid lines respectively in Figure 1? \n- Section 3.3 at the bottom of page 4: what is tilde t_j? and x in the second term? Is there a typo? \n- typo, find the “most orthogonal” representation if the inputs -> of the inputs \n\nOverall, the main idea of this paper is interesting and well motivated, but the technical contribution seems incremental. The paper suffers from lack of clarity at several places and the experimental results are convincing but not strong enough. \n\n***************\nUpdates: \n***************\nThe authors have clarified some questions that I had and further demonstrated the benefits of copula transform with new experiments in the revised paper. The new results are quite informative and addressed some of the concerns raised by me and other reviewers. I have updated my score to 6 accordingly. \n\n\n", "[====================================REVISION ======================================================]\nOk so the paper underwent a major remodel, which significantly improved the clarity. I do agree now on Figure 5, which tips the scale for me to a weak accept. \n[====================================END OF REVISION ================================================]\n\nThis paper explores the problems of existing Deep variational bottleneck approaches for compact representation learning. 
Namely, the authors adjust the deep variational bottleneck to conform to invariance properties (by making the latent variable space depend only on the copula) - they name this model a copula extension to dvib. They then go on to explore the sparsity of the latent space.\n\nMy main issues with this paper are the experiments: The proposed approach is tested only on 2 datasets (one synthetic, one real but tiny - 2K instances) and some of the plots (like Figure 5) are not convincing to me. On top of that, it is not clear how the two methods compare computationally and how the introduction of the copula affects the convergence (if it does).\n\nMinor comments\nPage 1: forcing an compact -> forcing a compact\n“and and” =>and\nSection 2: mention that I is mutual information, it is not obvious for everyone\n\nFigure 3: circles/triangles are too small, hard to see \nFigure 5: not really convincing. B does not appear much more structured than a, to me it looks like a simple transformation of a. \n", "The paper proposed a copula-based modification to an existing deep variational information bottleneck model, such that the marginals of the variables of interest (x, y) are decoupled from the DVIB latent variable model, allowing the latent space to be more compact when compared to the non-modified version. The experiments verified the relative compactness of the latent space, and also qualitatively show that the learned latent features are more 'disentangled'. However, I wonder how sensitive the learned latent features are to the hyper-parameters and optimizations?\n\nQuality: Ok. The claims appear to be sufficiently verified in the experiments. However, it would have been great to have an experiment that actually makes use of the learned features to make predictions. I struggle a little to see the relevance of the proposed method without a good motivating example.\n\nClarity: Below average. Section 3 is a little hard to understand. Is q(t|x) in Fig 1 a typo? How about t_j in equation (5)? There is a reference that appeared twice in the bibliography (1st and 2nd).\n\nOriginality and Significance: Average. The paper (if I understood it correctly) appears to be mainly about borrowing the key ideas from Rey et al. 2014 and applying it to the existing DVIB model." ]
[ -1, -1, 5, 6, 6, 6 ]
[ -1, -1, 4, 3, 3, 1 ]
[ "H13MWgq4M", "H13MWgq4M", "iclr_2018_Hk0wHx-RW", "iclr_2018_Hk0wHx-RW", "iclr_2018_Hk0wHx-RW", "iclr_2018_Hk0wHx-RW" ]
iclr_2018_S1cZsf-RW
WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling
To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and autoencoding variational Bayes. The generative network of WHAI has a hierarchy of gamma distributions, while the inference network of WHAI is a Weibull upward-downward variational autoencoder, which integrates a deterministic-upward deep neural network, and a stochastic-downward deep generative model based on a hierarchy of Weibull distributions. The Weibull distribution can be used to well approximate a gamma distribution with an analytic Kullback-Leibler divergence, and has a simple reparameterization via the uniform noise, which help efficiently compute the gradients of the evidence lower bound with respect to the parameters of the inference network. The effectiveness and efficiency of WHAI are illustrated with experiments on big corpora.
accepted-poster-papers
The paper proposes a new approach for scalable training of deep topic models based on amortized inference for the local parameters and stochastic-gradient MCMC for the global ones. The key aspect of the method involves using Weibull distributions (instead of Gammas) to model the variational posteriors over the local parameters, enabling the use of the reparameterization trick. The resulting methods perform slightly worse than the Gibbs-sampling-based approaches but are much faster at test time. Amortized inference has already been applied to topic models, but the use of Weibull posteriors proposed here appears novel. However, there seems to be no clear advantage to using stochastic-gradient MCMC instead of vanilla SGD to infer the global parameters, so the value of this aspect of WHAI is unclear.
test
[ "B1gG5N5ez", "S1HoJBilG", "SyESlFoef", "BJoIAaEGz", "HJR836NGf", "B1kjsaEMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors develop a hybrid amortized variational inference MCMC inference \nframework for deep latent Dirichlet allocation. Their model consists of a stack of\n gamma factorization layers with a Poisson layer at the bottom. They amortize \ninference at the observation level using a Weibull approximation. The structure \nof the inference network mimics the MCMC sampler for this model. Finally they \nuse MCMC to infer the parameters shared across data. A couple of questions:\n\n1) How effective are the MCMC steps at mixing? It looks like this approach helps a \nbit with local optima?\n\n2) The gamma distribution can be reparameterized via its rejection sampler \n\n@InProceedings{pmlr-v54-naesseth17a,\n title = \t {{Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms}},\n author = \t {Christian Naesseth and Francisco Ruiz and Scott Linderman and David Blei},\n booktitle = \t {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},\n pages = \t {489--498},\n year = \t {2017}\n}\n\nI think some of the motivation for the Weibull is weakened by this work. Maybe a \ncomparison is in order?\n\n3) Analytic KL divergence can be good or bad. It depends on the correlation between \nthe gradients of the stochastic KL divergence and the stochastic log-likelihood\n\n4) One of the original motivations for DLDA was that the augmentation scheme \nremoved the need for most non-conjugate inference. However, this approach doesn't \nuse that directly. Thus, it seems more similar to inference procedure in deep exponential \nfamilies. Was the structure of the inference network proposed here crucial?\n\n5) How much like a Weibull do you expect the posterior to be? This seems unclear.", "The paper presents a deep Poisson model where the last layer is the vector of word counts generated by a vector Poisson. This is parameterized by a matrix vector product, and the vector in this parameterizeation is itself generated by a vector Gamma with a matrix-vector parameterization. From there the vectors are all Gammas with matrix-vector parameterizations in a typical deep setup.\n\nWhile the model is reasonable, the purpose was not clear to me. If only the last layer generates a document, then what use is the deep structure? For example, learning hierarchical topics as in Figure 4 doesn't seem so useful here since only the last layer matters. Also, since no input is being mapped to an output, what does going deeper mean? It doesn't look like any linear mapping is being learned from the input to output spaces, so ultimately the document itself is coming from a simple linear Poisson model just like LDA and other non-deep methods.\n\nThe experiments are otherwise thorough and convincing that quantitative performance is improved over previous attempts at the problem.", "The authors propose a hybrid Bayesian inference approach for deep topic models that integrates stochastic gradient MCMC for global parameters and Weibull-based multilayer variational autoencoders (VAEs) for local parameters. The decoding arm of the VAE consists of deep latent Dirichlet allocation, and an upward-downward structure for the encoder. Gamma distributions are approximated as Weibull distributions since the Kullback-Leibler divergence is known and samples can be efficiently drawn from a transformation of samples from a uniform distribution. \n\nThe results in Table 1 are concerning for several reasons, i) the proposed approach underperfroms DLDA-Gibbs and DLDA-TLASGR. 
ii) The authors point to the scalability of the mini-batch-based algorithms, however, although more expensive, DLDA-Gibbs, is not prohibitive given results for Wikipedia are provided. iii) The proposed approach is certainly faster at test time, however, it is not clear to me in which settings such speed (compared to Gibbs) would be needed, provided the unsupervised nature of the task at hand. iv) It is not clear to me why there is no test-time difference between WAI and WHAI, considering that in the latter, global parameters are sampled via stochastic-gradient MCMC. One possible explanation being that during test time, the approach does not use samples from W but rather a summary of them, say posterior means, in which case, it defeats the purpose of sampling from global parameters, which may explain why WAI and WHAI perform about the same in the 3 datasets considered.\n\n- \\Phi is in a subset of R_+, in fact, columns of \\Phi are in the P_0-dimensional simplex.\n- \\Phi should have K_1 columns not K.\n- The first paragraph in Page 5 is very confusing because h is introduced before explicitly connecting it to k and \\lambda. Also, if k = \\lambda, why introduce different notations?", "We thank Reviewer 3 for his/her feedback. We have made revisions accordingly, with the main changes highlighted in blue. Below please find our detailed response. \n\nQ1: How effective are the MCMC steps at mixing? It looks like this approach helps a bit with local optima? \n\nA: The MCMC steps of DLDA-WHAI are quite effective, as demonstrated in Fig. 3 by its clearly faster convergence in comparison to DLDA-WAI, which uses SGD. We also think that WHAI helps escape local optima. In DLDA-WAI, we use the same method with Srivastava & Sutton (2017) to realize the simplex constraint on \\Phi^{(l)}, which needs a well-tuned regularization parameter on \\Phi^{(l)} in order to achieve a good performance and meaningful topics, especially for a deep model with two or more hidden layers. Thus, the MCMC steps of WHAI also help eliminate sensitive tuning parameters. \n\nQ2: The gamma distribution can be reparameterized via its rejection sampler called rejection sampling variational inference (RSVI) proposed in Naesseth et al. (2017). I think some of the motivation for the Weibull is weakened by this work. Maybe a comparison is in order? \n\nA: Thank you very much for the suggestion. Indeed, RSVI is an excellent method that lets us apply reparameterization to a much wider class of variational distribution, including approximately reparameterizing the gamma distribution. We have now included it into the revised paper, with the corresponding algorithm developed under RSVI referred to as gamma hybrid autoencoding inference (GHAI). \n\nAlthough RSVI is an attractive technique that is very general, as shown in the updated Table 2, DLDA-GHAI clearly underperforms DLDA-WHAI, suggesting that the potential benefits of using the gamma over the Weibull are overshadowed by the approximations made in RSVI, where the accepted noise for reparameterization are correlated with the gamma distribution parameters and some additional uniform random numbers are needed. Please see our discussion in Section 2.4 and added results on gamma hybrid autoencoding inference (GHAI) in Section 3.1 for more details.\n\nQ3: Analytic KL divergence can be good or bad. It depends on the correlation between the gradients of the stochastic KL divergence and the stochastic log-likelihood. \n\nA: We agree with your comment. 
Our results suggest that using Weibull provides both analytic KL and a good guidance of the gradient with respect to the ELBO for our deep model. \n\nQ4: One of the original motivations for DLDA was that the augmentation scheme removed the need for most non-conjugate inference. However, this approach doesn’t use that directly. Thus, it seems more similar to inference procedure in deep exponential families. Was the structure of the inference network proposed here crucial? \n\nA: The deterministic-upward and stochastic-downward structure of the inference network is crucial to obtain good and interpretable results for a deep model. For example, as shown in Fig. 5, if we remove all downward links of the inference network, the inferred latent factors become much less meaningful, and as shown in Table 1, GHAI-Independent and WHAI-Independent fail to improve as the model goes deeper.\n\nThis particular structure is inspired by the upward-downward Gibbs sampler of DLDA developed with data augmentation. Although DLDA can be considered as a special case of deep exponential families (DEFs), it is distinct from the other existing DEFs in having an upward-downward Gibbs sampling. For a DEF that does not have an upward-downward Gibbs sampler, we find it difficult to come up with an appropriately structured inference network that stochastically connects different hidden layers. That is probably why existing DEFs except for DLDA almost always use mean-filed variational inference, without explicit information propagation between layers. \n\nWe also note this particular structure may also be generalized to develop a deterministic-upward and stochastic-downward inference network for other models such as sigmoid belief network.\n\nQ5: How much like a Weibull do you expect the posterior to be? This seems unclear. \n\nA: We choose the Weibull distribution to approximate the gamma distributed conditional posterior shown in Equation 5 in the paper. With DLDA-Gibbs or DLDA-TLASGR, in general, the shape parameters in Equation 5 are found to be neither too close to zero nor too large, thus, as suggested by Fig. 1, we expect the Weibull to well approximate the gamma distributed conditional posteriors. ", "We thank Reviewer 2 for his/her comments and questions. We have made revisions accordingly and highlighted our main changes in blue.\n\nIf we only use a single hidden layer, then \\{\\theta_{nk}\\}_{k}, the weights of the topics in document n, follow independent gamma distributions in the prior. By going deep, we are able to construct a much more expressive hierarchical prior distribution, whose marginal is designed to capture the correlations between different topics at multiple hidden layers. From the viewpoint of deep learning, our multilayer deep generative model provides a distributed representation of the data, with a higher layer capturing an increasingly more general concept. Empirically, our experiments consistently show that making a model deeper leads to improved performance. \nIn Figure 4, without the deep structure, the inferred first-layer topics will have worse qualities, and their relationships will become difficult to understand. \n\nOur deep model is a deep generative model that has multiple stochastic layers. 
It is unsupervised trained to learn how to transform the gamma random noises injected at multiple different hidden layers to generate the correlated topic weights at the first layer, which are further multiplied with the learned topics as the Poisson rates to generate high-dimensional count vectors under the Poisson distribution. Thus, even though the Poisson layer is the same between a shallow model and a deep one, the latter has a much more sophisticated mechanism to generate (correlated) topic weights at the first layer, and infers a network to understand the complex relationships between different topics at multiple different levels. ", "We thank Reviewer 1 for his/her comments and suggestions. We have revised the paper accordingly, with the revised/added texts highlighted in blue. Below please find our response to Reviewer 1’s concerns on the results in Table 1. \n\nQ1: The proposed approach underperforms DLDA-Gibbs and DLDA-TLASGR.\n\nA: Measured by per-heldout-word perpelexity, DLDA-WHAI only slightly underperforms DLDA-Gibbs and DLDA-TLASGR. However, DLDA-WHAI is substantially faster than both DLDA-Gibbs and DLDA-TLASGR for sampling a multi-layer latent representation of a testing document given a sample of the global parameters. This is because to sample a latent representation for a document from the conditional posterior, while both DLDA-Gibbs and DLDA-TLASGR often require quite a few Gibbs sampling iterations, DLDA-WHAI requires only a single deterministic-upward projection, followed by a single stochastic-downward random draw. \n\nQ2: The authors point to the scalability of the mini-batch-based algorithms, however, although more expensive, DLDA-Gibbs, is not prohibitive given results for Wikipedia are provided.\n\nA: DLDA-Gibbs, which needs to process all documents in each iteration, requires a memory that is large enough to store all data and local parameters. For the 10-million-document Wiki dataset, a PC with 32G memory actually failed to run, and we had to use a workstation with 64G memory to obtain the results for DLDA-Gibbs. Note that 64G memory will ultimately become insufficient as one further increases the data size. Even if one can have a machine that may forever increase its memory to satisfy the need of DLDA-Gibbs, Fig. 3 shows that DLDA-Gibbs needs much longer time before achieving satisfying results, while a mini-batch based algorithm has already made substantial progress before DLDA-Gibbs even finishes a single iteration. \n\nQ3: The proposed approach is certainly faster at test time, however, it is not clear to me in which settings such speed (compared to Gibbs) would be needed, provided the unsupervised nature of the task at hand.\n\nA: The purpose of our probabilistic generative model is to extract the topics and learn the multilayer latent representation under these topics in an unsupervised manner. Being able to extract the latent representation of a test document with a low computational cost makes it attractive to be used in a wide variety of real applications. For example, to process a large number of incoming documents in real time, to process documents in mobile devices with low power consumptions, to quickly identify the key topics of a news article/blog post and recommend it to relevant users, and to rapidly extract the topic-proportion vector of a document and use it to retrieve related documents. 
Please see Srivastava & Sutton (2017) for additional discussions on the importance of fast inference for topic models.\n\nQ4: It is not clear to me why there is no test-time difference between WAI and WHAI, considering that in the latter, global parameters are sampled via stochastic-gradient MCMC. One possible explanation being that during test time, the approach does not use samples from W but rather a summary of them, say posterior means, in which case, it defeats the purpose of sampling from global parameters, which may explain why WAI and WHAI perform about the same in the 3 datasets considered.\n\nA: As shown in Fig. 3, WHAI converges faster than WAI, although the final perplexity obtained by averaging over collected samples are similar. While they share the same inference for the neural-network parameters of the auto-encoder, WHAI uses TLASGR-MCMC while WAI uses SGD to update \\Phi^{(l)}. We have added Mandt, Hoffman & Blei (2017) to support the practice of using SGD to obtain the approximate posterior samples of W. At the test time, both WHAI and WAI use the same number of samples of the global parameters, and use the auto-encoder of the same structure to generate the latent representation of a test document under each global-parameter sample, which is why WHAI and WAI have the same test time. \n\nNewly added reference: S. Mandt, M. D. Hoffman, and D. M. Blei. Stochastic gradient descent as approximate Bayesian inference. arXiv:1704.04289, to appear in Journal of Machine Learning Research, 2017. \n\nOur answer to the other comments: we have now clearly specified the simplex constraint on the columns of \\Phi and clearly defined the neural networks for k, \\lambda, and h." ]
[ 6, 6, 5, -1, -1, -1 ]
[ 4, 2, 4, -1, -1, -1 ]
[ "iclr_2018_S1cZsf-RW", "iclr_2018_S1cZsf-RW", "iclr_2018_S1cZsf-RW", "B1gG5N5ez", "S1HoJBilG", "SyESlFoef" ]
iclr_2018_H1MczcgR-
Understanding Short-Horizon Bias in Stochastic Meta-Optimization
Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes.
accepted-poster-papers
An interesting analysis of the issue of short-horizon bias in meta-optimization that highlights a real problem in a number of existing setups. I concur with Reviewer 3 that it would be nice to provide a constructive solution to this issue: if something like K-FAC does indeed work well, it would be a great addition to a final version of this paper. Nonetheless, I think the paper would be an interesting addition to ICLR and recommend acceptance.
train
[ "BkZRhnbxz", "Hkhtvm5eM", "B1EVroyWG", "Hkt9Hx9mG", "rJn-Vl9Xf", "H1TfSx5Xz", "SJBhNe9Xz", "By9IIjkZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper discusses the problems of meta optimization with small look-ahead: do small runs bias the results of tuning? The result is yes and the authors show how differently the tuning can be compared to tuning the full run. The Greedy schedules are far inferior to hand-tuned schedules as they focus on optimizing the large eigenvalues while the small eigenvalues can not be \"seen\" with a small lookahead. The authors show that this effect is caused by the noise in the obective function.\n\npro:\n- Thorough discussion of the issue with theoretical understanding on small benchmark functions as well as theoretical work\n- Easy to read and follow\n\ncons:\n-Small issues in presentation: \n* Figure 2 \"optimal learning rate\" -> \"optimal greedy learning rate\", also reference to Theorem 2 for increased clarity.\n* The optimized learning rate in 2.3 is not described. This reduces reproducibility.\n* Figure 4 misses the red trajectories, also it would be easier to have colors on the same (log?)-scale. \n The text unfortunately does not explain why the loss function looks so vastly different\n with different look-ahead. I would assume from the description that the colors are based\n on the final loss values obtaine dby choosing a fixed pair of decay exponent and effective LR. \n\nTypos and notation:\npage 7 last paragraph: \"We train the all\" -> We train all\nnotation page 5: i find \\nabla_{\\theta_i} confusing when \\theta_i is a scalar, i would propose \\frac{\\partial}{\\partial \\theta_i}\npage 2: \"But this would come at the expense of long-term optimization process\": at this point of the paper it is not clear how or why this should happen. Maybe add a sentence regarding the large/Small eigenvalues?", "This paper proposes a simple problem to demonstrate the short-horizon bias of the learning rate meta-optimization.\n\n- The idealized case of quadratic function the analytical solution offers a good way to understand how T-step look ahead can benefit the meta-algorithm.\n- The second part of the paper seems to be a bit disconnected to the quadratic function analysis. It would be helpful to understand if there is gap between gradient based meta-optimization and the best effort(given by the analytical solution)\n- Unfortunately, no guideline or solution is offered in the paper.\n\nIn summary, the idealized model gives a good demonstration of the problem itself. I think it might be of interest to some audiences in ICLR.", "This paper studies the issue of truncated backpropagation for meta-optimization. Backpropagation through an optimization process requires unrolling the optimization, which due to computational and memory constraints, is typically restricted or truncated to a smaller number of unrolled steps than we would like.\n\nThis paper highlights this problem as a fundamental issue limiting meta-optimization approaches. The authors perform a number of experiments on a toy problem (stochastic quadratics) which is amenable to some theoretical analysis as well as a small fully connected network trained on MNIST. \n\n(side note: I was assigned this paper quite late in the review process, and have not carefully gone through the derivations--specifically Theorems 1 and 2).\n\nThe paper is generally clear and well written.\n\nMajor comments\n-------------------------\nI was a bit confused why 1000 SGD+mom steps pre-training steps were needed. As far as I can tell, pre-training is not typically done in the other meta-optimization literature? 
The authors suggest this is needed because \"the dynamics of training are different at the very start compared to later stages\", which is a bit vague. Perhaps the authors can expand upon this point?\n\nThe conclusion suggests that the difference in greedy vs. fully optimized schedule is due to the curvature (poor scaling) of the objective--but Fig 2. and earlier discussion talked about the noise in the objective as introducing the bias (e.g. from earlier in the paper, \"The noise in the problem adds uncertainty to the objective, resulting in failures of greedy schedule\"). Which is the real issue, noise or curvature? Would running the problem on quadratics with different condition numbers be insightful?\n\nMinor comments\n-------------------------\nThe stochastic gradient equation in Sec 2.2.2 is missing a subscript: \"h_i\" instead of \"h\"\n\nIt would be nice to include the loss curve for a fixed learning rate and momentum for the noisy quadratic in Figure 2, just to get a sense of how that compares with the greedy and optimized curves.\n\nIt looks like there was an upper bound constraint placed on the optimized learning rate in Figure 2--is that correct? I couldn't find a mention of the constraint in the paper. (the optimized learning rate remains at 0.2 for the first ~60 steps)?\n\nFigure 2 (and elsewhere): I would change 'optimal' to 'optimized' to distinguish it from an optimal curve that might result from an analytic derivation. 'Optimized' makes it more clear that the curve was obtained using an optimization process.\n\nFigure 2: can you change the line style or thickness so that we can see both the red and blue curves for the deterministic case? I assume the red curve is hiding beneath the blue one--but it would be good to see this explicitly.\n\nFigure 4 is fantastic--it succinctly and clearly demonstrates the problem of truncated unrolls. I would add a note in the caption to make it clear that the SMD trajectories are the red curves, e.g.: \"SMD trajectories (red) during meta-optimization of initial effective ...\". I would also change the caption to use \"meta-training losses\" instead of \"training losses\" (I believe those numbers are for the meta-loss, correct?). Finally, I would add a colorbar to indicate numerical values for the different grayscale values.\n\nSome recent references that warrant a mention in the text:\n- both of these learn optimizers using longer numbers of unrolled steps:\nLearning gradient descent: better generalization and longer horizons, Lv et al, ICML 2017\nLearned optimizers that scale and generalize, Wichrowska et al, ICML 2017\n- another application of unrolled optimization:\nUnrolled generative adversarial networks, Metz et al, ICLR 2017\n\nIn the text discussing Figure 4 (middle of pg. 8) , \"which is obtained by using...\" should be \"which are obtained by using...\"\n\nIn the conclusion, \"optimal for deterministic objective\" should be \"deterministic objectives\"", "Q1: The optimized learning rate in 2.3 is not described. This reduces reproducibility.\nSorry for such confusion. We use the losses formed by the forward dynamics given in Theorem 1 as training objective, and use Adam to find the learning rate and momentum at each time steps that minimize that training objective. The meta training learning rate is 0.003, and it is trained for 500 meta training steps. We added this description in our revised version.\n\nQ2: Figure 4 misses the red trajectories, also it would be easier to have colors on the same (log?)-scale. 
\nMeta-descent on 20k is time-consuming to run. Since the visualized hyper-surface is smooth, we expect that meta-descent will behave as expected and converge to the local minimum. We will add the red trajectory in the next version of the paper.\n\nQ3: Why does the loss function look so vastly different with different look-ahead? \nA larger look-ahead means more optimization steps, and the loss goes lower the longer one trains.\n\nQ4. page 2: \"But this would come at the expense of long-term optimization process\": at this point of the paper it is not clear how or why this should happen. Maybe add a sentence regarding the large/small eigenvalues? \nThanks for your suggestion. We modified the entire paragraph as you and reviewer 3 suggested. We believe the current version is clearer.\n", "Q1: Why were 1000 SGD+momentum pre-training steps needed?\nWe want to choose a setting in which our observation is less sensitive to which part of training is considered. If we always start looking ahead at the zeroth step, then there is a higher chance that the optimal hyperparameter is only fitted to the beginning; whereas if we start after some pre-training steps, e.g. 1000, then the optimal hyperparameter is more likely to generalize to, say, 500 or 5000 steps.\n\nQ2: Which is the real issue, noise or curvature?\nThe problem will arise if you have both noise in the objective and different curvature directions. We showed that in a deterministic problem, the greedy optimal learning rate and momentum are optimal, as the procedure is essentially doing conjugate gradient, regardless of how many different curvature directions you have. We also showed in Theorem 3 that the greedy learning rate is optimal if the curvature is spherical. On the other hand, if there’s noise in the objective and there are many different curvature directions, the problem will arise. This is because the noise in the objective prevents one from completely getting rid of the loss on a particular direction. Hence, one should always first remove the loss on low curvature directions and then move onto high curvature directions. But the short-horizon objective encourages the opposite, because high curvature directions give the most rapid decrease in loss. Therefore, both noise in the objective and different curvature directions cause the problem.\n\nQ3: Figure 2: 1. Show fixed learning rate. 2. Thickness of the red curve. 3. Upper bound.\nFigure 2 is edited as the reviewer suggests. Also, the reviewer is correct that we upper bounded the learning rate to avoid the loss on any curvature direction becoming larger than its initial value, so as to preserve the quadratic assumption. We added the description in the revised version.\n\nQ4: Figure 4: 1. Add a color bar to indicate numerical values for the different grayscale values.\nThanks for the suggestion. We will add it in the next version of our paper.\n\nQ5: Citations:\nWe added the citations the reviewer mentioned. ", "We want to thank the reviewer again for raising such a great idea to show the problem in a more accessible way. We edited the figure as the reviewer suggested.", "Q1: The second part of the paper seems to be a bit disconnected from the quadratic function analysis. It would be helpful to understand if there is a gap between gradient-based meta-optimization and the best effort (given by the analytical solution).\nAns: The second part of the paper experimentally verified the theory in the first part while generalizing to general neural networks with non-convex problems. 
It shows that quadratic analysis is a valid model for hyper-parameter optimization. We will work on the flow of the paper with more connection between the two parts of the paper.\n\nQ2: Unfortunately, no guideline or solution is offered in the paper.\nAns: We agree with the reviewer that in the current version of the paper there’s no solution provided to the problem. We only offered one potential solution to the problem, following theorem 3. Theorem 3 states that the greedy solution is optimal when the curvature is spherical and the noise is codiagonalizable with the curvature. This implies stochastic meta descent could work well with a good enough natural gradient method, such as Kronecker-factored approximate curvature (K-FAC). We will show more experimental results on this subject in a later version. \n", "I think you could make a figure that much more clearly demonstrates the issue to replace or add to the current Figure 1.\n\nCompute the meta-loss for learning the learning rate for some small problem (e.g. stochastic quadratics). This meta-loss is a 1D function over the learning rate. For a small number of unrolled steps, this function should have minima at low values of the learning rate. You can plot this meta-loss for different numbers of unrolls on the same graph, which should show that the minima of the meta-loss shifts to higher learning rates as you unroll for more steps. This is related to Figure 4, but I think would be a nice way to introduce the problem in an easily digestible picture." ]
[ 7, 6, 8, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1MczcgR-", "iclr_2018_H1MczcgR-", "iclr_2018_H1MczcgR-", "BkZRhnbxz", "B1EVroyWG", "By9IIjkZM", "Hkhtvm5eM", "B1EVroyWG" ]
iclr_2018_rkpoTaxA-
Self-ensembling for visual domain adaptation
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et. al 2017) of temporal ensembling (Laine et al. 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
accepted-poster-papers
An interesting application of self-ensembling/temporal ensembling for visual domain adaptation that achieves state of the art on the visual domain adaptation challenge. Reviewers noted that the approach is quite engineering-heavy, but I am not sure it's really much worse than making a pixel-to-pixel approach work well for domain adaptation. I hope the authors follow through with their promise to add experiments to the final version (notably the minimal augmentation experiments to show just how much this domain adaptation technique is tailored towards imagenet-like things). As it stands, this paper would be a good contribution to ICLR as it shows an efficient and interesting way to solve a particular visual domain adaptation problem.
train
[ "B1Ih1S54G", "S1HXzycxf", "HJ7P8yYNM", "r1uziOjxf", "S1OIJnigM", "Syu2DDUmz", "ryhrgwLXz", "HJqdkDLQG", "B1vyYcxyG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thanks for pointing this out, as its' most likely correct; problems will likely arise in situations where there is severe class imbalance in the target dataset.\n\nIf the editors permit it, we may need to add this caveat to our paper.", "This paper presents a domain adaptation algorithm based on the self-ensembling method proposed by [Tarvainen & Valpola, 2017]. The main idea is to enforce the agreement between the predictions of the teacher and the student classifiers on the target domain samples while training the student to perform well on the source domain. The teacher network is simply an exponential moving average of different versions of the student network over time. \n\nPros:\n+ The paper is well-written and easy to read\n+ The proposed method is a natural extension of the mean teacher semi-supervised learning model by [Tarvainen & Valpola, 2017]\n+ The model achieves state-of-the-art results on a range of visual domain adaptation benchmarks (including top performance in the VisDA17 challenge)\n\nCons:\n- The model is tailored to the image domain as it makes heavy use of the data augmentation. That restricts its applicability quite significantly. I’m also very interested to know how the proposed method works when no augmentation is employed (for fair comparison with some of the entries in Table 1).\n- I’m not particularly fond of the engineering tricks like confidence thresholding and the class balance loss. They seem to be essential for good performance and thus, in my opinion, reduce the value of the main idea.\n- Related to the previous point, the final VisDA17 model seems to be engineered too heavily to work well on a particular dataset. I’m not sure if it provides many interesting insights for the scientific community at large.\n\nIn my opinion, it’s a borderline paper. While the best reported quantitative results are quite good, it seems that achieving those requires a significant engineering effort beyond just applying the self-ensembling idea. \n\nNotes:\n* The paper somewhat breaks the anonymity of the authors by mentioning the “winning entry in the VISDA-2017”. Maybe it’s not a big issue but in my opinion it’s better to remove references to the competition entry.\n* Page 2, 2.1, line 2, typo: “stanrdard” -> “standard”\n\nPost-rebuttal revision:\nAfter reading the authors' response to my review, I decided to increase the score by 2 points. I appreciate the improvements that were made to the paper but still feel that this work a bit too engineering-heavy, and the title does not fully reflect what's going on in the full pipeline.", "Thank you for you comments! Regarding class balancing loss, I'm wondering if it's safe to force the predictions on the target batch to be similar to the uniform distribution. As you mention in the paper, SVHN is a non-balanced dataset therefore a random batch won't really follow the uniform label distribution. I guess one has to be very careful with the scale of that term.", "The paper addresses the problem of domain adaptation: Say you have a source dataset S of labeled examples and you have a target dataset T of unlabeled examples and you want to label examples from the target dataset. \n\nThe main idea in the paper is to train two parallel networks, a 'teacher network' and a 'student network', where the student network has a loss term that takes into account labeled examples and there is an additional loss term coming from the teacher network that compares the probabilities placed by the two networks on the outputs. 
This is motivated by a similar network introduced in the context of semi-supervised learning by Tarvainen and Valpola (2017). The parameters are then optimized by gradient descent where the weight of the loss-term associated with the unsupervised learning part follows a Gaussian curve (with time). No clear explanation is provided for why this may be a good thing to try. The authors also use other techniques like data augmentation to enhance their algorithms.\n\nThe experimental results in the paper are quite nice. They apply the methodology to various standard vision datasets with noticeable improvements/gains and in one case by including additional tricks manage to better than other methods for VISDA-2017 domain adaptation challenge. In the latter, the challenge is to use computer-generated labeled examples and use this information to label real photographic images. The present paper does substantially better than the competition for this challenge. ", "The paper was very well-written, and mostly clear, making it easy to follow. The originality of the main method was not immediately apparent to me. However, the authors clearly outline the tricks they had to do to achieve good performance on multiple domain adaptation tasks: confidence thresholding, particular data augmentation, and a loss to deal with imbalanced target datasets, all of which seem like good tricks-of-the-trade for future work. The experimentation was extensive and convincing.\n\nPros:\n* Winning entry to the VISDA 2017 visual domain adaptation challenge competition.\n* Extensive experimentation on established toy datasets (USPS<>MNIST, SVHN<>MNIST, SVHN, GTSRB) and other more real-world datasets (including the VISDA one)\n\nCons:\n* Literature review on domain adaptation was lacking. Recent CVPR papers on transforming samples from source to target should be referred to, one of them was by Shrivastava et al., Learning from Simulated and Unsupervised Images through Adversarial Training, and another by Bousmalis et al., Unsupervised Pixel-level Domain Adaptation with GANs. Also you might want to mention Domain Separation Networks which uses gradient reversal (Ganin et al.) and autoencoders (Ghifary et al.). There was no mention of MMD-based methods, on which there are a few papers. The authors might want to mention non-Deep Learning methods also, or that this review relates to neural networks,\n* On p. 4 it wasn't clear to me how the semi-supervised tasks by Tarvainen and Laine were different to domain adaptation. Did you want to say that the data distributions are different? How does this make the task different. Having source and target come in different minibatches is purely an implementation decision.\n* It was unclear to me what footnote a. on p. 6 means. Why would you combine results from Ganin et al. and Ghifary et al. ?\n* To preserve anonymity keep acknowledgements out of blind submissions. (although not a big deal with your acknowledgements)", "Thank you for your review\n\n* We agree that our work is tailored to the image domain. With a view to addressing your concerns, we have run further experiments to quantify the effects of each part of our approach - including data augmentation - for all of the small image benchmarks. We have therefore removed Table 2 as the information that it presented can be more compactly shown in Table 1, alongside everything else. 
We have added further discussion of the effect of our affine augmentation to section 3.3 and demonstrated its effect on both domain adaptation and plain supervised experiments. What we currently have is the same augmentation scheme used by Laine et al. and Tarvainen et al., which consists of translations (all) and horizontal flips (CIFAR/STL only). Experiments with minimal augmentation (Gaussian noise added to the input only, therefore usable outside the image domain) are currently running; we will add them if the experiments complete on time.\n\nFurthermore, we found that our model performs slightly better on the MNIST <-> SVHN experiments when using RGB images rather than greyscale, so we have replaced our greyscale results with RGB ones. This represents a slightly bigger domain jump, so we hope that this increases your confidence in our work.\n\n* We see what you mean concerning engineering tricks. In defence of confidence thresholding, rather than being a new additional trick it replaces a time-dependent ramp-up curve used by Laine et al. in their work. We have made this a little more explicit in section 3.3. As for the class balancing loss, it is similar in purpose and implementation to the entropy maximisation loss used in the IMSAT model of Hu et al. (an unsupervised clustering model that also uses data augmentation). We have mentioned this in section 3.4. We did not cite this paper in our original version as we were unaware of it at the time.\n\n* We have run further VisDA experiments. We found that paring back our augmentation scheme improved performance on the validation set and made little difference on the test set. Our original complex augmentation scheme was tested on a very small subset (1280 samples) of the training and validation sets during the development of our model. It turns out that these results did not generalise to the full set, so lesson learned (we were facing a tight competition deadline too). Our new reduced augmentation scheme consists of random crops, random horizontal flips and random uniform scaling, thus bringing it in line with augmentation schemes commonly used in ImageNet networks, such as He et al.'s ResNets. We have also performed 5 independent runs of each of our newer experiments and given a breakdown of the results.", "Thank you for your review.\n\nWe have clarified our discussion of the Gaussian curve based unsupervised loss scaling that was originally proposed by Laine et al. Beyond stating that the scaling function must ramp up slowly, they don't discuss their choice of scaling function, so we present it as is. That said, we would propose replacing it with confidence thresholding, especially as it is more stable than Gaussian ramp-up in more challenging scenarios. We have explicitly clarified this in section 3.2.", "Thank you for your review. We hope that our revision will address your concerns.\n\n* Thanks for pointing out the shortcomings of our literature review. We have stated that we are focusing on neural networks and we have cited the works that you mentioned. We have briefly mentioned MMD based approaches, although not in detail as we do not have an in-depth familiarity with the mathematics behind them. We have had to condense our literature review somewhat in order to not go too far over the page limit.\n\n* We have made the distinction between semi-supervised learning and domain adaptation more clear, as the distributions of the source and target datasets are indeed different. 
As for having separate source and target mini-batches, we have clarified how this fits in and was inspired by the work of Li et al. (2016). Time permitting, we may be able to run some experiments to quantify the effect this decision has and add the results to Table 1.\n\n* It seems that Ghifary et al. reimplemented Ganin's RevGrad approach. Neither paper had results for all the small image benchmarks that we discuss, so we took results from both papers to get a complete set. We have clarified the footnote.\n\n* We have suppressed the acknowledgements for now.\n", "There are three instances of text in red that indicate items that we would like to correct.\n\nFirstly in the conclusions section on page 9 the word 'check' in red was a 'note to self' to verify the fact that our networks also exhibit strong performance on sample from the source domain. At submission time our experiment logs from our small image benchmarks backed this claim up. At the time we had not managed to verify this claim for the VisDA experiments, hence the 'note to self'. This has since been done and the claim holds. Given our approach to training (simultaneous supervised training on source domain and unsupervised training on target domain) we had a strong reason to believe this claim to be true at the time of submission.\n\nIn tables 1 and 2 on page 6 there are results in red, as they result from averaging less than the 5 independent runs as claimed in the table 1 caption. We have since run more experiments to get the full 5 results. The only substantial change is the 11.11 +/- 0 result for STL -> CIFAR in the 'Mean teacher' row which has now changed to 15.51 +/- 8.7. The rest are within a few tenths of a % of the results shown in the submitted version.\n\nFurthermore, since submission we discovered a bug in our image augmentation code that affects the small colour image benchmarks (STL <-> CIFAR, Syn Digits -> SVHN and SynSigns -> GTSRB). Fixing the bug looks set to yield improved results (so far by looking at the results from the experiments that have completed). We would like to update tables 1 and 2 to reflect this." ]
[ -1, 7, -1, 7, 7, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 5, -1, -1, -1, -1 ]
[ "HJ7P8yYNM", "iclr_2018_rkpoTaxA-", "Syu2DDUmz", "iclr_2018_rkpoTaxA-", "iclr_2018_rkpoTaxA-", "S1HXzycxf", "r1uziOjxf", "S1OIJnigM", "iclr_2018_rkpoTaxA-" ]
iclr_2018_SJi9WOeRb
Gradient Estimators for Implicit Models
Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions. The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models. This paper alleviates the need for such approximations by proposing the \emph{Stein gradient estimator}, which directly estimates the score function of the implicitly defined distribution. The efficacy of the proposed estimator is empirically demonstrated by examples that include meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity.
accepted-poster-papers
The paper presents the Stein gradient estimator, a kernelized direct estimate of the score function for implicitly defined models. The authors demonstrate the estimator for GANs, meta-learning for approx. inference in Bayesian NNs, and approximating gradient-free MCMC. The reviewers found the method interesting and principled. The GAN experiments are somewhat toy-ish as far as I am concerned, so I'd encourage the authors to try out larger-scale models if possible, but otherwise this should be an interesting addition to ICLR.
train
[ "ryjc6__ez", "H1xEAg3lM", "rJ8QfICez", "Sydc7eZMz", "SJlbOODMG", "r1XbwWZMG", "Sy3KkZWfz", "r1zqfN7Zf", "rJ5a7tg-G", "HJS1ru6gG", "r19JQOTgG", "SJCCQvTgz", "SJxdjXBgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "public", "official_reviewer" ]
[ "Post rebuttal phase (see below for original comments)\n================================================================================\nI thank the authors for revising the manuscript. The methods makes sense now, and I think its quite interesting. While I do have some concerns (e.g. choice of eta, batching may not produce a consistent gradient estimator etc.), I think the paper should be accepted. I have revised my score accordingly.\n\nThat said, the presentation (esp in Section 2) needs to be improved. The main problem is that many symbols have been used without being defined. e.g. phi, q_phi, \\pi, and a few more. While the authors might assume that this is obvious, it can be tricky to a reader - esp. someone like me who is not familiar with GANs. In addition, the derivation of the estimator in Section 3 was also sloppy. There are neater ways to derive this using RKHS theory without doing this on a d' dimensional space.\n\nRevised summary: The authors present a method for estimating the gradient of some training objective for generative models used to sample data, such as GANs. The idea is that this can be used in a training procedure. The idea is based off the Stein's identity, for which the authors propose a kernelized solution. The key insight comes from rewriting the variational lower bound so that we are left with having to compute the gradients w.r.t a random variable and then applying Stein's identity. The authors present applications in Bayesian NNs and GANs.\n\n\nSummary\n================================================================\nThe authors present a method for estimating the gradient of some training objective\nfor generative models used to sample data, such as GANs. The idea is that this can be\nused in a training procedure. The idea is based off the Stein's identity, for which the\nauthors propose a kernelized solution. The authors present applications in Bayesian NNs\nand GANs.\n\n\n\n\nDetailed Reviews\n================================================================\n\nMy main concern is what I raised via a comment, for which I have not received a response\nas yet. It seems that you want the gradients w.r.t the parameters phi in (3). But the\nline immediately after claims that you need the gradients w.r.t the domain of a random\nvariable z and the subsequent sections focus on the gradients of the log density with\nrespect to the domain. I am not quite following the connection here.\n\nAlso, it doesn't help that many of the symbols on page 2 which elucidates the set up\nhave not been defined. What are the quantities phi, q, q_phi, epsilon, and pi?\n\nPresentation\n- Bottom row in Figure 1 needs to be labeled. I eventually figured that the colors\n correspond to the figures above, but a reader is easily confused.\n- As someone who is not familiar with BNNs, I found the description in Section 4.2\n inadequate.\n \nSome practical concerns:\n- The fact that we need to construct a kernel matrix is concerning. Have you tried\n batch verstions of these estimator which update the gradients with a few data points?\n- How is the parameter \\eta chosen in practice? Can you comment on the values that you\n used and how it compared to the eigenvalues of the kernel matrix?\n\nMinor\n- What is the purpose behind sections 3.1 and 3.2? They don't seem pertinent to the rest\n of the exposition. Same goes for section 3.5? 
I don't see the authors using the\n gradient estimators for out-of-sample points?\n\nI am giving an indifferent score mostly because I did not follow most of the details.", "In this paper, the authors proposed the Stein gradient estimator, which directly estimates the score function of the implicit distribution. Direct estimation of gradient is crucial in the context of GAN because it could potentially lead to more accurate updates. Motivated by the Stein’s identity, the authors proposed to estimate the gradient term by replacing expectation with the empirical counterpart and then turn the resulting formulation into a regularized regression problem. They also showed that the traditional score matching estimator (Hyvarinen 2005) can be obtained as a special case of their estimator. Moreover, they also showed that their estimator can be obtained by minimizing the kernelized Stein discrepancy (KSD) which has been used in goodness-of-fit test. In the experiments, the proposed method is evaluated on few tasks including Hamiltonian flow with approximate gradients, meta-learning of approximate posterior samplers, and GANs using entropy regularization. \n\nThe novelty of this work consists of an approach based on score matching and Stein’s identity to estimate the gradient directly and the empirical results of the proposed method on meta-learning for approximate inference and entropy regularized GANs. The proposed method is new and technically sound. The authors also demonstrated through several experiments that the proposed technique can be applied in a wide range of applications.\n\nNevertheless, I suspect that the drawback of this method compared to existing ones is computational cost. If it takes significantly longer to compute the gradient using proposed estimator compared to existing methods, the gain in terms of accuracy is questionable. By spending the same amount of time, we may obtain an equally accurate estimate using other methods. For example, the authors claimed in Section 4.3 that the Stein gradient estimator is faster than other methods, but it is not clear as to why this is the case. Hence, the comparison in terms of computational cost should also be included either in the text or in the experiment section.\n\nWhile the proposed Stein gradient estimator is technically interesting, the experimental results do not seem to evident that it significantly outperforms existing techniques. In Section 4.2, the authors only consider four datasets (out of six UCI datasets). Also, in Section 4.3, it is not clear what the point of this experiment is: whether to show that entropy regularization helps or the Stein gradient estimator outperforms other estimators.\n\nSome comments:\n\n- Perhaps, it is better to move Section 3.3 before Section 3.2 to emphasize the main contribution of this work, i.e., using Stein’s identity to derive an estimate of the gradient of the score function.\n- Stein gradient estimator vs KDE: What if the kernel is not translation invariant? \n- In Section 4.3, why did you consider the entropy regularizer? How does it help answer the main hypothesis of this paper?\n- The experiments in Section 4.3 seems to be a bit out of context.\n", "This paper deals with the estimation of the score function, i.e., the derivative of the log likelihood. Some methods were introduced and a new method using Stein identity was proposed. The setup of the trasnductive learning was introduced to add the prediction power to the proposed method. 
The method was applied to several applications.\n\nThis is an interesting approach to estimate the score function for location models in a non-parametric way. I have a couple of minor comments below. \n\n- Stein identity is the formula that holds for the class of ellipsoidal distributions including the Gaussian distribution. I'm not sure the term \"Stein identity\" is appropriate to express the equation (8). \n- Some boundary condition should be assumed to ensure that integration by parts works properly. Describing an explicit boundary condition to guarantee the proper estimation would be nice. ", "(We have revised the paper to make the presentation clearer. Please consider it and we would welcome your feedback.)\n\nThank you for your time for reviewing the paper. We appreciate your comment that the proposed approach is interesting.\n\nWe would like to emphasise that our work is highly novel (as both reviewers 2 and 3 pointed out). \n\n1. The Stein gradient estimator is a novel score function estimator, which, as you mentioned, generalises the score matching estimator. To our knowledge, this is the first **non-parametric** direct estimator: the KDE method, although also non-parametric, is an **indirect** method as it first estimates the density then takes the gradient.\n\n2. We applied the gradient estimation methods to a wide range of novel applications. To our knowledge, before our development, no paper has considered meta-learning tasks for approximate inference. Also, the entropy regularisation idea for GANs is novel, which cannot be done without an efficient gradient estimation method. \n\nIn an on-going work, we have applied the Stein gradient estimator to training implicit generative models, and small-scale experiments have shown promising results.\n\nNow on your comments:\n\n1. Yes, as you pointed out, the original Stein's identity (Stein 1972, 1981) is for Gaussian distributions. However, the identity has been generalised to the more general case. In equation (6) of the revised manuscript, we explicitly write down the integration-by-parts derivations with the boundary condition assumed. Indeed, for distributions with Gaussian-like tails almost any test function will satisfy the boundary condition. \n\n2. If you would like to see a counterexample: if q(x) is Cauchy, then h(x) should be of order at most x^2. But in practice, since the kernel in use often has decaying tails, it is generally the case that the boundary condition is satisfied.\n\nThank you again for reading the feedback and we look forward to hearing from you again.\n\n", "Thank you for the positive review. We will think about how to revise the paper. Now on your further comments:\n\n1. consistency\nWe did not claim the proposed Stein gradient estimator is unbiased. This is because: 1) we used the V-statistics of KSD, 2) the fixed point of the MC approximated objective is not necessarily the fixed point of the KSD. Similar things apply to the KDE and Score matching estimators. However, asymptotic consistency results have been proved for KDE, and for Score matching the proof requires the kernel machine hypothesis set to contain the ground truth. This is not always the case, and our proposal might be preferred here because it is non-parametric. \n\nWe are currently working on establishing similar asymptotic consistency results for the Stein gradient estimator.\n\n2. preference of the RKHS story\nIndeed, if we directly start to talk about kernels then I would rather prefer the derivation of section 3.2 (in the current version). 
However for people (like engineers) who are not familiar with the RKHS theory, the explanation of section 3.1 might be more intuitive, and that's why I decided to include both of them. This is in similar spirit as to derive linear regression equations in many statistics textbooks: we first write down the solutions, and then notice that we can use the kernel trick to address the d' >> K problem.\n\nThank you for your feedback again and do let us know what we can do to improve the paper.", "(We have revised the paper to make the presentation clearer. Please consider it and we would welcome your feedback.)\n\nThank you for your time for reviewing the paper. Again we are sorry that the presentation is not very clear in the first version of the manuscript. We have revised the paper according to your comments and added a brief introduction to Bayesian neural networks in the appendix.\n\nWe believe that our paper is highly novel and contains significant contributions (as reviewer 3 commented). The paper is based on an important observation that an accurate gradient approximation method would be very helpful in many learning tasks that involve fitting an implicit distribution. As the other two reviewers pointed out, the proposed Stein gradient estimator is highly novel, and the experiments consider novel tasks that have not been considered in the literature, e.g. meta-learning for approximate inference, and entropy regularisation methods for GANs. \n\nNow for your detailed comments:\n\n1. notations of phi, pi, etc.\nWe are sorry again for unclear presentation in the first version. In the latest version of the manuscript, we have explicitly defined them and provided a detailed derivation of the entropy gradient in eq (3). Please let us know if it is still unclear.\n\n2. computing kernel matrix.\nIn section 4.3 we performed mini-batch training, and this means we only need to compute the gradient of log q on the mini-batch data. We found that with mini-batch size K=100 (which is typical for deep learning tasks) the computational cost is quite cheap, see the revised paper for a report of running time.\n\n3. choice of \\eta.\nIndeed for kernel methods, \\eta needs to be tuned. However, our empirical observation indicates that for better performance of the Stein approach, small \\eta is often preferred than large ones. Apparently, matrix inversion has numerical issues, so in our tests, we set \\eta to be some small value but large enough to ensure numerical stability.\n\n4. purpose of 3.1 and 3.2 (in the first version).\nSince the Stein gradient estimator is kernel-based, we need to compare to existing kernel-based gradient estimator. Therefore we introduce them in 3.1 and 3.2 (in the first version of the paper).\n\n5. purpose of 3.5 (in the first version).\nOur experiment 4.1 actually needs predictive estimators, since we want the particles of parallel chains to be independent of each other. The estimator derived in section 3.3 (of the first version) introduces correlations between the estimates of the score function at different locations.\n\nAlso in an on-going work, we apply the proposed Stein gradient estimator to training implicit generative models, which also requires predicting the gradient values. We already have some success on MNIST data, and now we are incorporating kernel learning techniques to scale it to massive data.\n\nThank you again for reading the feedback, and we look forward to hearing from you again.", "(We have revised the paper to make the presentation clearer. 
Please consider it and we would welcome your feedback.)\n\nThank you for your time for reviewing the paper. We appreciate your positive comment that the paper contains significant contributions to the community. The diversity of the experimental tasks show that gradient estimation is fundamental to many machine learning tasks, so we believe the proposed estimator is widely applicable as you pointed out.\n\nAlso, we would like to thank you for the suggestions on making the paper clearer. We have re-organised the presentation to emphasise the contribution of the Stein gradient estimator. \n\nNow on your comments:\n\n1. Computation cost\nWe added two paragraphs in the manuscript for further discussions on this. In short, we discussed:\n\nComparisons between kernel methods and other ideas. It is also known that the denoising auto-encoder (DAE), when trained with infinitesimal noise, also provides a score function estimator. However, this requires training the DAE, and depending on the neural network architecture, it can take significantly much more time compared to the kernel-based estimators which often have analytical solutions.\n\nFor the three kernel-based methods mentioned in the paper, both Score and Stein method require inverting a K*K matrix (O(K^3) time). All three methods require computing the kernel matrix (O(K^2 * d) time). However in the BNN and GAN experiments, since d >> K, the cost is dominated by the kernel matrix computation, meaning that all three methods have similar computational costs. Indeed we reported the running times for the GAN experiments which are almost identical. Also adding the entropy regularisation only resulted in 1s/epoch more time compared to vanilla BEGAN, which is actually quite cheap.\n\n2. BNN experiment.\nWe have clearly shown that the Stein approach is significantly better than the other two gradient estimators. SGLD with small step-size is known to work well, and the Stein method works equally well in this case. To our knowledge, this is the first attempt of meta-learning for approximate samplers, and our results demonstrate that this direction is worth investigation. We strongly believe that with a better neural network structure our method can be improved.\n\nRegarding the scale of the experiment: UCI datasets are standard benchmarks for Bayesian neural networks (e.g. see the PBP paper, Hernandez-Lobato and Adams 2015), and for datasets of this scale, we know that point estimates work worse. The size of the network is of the same scale as reported in Fig 5 (left) of (Andrychowicz et al. 2016).\n\n3. the GAN experiment in 4.3\nThe purpose of section 4.3 is to show the application of gradient estimation methods to tasks other than approximate inference (4.1 and 4.2). Our goal here is to show: (i) by adding entropy regulariser it can help address the mode collapse problem, and (ii) the resulting diversity measure also reflects the approximation accuracy of the entropy gradient. In this experiment, we showed that the Stein approach works considerably better.\n\nIndeed our ultimate goal of developing gradient estimation methods is to use them for training implicit generative models, and if successful, it can serve as an alternative to GAN-like approaches. In an on-going work, we already have some success on MNIST data. We are now working on incorporating kernel learning techniques to scale it to massive data.\n\n4. non translation invariant kernel case.\nTo our knowledge, it is rare for KDE methods to use non translation invariant kernels. 
And we have never seen consistency results proved for KDE gradient estimator in this case. But indeed connections between Stein and KDE methods is still a research question when using non translation invariant kernels.\n\nThank you again for reading the feedback and we look forward to hearing from you again.", "I am sorry again for rush derivations, will revise the derivations and provide an official response to your review. But just to quickly explain Q1 and Q3 here:\n\nQ1:\nLet's assume h(x) output a scalar for a moment. Then, stein's identity can be proved using integration by parts:\n\n\\int q(x) [ h(x) * dlogq(x)/dx + dh(x)/dx] dx\n= \\int [h(x) * dq(x)/dx + q(x) * dh(x)/dx ] dx // dlogx/dx = x^{-1}\n= \\int d[h(x)q(x)]/dx dx\n= h(x)q(x)|_{\\partial X} \n= 0 // assumed by the boundary condition\n\nNow write multi-dimension version h(x) = (h_1(x), ..., h_{d'}(x)). Then looking at eq (8), we notice that the ith row of the LHS matrix is actually\n\\int q(x) [ h_i(x) * dlogq(x)/dx + dh_i(x)/dx] dx,\nmeaning that if we assume boundary condition for h(x) (which also implies boundary condition for h_i(x)), then we can prove again that the LHS matrix is actually zero.\n\nQ3:\nHere I assume we can apply the reparameterisation trick for q (see the VAE papers). This says, \nsampling z ~ q_{\\phi}(z | x) is equivalent to 1) sample \\epsilon ~ \\pi(\\epsilon), then 2) compute z = f_{\\phi}(\\epsilon, x). \\pi(\\epsilon) is the distribution of the noise which is usually Gaussian. This means, we can rewrite the expectation in q to expectation in \\pi. Please see section 2.4 in the original VAE paper (Kingma and Welling 2013) for an example math derivation. \n\nThen I wanted to differentiate the variational lower-bound wrt \\phi. Especially, eq. (3) derived the gradient of the entropy term wrt \\phi. I will add in detailed derivations in revision, but for your quick reference please see eq (5-7) in (Roeder et al. 2017) for an example derivation.", "I am still not following the details for 1 and 3 yet. Feel free to elaborate below, but ideally these should have appeared in the paper.\nFor 2, what is pi?", "Thank you for your interest in this paper!\n\nFor your questions:\n1. Yes the original Stein (1981) paper only described the identity for a multivariate Gaussian distribution, and it assumed the test function to output scalars. However, the proof technique only used integration by parts, which, if the boundary condition is assumed, should be able to generalise to general distribution case.\n\nOur twist of the formula comes from the observation that we can pack multiple (scalar output) test functions into a vector. This has also been considered in e.g. Liu et al. (2016). \n\nWhat would you suggest to call (8) other than \"Stein's identity\"?\n\n2. Yes it would be nice to have an example, however in the Gaussian case since the tails decay exponentially, almost all functions satisfy the boundary condition. Maybe it would be helpful to do an example with long-tail distributions.\n\nHope this helps and thanks again!", "Thank you for your time on reviewing this paper!\n\nOn your questions:\n1. Yes the original Stein (1981) paper only described the identity for a multivariate Gaussian distribution, and it assumed the test function to output scalars. 
However, the proof technique only used integration by parts, which, if the boundary condition is assumed, should be able to generalise to the general distribution case.\n\nOur twist of the formula comes from the observation that we can pack multiple (scalar output) test functions into a vector. This has also been considered in e.g. Liu et al. (2016). Will fix the descriptions -- thank you!\n\n2. Sorry for the rush of background introduction. Here q denotes the approximate posterior, and q_{\phi} just explicitly writes the dependency of q on its parameter \phi. \epsilon is the noise variable used in the reparameterisation trick.\n\n3. Again sorry for the rush of the derivation -- will add a full equation in the appendix. \nIn short, the idea is to apply the reparameterisation trick, and notice that the gradient of \phi contains the path gradient (the first term in (3)) and an expectation of the REINFORCE gradient (the second term). Using the log-derivative trick we can show that the second term is zero.\n\nHope this helps!", "This paper deals with the estimation of the score function, i.e., the derivative of the log likelihood. Some methods were introduced and a new method using Stein identity was proposed. The setup of transductive learning was introduced to add prediction power to the proposed method. The method was applied to several applications.\n\nThis is an interesting approach to estimate the score function for location models in a non-parametric way. I have a couple of minor comments below. \n\n- Stein identity is the formula that holds for the class of ellipsoidal distributions, including the Gaussian distribution. I'm not sure the term \"Stein identity\" is appropriate to express the equation (8). \n- Some boundary condition should be assumed to assure that integration by parts works properly. Describing an explicit boundary condition to guarantee the proper estimation would be nice. \n", "1. Can you give a reference for Stein's multivariate identity - paper and theorem? The Stein 1981 paper only seems to discuss the univariate case.\n2. What are the following quantities on page 2: phi, q, q_phi, epsilon, pi?\n3. It seems that you want the gradients w.r.t. the parameters phi in (3). But the line immediately claims that you need the gradients w.r.t. the domain of a random variable z and the subsequent sections focus on the gradients of the log density with respect to the domain. I am not quite following the connection here." ]
[ 7, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJi9WOeRb", "iclr_2018_SJi9WOeRb", "iclr_2018_SJi9WOeRb", "rJ8QfICez", "ryjc6__ez", "ryjc6__ez", "H1xEAg3lM", "rJ5a7tg-G", "r19JQOTgG", "SJCCQvTgz", "SJxdjXBgM", "iclr_2018_SJi9WOeRb", "iclr_2018_SJi9WOeRb" ]
iclr_2018_B1nZ1weCZ
Learning to Multi-Task by Active Sampling
One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large expert networks which require extensive data and computation time for training. In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision. Our work uses active learning principles to achieve multi-task learning by sampling the harder tasks more than the easier ones. We propose three distinct models under our active sampling framework. An adaptive method with extremely competitive multi-tasking performance. A UCB-based meta-learner which casts the problem of picking the next task to train on as a multi-armed bandit problem. A meta-learning method that casts the next-task picking problem as a full Reinforcement Learning problem and uses actor-critic methods for optimizing the multi-tasking performance directly. We demonstrate results in the Atari 2600 domain on seven multi-tasking instances: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance.
accepted-poster-papers
The paper contains an interesting way to do online multi-task learning, by borrowing ideas from active learning and comparing and contrasting a number of approaches on the Arcade Learning Environment. Like the reviewers, I have some concerns about using the target scores and I think more analysis would be needed to see just how robust this method is to the choice/distribution of target scores (the authors mention that things don't break down as long as the scores are "reasonable", but that is neither a particularly empirical nor a precise statement). My inclination is to accept the paper, because of the earnest efforts made by the authors in understanding how DUA4C works. However, I do agree that the paper should have a larger focus on that: basically Section 6 should be expanded, and the experiments should be rerun in such a way that the setup for DUA4C is more "favorable" (in terms of hyper-parameter optimization). If there's any gap between any of the proposed methods and DUA4C, then this would warrant further analysis of course (since it would mean that there's an advantage to using target scores).
test
[ "B1lMs83HG", "r1XoHKtlf", "BkTVPAYeG", "HkFWypcgf", "rJj6KMfVG", "Sy6Q3DTXz", "ryLScD6QM", "rkpwKvT7M", "rybYG3Pgz", "H1kXk_weG", "H1KJtmmez" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "We thank the reviewer of increasing his score. We further address your comments below:\n\n> The paper should have contained a precise description of the DUA4C algorithm --not only experimental results--.\n\nThe paper does contain a precise description of the DUA4C algorithm. The Algorithm 7 on Page 23 is exactly that.\n\n> For instance, when a target score is doubled and becomes unfeasible, how the algorithm behaves ? Why is there no degradation in this case ?\n\nIn our experiments with ALE, we never entered a situation where the estimated target scores became unfeasible. While we do not have empirical evidence to no degradation in this case, we do believe that degradation will still not occur if the reward function as shown in Eq (2) is used. This is because the second half of the reward function tries to make sure that the performance on worst 3 games is good. As a result, once an unfeasible target is set for one of the games, the agent will switch to other games to make sure that not just the worst but the worst 3 games have good performance. \n\nAs a side note, for feasible targets, we have seen that the algorithms we have proposed are robust to the higher than usual target scores as seen in the second half of Appendix G.\nAs we have stated in the paper, we believe that our work is an important first step in the direction of achieving Multi-Tasking agents using Active Learning principles. We agree with the reviewer that evidence of no degradation in case of unfeasible targets is an interesting addition to the paper and have left the empirical verification of the same to future work.", "The paper present online algorithms for learning multiple sequential problems. The main contribution is to introduce active learning principles for sampling the sequential tasks in an online algorithm. Experimental results are given on different multi-task instances. The contributions are interesting and experimental results seem promising. But the paper is difficult to read due to many different ideas and because some algorithms and many important explanations must be found in the Appendix (ten sections in the Appendix and 28 pages). Also, most of the paper is devoted to the study of algorithms for which the expected target scores are known. This is a very strong assumption. In my opinion, the authors should have put the focus on the DU4AC algorithm which get rids of this assumption. Therefore, I am not convinced that the paper is ready for publication at ICLR'18.\n* Differences between BA3C and other algorithms are said to be a consequence of the probability distribution over tasks. The gap is so large that I am not convinced on the fairness of the comparison. For instance, BA3C (Algorithm 2 in Appendix C) does not have the knowledge of the target scores while others heavily rely on this knowledge.\n* I do not see how the single output layer is defined.\n* As said in the general comments, in my opinion Section 6 should be developped and more experiments should be done with the DUA4C algorithm.\n* Section 7.1. It is not clear why degradation does not happen. It seems to be only an experimental fact.", "\nThe authors show empirically that formulating multitask RL itself as an active learning and ultimately as an RL problem can be very fruitful. They design and explore several approaches to the active learning (or active sampling) problem, from a basic \nchange to the distribution to UCB to feature-based neural-network based RL. The domain is video games. 
All proposed approaches beat the uniform sampling baselines and the more sophisticated approaches do better in the scenarios with more tasks (one multitask problem had 21 tasks).\n\n\nPros:\n\n- very promising results with an interesting active learning approach to multitask RL\n\n- a number of approaches developed for the basic idea\n\n- a variety of experiments, on challenging multiple task problems (up to 21 tasks/games)\n\n- paper is overall well written/clear\n\nCons:\n\n- Comparison only to a very basic baseline (i.e. uniform sampling)\nCouldn't comparisons be made, in some way, to other multitask work?\n\n\n\nAdditional comments:\n\n- The assumption of the availability of a target score goes against\nthe motivation that one need not learn individual networks .. authors\nsay instead one can use 'published' scores, but that only assumes\nsomeone else has done the work (and furthermore, published it!).\n\nThe authors do have a section on eliminating the need by doubling an\nestimate for each task) which makes this work more acceptable (shown\nfor 6 tasks or MT1, compared to baseline uniform sampling).\n\nClearly there is more to be done here for a future direction (could be\nmentioned in future work section).\n\n- The averaging metrics (geometric, harmonic vs arithmetic, whether\n or not to clip max score achieved) are somewhat interesting, but in\n the main paper, I think they are only used in section 6 (seems like\n a waste of space). Consider moving some of the results, on showing\n drawbacks of arithmetic mean with no clipping (table 5 in appendix E), from the appendix to\n the main paper.\n\n\n- The can be several benefits to multitask learning, in particular\n time and/or space savings in learning new tasks via learning more\n general features. Sections 7.2 and 7.3 on specificity/generality of\n features were interesting.\n\n\n\n--> Can the authors show that a trained network (via their multitask\n approached) learns significantly faster on a brand new game\n (that's similar to games already trained on), compared to learning from\n scratch?\n\n--> How does the performance improve/degrade (or the variance), on the\n same set of tasks, if the different multitask instances (MT_i)\n formed a supersets hierarchy, ie if MT_2 contained all the\n tasks/games in MT_1, could training on MT_2 help average\n performance on the games in MT_1 ? Could go either way since the network\n has to allocate resources to learn other games too. But is there a pattern?\n\n\n\n- 'Figure 7.2' in section 7.2 refers to Figure 5.\n\n\n- Can you motivate/discuss better why not providing the identity of a\n game as an input is an advantage? Why not explore both\n possibilities? what are the pros/cons? (section 3)\n\n\n\n\n", "In this paper active learning meets a challenging multitask domain: reinforcement learning in diverse Atari 2600 games. A state of the art deep reinforcement learning algorithm (A3C) is used together with three active learning strategies to master multitask problem sets of increasing size, far beyond previously reported works.\n\nAlthough the choice of problem domain is particular to Atari and reinforcement learning, the empirical observations, especially the difficulty of learning many different policies together, go far beyond the problem instantiations in this paper. Naive multitask learning with deep neural networks fails in many practical cases, as covered in the paper. 
The one concern I have is perhaps the choice of distinct of Atari games to multitask learn may be almost adversarial, since naive multitask learning struggles in this case; but in practice, the observed interference can appear even with less visually diverse inputs.\n\nAlthough performance is still reduced compared to single task learning in some cases, this paper delivers an important reference point for future work towards achieving generalist agents, which master diverse tasks and represent complementary behaviours compactly at scale.\n\nI wonder how efficient the approach would be on DM lab tasks, which have much more similar visual inputs, but optimal behaviours are still distinct.\n", "I will slightly increase my score and I will not argue against the paper because the paper contains interesting material. Nevertheless, in my opinion, the active strategy heavily relies on the knowledge of target scores. The paper should have contained a precise description of the DUA4C algorithm --not only experimental results--. For instance, when a target score is doubled and becomes unfeasible, how the algorithm behaves ? Why is there no degradation in this case ?", "Thank you for the reviews. We address your comments below:\n\n> In my opinion, the authors should have put the focus on the DU4AC algorithm which get rids of this assumption.\nWe believe that the Doubling Paradigm is an important part of the paper and thus, as requested by the reviewer, we have added additional results for the DUA4C agent. \n\nApart from MT1, we now show results on another 6 task instance (MT2), one 8 task instance (MT4) and one 12 task instance (MT5). \nIn all the cases, the DUA4C agent outperforms the BA3C agent and is able to perform well on all the MTIs. \nWe are still running the DUA4C agent on the 21 task instance and will be able to add the results on the same in the camera-ready version of the paper. These results have increased the quality of our work and we hope the reviewer raises his score in the light of these new experiments.\n\n\n> Differences between BA3C and other algorithms are said to be a consequence of the probability distribution over tasks. The gap is so large that I am not convinced on the fairness of the comparison. For instance, BA3C (Algorithm 2 in Appendix C) does not have the knowledge of the target scores while others heavily rely on this knowledge.\n\nAs stated in Section 4.1, we do believe that the lackluster performance of BA3C agent is due to the uniform sampling of the tasks. The DUA4C agent is not provided with the baselines either and it is nevertheless able to beat the BA3C agent by a margin on all the MTIs. The experiments with DUA4C verify our claim that it is indeed the probability distribution over the tasks that causes the huge improvement in our agents.\n\n\n> I do not see how the single output layer is defined.\n\nAs stated in Section 3, the single output layer is a superset of all the actions in different tasks. Take an MTI with Pong and Breakout. Pong has valid actions as up, down, and no-op(do nothing). Breakout has valid actions as left, right and no-op. The single output layer will have valid actions as up, down, left, right and no-op. While playing an episode of Pong, if the agent chooses left or right(non-valid actions for Pong), it would be treated as a no-op action. 
\nIn all our experiments, since we deal with Atari Games, we set the output layer as all the possible 18 actions in ALE with non-valid actions as a no-op.\nYou can now see how not providing the identity of the task makes learning hard. The agent on seeing a frame is supposed to figure out what is the valid action subset first and thus, learning is harder.\n\n\n> As said in the general comments, in my opinion Section 6 should be developed and more experiments should be done with the DUA4C algorithm.\n\nWe have hopefully addressed the issue of developing DUA4C further with the new experiments.\n\n\n> Section 7.1. It is not clear why degradation does not happen. It seems to be only an experimental fact.\n\nWhile we do agree that we haven’t provided with a theoretical explanation of why degradation doesn’t happen, Section 7.1 does provide with an intuition for why the algorithm is able to prevent catastrophic forgetting. We reiterate: Catastrophic Forgetting in our agents is avoided due to the way in which we sample the tasks. The probability of a task getting sampled in our agents is higher for those tasks on which the agent is currently bad at. Once the agent becomes good on a task, if degradation has happened on a task which was previously good, the agent will switch back to the other task and will thus ensure that it trains more on the degraded task.\n", "Thanks for reviewing the paper, the comments and questions! We believe addressing these questions will increase the quality of the work, and we will certainly do that.\n\n> Comparison only to a very basic baseline (i.e. uniform sampling). Couldn't comparisons be made, in some way, to other multitask work?\n\nWe do make a direct comparison to another multi-task work. As stated in Section 5, the tasks in MT4 (8 task instance) are exactly the same as those used in Actor Mimic Networks (Parisotto et al., 2015). AMNs achieve a q_am of 0.79 while all of our agents achieve a q_am greater than 0.9.\n\n\n> The assumption … future work section).\n\nBefore we go ahead, we would like to reiterate that we see the baselines as target scores that we want to achieve on the tasks. As we have shown in Appendix G, it’s not necessary to take them from published works, a human being could try solving a task and set his score as the target as well. Our algorithm is robust to target scores as well as seen in the same Appendix, i.e you could choose (reasonably) bigger targets as well.\n\nWe however also believe that the Doubling Paradigm is an important part of the paper and thus, as requested by the reviewer, we have added additional results for the DUA4C agent. Apart from MT1, we now show results on another 6 task instance (MT2), one 8 task instance (MT4) and one 12 task instance (MT5). We are still running the DUA4C agent on the 21 task instance and will be able to add the results on the same in the camera-ready version of the paper. In all the cases, the DUA4C agent outperforms the BA3C agent and is able to perform well on all the MTIs. These results have increased the quality of our work and we thank the reviewer again for raising these requests.\n\n\n> Can the authors show that a trained network (via their multitask approached) learns significantly faster on a brand new game (that's similar to games already trained on), compared to learning from scratch?\n\nThe work we have presented focuses specifically on Multi-task learning only and not transfer learning and thus, we didn’t show results on transfer learning. 
While we haven’t shown explicit results on transfer learning, we STRONGLY believe that it will indeed be the case that the MTAs will learn faster on a new similar game. This is attributed to the fact that all the agents in our work learn task agnostic features (as shown in Section 7) and having learned these features beforehand will speed up training on a similar new task. All in all, we are currently designating transfer learning as future work.\n\n\n> How does the performance improve/degrade (or the variance), on the same set of tasks, if the different multitask instances (MT_i) formed a supersets hierarchy, ie if MT_2 contained all the tasks/games in MT_1, could training on MT_2 help average performance on the games in MT_1 ? Could go either way since the network has to allocate resources to learn other games too. But is there a pattern?\n\nWe do have a supersets hierarchy in the MTIs we’ve chosen. Note that MT1 is a subset of MT5. We see that it is indeed the case that the network has allocated resources to learn other games too. For an A5C agent trained on MT5, the q_am for just the MT1 tasks is 0.697. For the A5C agent trained on MT1, the q_am is 0.799. Please note that the size of the network is same in both the cases. Clearly, the network has allocated some of its representational power to learn the other games. We, however, do not claim this to be a pattern and this forms an interesting direction for further work. We thank the reviewer for this question. This provides further insight into how the network is allocating its resources for multi-tasking.\n\n\n> 'Figure 7.2' in section 7.2 refers to Figure 5.\n\nWe apologize for the typo. We’ve fixed it in the revision.\n\n\n> Can you motivate/discuss better why not providing the identity of a game as an input is an advantage? Why not explore both possibilities? what are the pros/cons? (section 3)\n\nNot providing the identity of the game is clearly not an advantage. This is because the agent now has to figure out the subset of actions which make sense for the task (if the actions not valid for a task are chosen, it is treated as a no-op action). It makes the setup harder to solve. The motivation behind doing this is that in real-world problems, the identity of the tasks might not be provided. We point out that in spite of not providing the identity of the tasks, the agents perform quite well on the MTIs.", "Thank you for the positive reviews. We address your comments below:\n\n> The choice of distinct of Atari games to multitask learn may be almost adversarial.\n\nWe agree with the reviewer that the choice of tasks in our paper could be adversarial because the state spaces are very different visually. This was intentional (we point it out in the caption of Fig 1) with the purpose of raising the standard of the results and strengthens the work presented because, in spite of the state spaces being so visually different, the agents are able to perform very well on all the tasks as the results show.\n\n\n> How efficient would the algorithm be for visually similar tasks?\n\nAs we claim in the introduction to Section 7, an ideal MTA performs well due to learning task-agnostic abstract features which help it generalize across multiple tasks. In the case where tasks have visually similar state spaces, finding such features is clearly easier. We thus believe solving visually similar tasks are easier. 
\nApplying the framework to environments apart from Atari has currently been left as future work because of time and computational constraints.", "Thank you for the confirmation.", "We thank the reader for pointing out the error.\n\nThere indeed is a discrepancy in the STA3C scores on Page 7 and 8. We checked our results again and we found that the STA3C scores in Figure 3 on Page 7 are correct (they can be verified in Appendix J of the paper). We'll make the corrections in Figure 4 and update the paper once we're allowed to.\n\nAs explained in the paper, the STA3C scores are of no importance during the learning of the DUA4C agent. Thus, we checked our results for the final values of the performance metrics (since they do depend on the STA3C agent scores) in case they needed to be changed as well. We found that the performance metrics (p_am, q_am, etc.) were all calculated using the correct baseline scores and thus they are ALL CORRECT. That is the results reports in Table 2 are correct.\n\nWhile we agree that DUA4C does better on Demon Attack only (and is nearly equal to STA3C on Seaquest), the purpose of a Multi-tasking network is to do reasonably well on ALL the tasks, which might come at the cost of not doing better than the baseline on some tasks. We had included this discussion in the paper in the performance metrics section where we motivated the new performance metric q_am.\n\nFinally, no we didn't use a different set of parameters for the A3C scores. We're not sure of how the error crept only into the plot. We apologize again for the mistake in Figure 4.", "There seems to be a difference in the baseline scores reported between page 7 and page 8 for A3C/STA3C scores. If I use the baseline scores from page 7 and compare them against DUA4C, I think DUA4C does better in Demon Attack only?\n\nHave you used a different set of parameters to compute the A3C scores for the DUA4C experiments?" ]
[ -1, 5, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 3, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJj6KMfVG", "iclr_2018_B1nZ1weCZ", "iclr_2018_B1nZ1weCZ", "iclr_2018_B1nZ1weCZ", "Sy6Q3DTXz", "r1XoHKtlf", "BkTVPAYeG", "HkFWypcgf", "H1kXk_weG", "H1KJtmmez", "iclr_2018_B1nZ1weCZ" ]
iclr_2018_rkHywl-A-
Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation that is competitive with direct imitation learning algorithms. Additionally, we show that AIRL is able to recover portable reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training.
accepted-poster-papers
AIRL is presented as a scalable inverse reinforcement learning algorithm. A key idea is to produce "disentangled rewards", which are invariant to changing dynamics; this is done by having the rewards depend only on the current state. There are some similarities with GAIL and the authors argue that this is effectively a concrete implementation of GAN-GCL that actually works. The results look promising to me and the portability aspect is neat and useful! In general, the reviewers found this paper and its results interesting and I think the rebuttal addressed many of the concerns. I am happy that the reproducibility report is positive, which helped me put this otherwise potentially borderline paper into the 'accept' bucket.
train
[ "HyKxU-erf", "S1Nj--xSG", "BJ-3TanEM", "SJfeNePVM", "Hyn6kL_xG", "ryZzenclz", "ryyF8NyZM", "BJz9VDTQG", "SJAvEPa7f", "SJnzNvaXz", "B1c_e2Fzf", "S1HsAfGzf", "rkGMGe-bG", "r17fiybWM", "SJbt4SDez", "HJOGkoD1z", "H11JStDyG", "Skxih5LyM" ]
[ "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public", "public", "public", "public", "author", "public" ]
[ "If the ground truth reward depends on both states and actions, the algorithm cannot represent the true reward and thus the performance of the policy will not match that of the experts (we have included new experiments in Section 7.3 for this case). The results will likely depend on the task - in our experiments the performance was not much worse than the experts, but the only action-dependent term in the reward for OpenAI Gym locomotion tasks is a control penalty for actions with large magnitude.\n\nHowever, we also argue that no IRL algorithm which operates over arbitrary reward function classes will be able to recover ground truth rewards in this case, since we cannot avoid reward shaping (section 5). In order to remove shaping, we need to manually restrict the class of reward functions such that shaping is not possible. An alternative approach is to adopt a multi-task IRL paradigm to generalize across different dynamics.\n\nThe state definition for most OpenAI Gym locomotion tasks (including the ones used in this paper) contains velocities - thus we can still represent the ground truth reward. ", "Section 3.1 of Finn 2016 (http://arxiv.org/abs/1611.03852) is incorrect in regard to learning the partition function on the bias of the last sigmoid layer. We can't uniquely separate the bias term from the rest of the function approximator. For example, the cost function approximator c_\\theta(tau) could incorporate the log Z term and we could set the learned bias term to 0. Thus, there is no point in explicitly adding a separate learned bias term to capture the partition function as in Finn16 - we simply learn a function f(\\tau) which implicitly learns the partition function, although we cannot extract it.", "Under the assumption of MaxEnt IRL, demonstration can be seen as trajectories drawn from (1/Z)exp(-c(\\tau)), however, in eq(2) the partition function disappears(compared to Sec 3.1 in http://arxiv.org/abs/1611.03852). Correct me if I am wrong, as the authors propose f*(\\tau) = R*(\\tau) + const(R is an entropy regularized reward, that's ok), do they mean by the logZ item(if the partition function exists in eq(2)) can be a constant? And refer to Appendix A.4, f* == log\\pi_E == A*(s,a), the second part of this equation comes from soft Q-learning because of the entropy regularized reward R, the the first part holds because it eliminates the partition function Z, so I wonder if it still holds even when the partition function is added? \n\nAnd there is another minor typo(?) The inline equation under eq(2): R(\\tau) = log(1-D(\\tau)) - log(D(\\tau)) -> R(\\tau) = log(D(\\tau)) - log(1-D(\\tau)), according to Appendix A.3 ", "Thank you for your reply and the revision of the paper. I briefly gone through the revised paper. My concerns have been addressed (but I should say that I have not verified the math closely).", "The paper provides an approach to learning reward functions in high-dimensional domains, showing that it performs comparably to other recent approaches to this problem in the imitation-learning setting. It also argues that a key property to learning generalizable reward functions is for them to depend on state, but not state-action or state-action-state. It uses this property to produce \"disentangled rewards\", demonstrating that they transfer well to the same task under different transition dynamics.\n\nThe need for \"state-only\" rewards is a useful insight and is covered fairly well in the paper. 
The need for an \"adversarial\" approach is not justified as fully, but perhaps is a consequence of recent work. The experiments are thorough, although the connection to the motivation in the abstract (wanting to avoid reward engineering) is weak.\n\nDetailed feedback:\n\n\"deployed in at test-time on environments\" -> \"deployed at test time in environments\"?\n\n\"which can effectively recover disentangle the goals\" -> \"which can effectively disentangle the goals\"?\n\n\"it allows for sub-optimality in demonstrations, and removes ambiguity between demonstrations and the expert policy\": I am not certain what is being described here and it doesn't appear to come up again in the paper. Perhaps remove it?\n\n\"r high-dimensional (Finn et al., 2016b) Wulfmeier\" -> \"r high-dimensional (Finn et al., 2016b). Wulfmeier\".\n\n\"also consider learning cost function with\" -> \"also consider learning cost functions with\"?\n\n\"o learn nonlinear cost function have\" -> \"o learn nonlinear cost functions have\".\n\n\" are not robust the environment changes\" -> \" are not robust to environment changes\"?\n\n\"We present a short proof sketch\": It is unclear to me what is being proven here. Please state the theorem.\n\n\"In the method presented in Section 4, we cannot learn a state-only reward function\": I'm not seeing that. Or, maybe I'm confused between rewards depending on s vs. s,a vs. s,a,s'. Again, an explicit theorem statement might remove some confusion here.\n\n\"AIRLperforms\" -> \"AIRL performs\".\n\nFigure 2: The blue and green colors look very similar to me. I'd recommend reordering the legend to match the order of the lines (random on the bottom) to make it easier to interpret.\n\n\"must reach to goal\" -> \"must reach the goal\"?\n\n\"pointmass\" -> \"point mass\". (Multiple times.)\n\nAmin, Jiang, and Singh's work on efficiently learning a transferable reward function seems relevant here. (Although, it might not be published yet: https://arxiv.org/pdf/1705.05427.pdf.)\n\nPerhaps the final experiment should have included state-only runs. I'm guessing that they didn't work out too well, but it would still be good to know how they compare.\n", "SUMMARY:\nThis paper considers the Inverse Reinforcement Learning (IRL) problem, and particularly suggests a method that obtains a reward function that is robust to the change of dynamics of the MDP.\n\nIt starts from formulating the problem within the MaxEnt IRL framework of Ziebart et al. (2008). The challenge of MaxEnt IRL is the computation of a partition function. Guided Cost Learning (GCL) of Finn et al. (2016b) is an approximation of MaxEnt IRL that uses an adaptive importance sampler to estimate the partition function. This can be shown to be a form of GAN, obtained by using a specific discriminator [Finn et al. (2016a)].\n\nIf the discriminator directly works with trajectories tau, the result would be GAN-GCL. But this leads to high variance estimates, so the paper suggests using a single state-action formulation, in which the discriminator f_theta(s,a) is a function of (s,a) instead of the trajectory. The optimal solution of this discriminator is to have f(s,a) = A(s,a) — the advantage function.\nThe paper, however, argues that the advantage function is “entangled” with the dynamics, and this is undesirable. 
So it modified the discriminator to learn a function that is a combination of two terms, one only depends on state-action and the other depends on state, and has the form of shaped reward transformation.\n\n\nEVALUATION:\n\nThis is an interesting paper with good empirical results. As I am not very familiar with the work of Finn et al. (2016a) and Finn et al. (2016b), I have not verified the detail of derivations of this new paper very closely. That being said, I have some comments and questions:\n\n\n* The MaxEnt IRL formulation of this work, which assumes that p_theta(tau) is proportional to exp( r_theta (tau) ), comes from\n[Ziebart et al., 2008] and assumes a deterministic dynamics. Ziebart’s PhD dissertation [Ziebart, 2010] or the following paper show that the formulation is different for stochastic dynamics:\n\nZiebart, Bagnell, Dey, “The Principle of Maximum Causal Entropy for Estimating Interacting Processes,” IEEE Trans. on IT, 2013.\n\nIs it still a reasonable thing to develop based on this earlier, an inaccurate, formulation?\n\n\n* I am not convinced about the argument of Appendix C that shows that AIRL recovers reward up to constants.\nIt is suggested that since the only items on both sides of the equation on top of p. 13 depend on s’ are h* and V, they should be equal.\nThis would be true if s’ could be chosen arbitrararily. But s’ would be uniquely determined by s for a deterministic dynamics. In that case, this conclusion is not obvious anymore.\n\nConsider the state space to be integers 0, 1, 2, 3, … .\nSuppose the dynamics is that whenever we are at state s (which is an integer), at the next time step the state decreases toward 1, that is s’ = phi(s,a) = s - 1; unless s = 0, which we just stay at s’ = s = 0. This is independent of actions.\nAlso define r(s) = 1/s for s>=1 and r(0) = 0.\nSuppose the discount factor is gamma = 1 (note that in Appendix B.1, the undiscounted case is studied, so I assume gamma = 1 is acceptable).\n\nWith this choices, the value function V(s) = 1/s + 1/(s-1) + … + 1/1 = H_s, i.e., the Harmonic function.\nThe advantage function is zero. So we can choose g*(s) = 0, and h*(s) = h*(s’) = 1.\nThis is in contrast to the conclusion that h*(s’) = V(s’) + c, which would be H_s + c, and g*(s) = r(s) = 1/s.\n(In fact, nothing is special about this choice of reward and dynamics.)\n\nAm I missing something obvious here?\n\nAlso please discuss how ergodicity leads to the conclusion that spaces of s’ and s are identical. What does “space of s” mean? Do you mean the support of s? Please make the argument more rigorous.\n\n\n* Please make the argument of Section 5.1 more rigorous.", "This paper revisits the generative adversarial network guided cost learning (GAN-GCL) algorithm presented last year. The authors argue learning rewards from sampled trajectories has a high variance. Instead, they propose to learn a generative model wherein actions are sampled as a function of states. The same energy model is used for sampling actions: the probability of an action is proportional to the exponential of its reward. To avoid overfitting the expert's demonstrations (by mimicking the actions directly instead of learning a reward that can be generalized to different dynamics), the authors propose to learn rewards that depend only on states, and not on actions. Also, the proposed reward function includes a shaping term, in order to cover all possible transformations of the reward function that could have been behind the expert's actions. 
The authors argue formally that this is necessary to disentangle the reward function from the dynamics. Th paper also demonstrates this argument empirically (e.g. Figure 1).\n\nThis paper is well-written and technically sound. The empirical evaluations seem to be supporting the main claims of the paper. The paper lacks a little bit in novelty since it is basically a variante of GAN-GCL, but it makes it up with the inclusion of a shaping term in the rewards and with the related formal arguments. The empirical evaluations could also be strengthened with experiments in higher-dimensional systems (like video games). \n\n\"Under maximum entropy IRL, we assume the demonstrations are drawn from an optimal policy p(\\tau) \\propto exp(r(tau))\" This is not an assumption, it's the form of the solution we get by maximizing the entropy (for regularization).\n", "Thank you for the constructive feedback. We’ve incorporated your comments and clarified certain points of the paper below. Please let us know if there are other additional issues which need clarification.\n\n> The MaxEnt IRL formulation of this work, which assumes that p_theta(tau) is proportional to exp( r_theta (tau) ), comes from\n[Ziebart et al., 2008] and assumes a deterministic dynamics. Ziebart’s PhD dissertation [Ziebart, 2010] or the following paper show that the formulation is different for stochastic dynamics.\nIs it still a reasonable thing to develop based on this earlier, an inaccurate, formulation?\n\nWe have updated the background (section 3) and appendix (section A) to use the maximum causal entropy framework rather than the earlier maximum entropy framework of [Ziebart 08]. Our algorithm requires no changes since the causal entropy framework more accurately describes what we were doing in the first place (our old derivations were valid in the deterministic case, where MaxEnt and MaxCausalEnt are identical, but in the stochastic case, our approach in fact matches MaxCausalEnt).\n\n> * I am not convinced about the argument of Appendix C that shows that AIRL recovers reward up to constants.\nAlso please discuss how ergodicity leads to the conclusion that spaces of s’ and s are identical. What does “space of s” mean? Do you mean the support of s? Please make the argument more rigorous.\n* Please make the argument of Section 5.1 more rigorous.\n\nWe’ve provided more formal proofs for Section 5 and the appendix. In order to fix the statements, we’ve changed the condition on the dynamics - a major component is that it requires that each state be reachable from >1 other state within one step. Ergodicity is neither a sufficient nor necessary condition on the dynamics, but special cases such as an ergodic MDP with self-transitions at each state satisfies the new condition (though the minimum necessary conditions are less restrictive).\n", "Thank you for the detailed feedback. We have included all of the typo corrections and clarifications, as well as included state-only runs in the imitation learning experiments (Section 7.3). As detailed below, we believe that we have addressed all of the issues raised in your review, but we would appreciate any further feedback you might offer.\n\n> The need for an \"adversarial\" approach is not justified as fully, but perhaps is a consequence of recent work.\n\nAdversarial approaches are an inherent consequence of using sampling-based methods for training energy-based models, and we’ve edited Section 2, paragraph 2 to make this more clear. 
There is in fact no other (known) choice for doing this: any method that does maxent IRL and generates samples (rather than assuming known dynamics) must be adversarial in nature, as shown by Finn16a. Traditional methods like tabular MaxEnt IRL [Ziebart 08] have an adversarial nature as they must alternate between an inner-loop RL problem (the sampler) and updating the reward function (the discriminator).\n\n> Although the connection to the motivation in the abstract (wanting to avoid reward engineering) is weak.\n\nWe’ve slightly modified the paragraph before section 7.1 to make this connection more clear. We use environments where a reward function is available for the purpose of easily collecting demonstrations (otherwise we would need to resort to motion capture or teleoperation). However the experimental setup after demo collection is exactly the same as one would encounter while using IRL when a ground truth reward is not available.\n\n> Amin, Jiang, and Singh's work on efficiently learning a transferable reward function seems relevant here. (Although, it might not be published yet: https://arxiv.org/pdf/1705.05427.pdf.)\n\nAmin, Jian & Singh’s work is indeed relevant and we have also included it in the related work section.\n\n> Perhaps the final experiment should have included state-only runs. I'm guessing that they didn't work out too well, but it would still be good to know how they compare.\n\nWe’ve included these in the experiments. State-only runs perform slightly worse as expected, since the true reward has torque penalty terms which depend on the action, and cannot be captured by the model. However the performance isn’t so bad that the agent fails to solve the task.\n", "Thank you for the thoughtful feedback. We’ve incorporated the suggestions to the best of our ability, and clarified portions of the paper, as described below.\n\n> \"Under maximum entropy IRL, we assume the demonstrations are drawn from an optimal policy p(\\tau) \\propto exp(r(tau))\" This is not an assumption, it's the form of the solution we get by maximizing the entropy (for regularization).\n\nWe’ve modified Section 3 to remove this ambiguity (note that we’ve also modified the section to use the causal entropy framework as requested by another reviewer). This statement was referring to the fact that we are assuming the expert is drawing samples from the distribution p(tau), not the fact that p(tau) \\propto exp(r(tau)).\n\n> \"The paper lacks a little bit in novelty since it is basically a variant of GAN-GCL, but it makes it up with the inclusion of a shaping term in the rewards and with the related formal arguments.\"\n\nIn regard to GAN-GCL, we would note that, although the method draws heavily on the theory in this workshop paper, it is unpublished and does not describe an implementation of any actual algorithm -- the GAN-GCL paper simply describes a theoretical connection between GANs and IRL. Our implementation of the algorithm that is closest to the one suggested by the theory in the GAN-GCL workshop paper does not perform very well in practice (Section 7.3).\n", "In figure 1, the authors show an example of state only reward recovers the groundtruth. However, the groundtruth reward here is a function of only state. What if the groundtruth reward is a function of both the state and the action, can we still apply this method? \n\nIn the continuous control experiment 2, if the ant achieves reward by moving forward. 
It is obvious that the reward depends on both s and s', I'm confused how a reward solely depending on s is able to recover the groundtruth.", "We reproduce the results from the submitted ICLR paper: \"Learning Robust Rewards with Adversarial Inverse Reinforcement Learning\" where we reproduce the previous state of the art results, namely the Generative Adversarial Network - Guided Cost Learning (Finn et. al 2016), the Generative Adversarial Imitation Learning (Ho & Ermon 2016) and make a comparison to the non-robust version of the AIRL algorithm (a previous iteration of the robust version of the AIRL algorithm) methods on the pendulum, custom designed ant and pointmass openAI gym environments. \n\nThis paper introduces an inverse reinforcement learning technique that utilizes a Generative Adversarial Network (GAN) (Goodfellow et. al 2014) to generate its rewards called Adversarial Inverse Reinforcement Learning (AIRL) (Anonymous, 2018). The algorithm updates the discriminator by training with expert trajectories, and then updating the policy in an attempt to confuse the discriminator. The paper makes two main claims about the functionality of the algorithm: 1) AIRL can learn robust rewards that make it the optimal algorithm for transfer learning tasks. 2) That AIRL is scalable up to high-dimensional tasks. This paper goes on to further claim that AIRL can perform competitively in imitation learning environments when compared to previous state-of-the-art algorithms such as generative adversarial imitation learning (GAIL) (Ho & Ermon 2016), but when it comes to transfer learning tasks, AIRL performs significantly better compared to those same algorithms.\n\nWhile we could not implement the robust AIRL algorithm, we made the effort to do as many experiments with the baseline algorithms. We believe that our inability to reproduce the full robust AIRL algorithm is not a statement on the reproducibility of this paper, but simply due to our lack of technical expertise. The results of these methods are as follows: \n\nTransfer Learning Experiments:\n\n Method & Pointmass-Maze & Disabled Ant \n GAN-GCL & -61.8 & -79.201\n AIRL & -51.2 & -92.578\n GAIL & -40.2 & -70.5668\n TRPO (Expert Policy) & -17.1 & 150.7\n\nImitation Learning Experiments:\n\n Method & Pendulum & Ant \n GAN-GCL & -242.5 & 467.7\n AIRL & -210.7 & 983.7\n GAIL & -198.2 & 1501.3\n TRPO (Expert Policy) & -128.4 & 2000.6 \n\nAs can be seen from these tables, our results and those found in the paper seem to be fairly similar. Our lower results with the AIRL algorithm is to be expected as we implemented the non-robust version whereas the paper shows results for the robust version. The variance in our results could be due to the unspecified n_iteration parameter, where higher/lower values of the n_iteration could contribute to higher/lower scores respectively. \n\nOur choice of hyperparameters were effectively those found in the report and the default hyperparameters found in the code provided to us by the authors. We increased the number of iterations to 1500 for the ant and pointmaze tasks, and increased it to 500 for the pendulum task to increase the chance of convergence. \n\nOur choice of selecting these environments was to test the claims that AIRL can not only be effective in transfer learning tasks, but to also scale up to high dimensional environments. Our empirical observations suggested that the ant/disabled ant task provided the highest dimensional environment for which we could test the scalability. 
Running on 7th Generation Intel® Core™ i7-7700HQ Quad Core Processor at 2.6GHz took 1 hour and 30 minutes to run 1000 iterations of the non-robust AIRL algorithm on the disabled ant task. \n\nAs far as the level of reproducibility is concerned. Having been provided the code from the authors went a long way to helping us reproduce the experiment. Within the code, the custom environments and the baseline algorithms were all provided which helped ensure that we were conducting the experiments in a fairly similar environment. Even though we could not reproduce the robust version of the AIRL algorithm, the mathematical foundations for the algorithm, along with pseudocode of the algorithm itself, is stated very clearly in the paper. As a result, our conjecture is that anyone who is more experienced in implementing code in RLLAB and openAIgym should have relatively little difficulty in implementing the robust version of the AIRL algorithm, but we do not have the expertise to state this for fact. ", "Yes, you missed an important fact for f*(s,a) = log_\\pi_E(a|s) = A*(s,a), this equality holds only under the policy are updated with a max entropy regularization(you can refer to this article: http://arxiv.org/abs/1702.08165). And under this context, Q and V are not just the the reward sum but with an extra entropy item. So it may be not correct to use such a simple example. In addition, since the methods in this paper are all with the max entropy item, it is ok for the authors to use this form of result.", "I think \"Under maximum entropy IRL, we assume the demonstrations are drawn from an optimal policy p(\\tau) \\propto exp(r(tau))\" is right. Since we sampled from expert policy or just made some demonstrations, and we were unaware to this max entropy items when we did this, we can only assume that these trajs obeying the boltzman distribution. In fact, in the context of MaxEnt IRL, only the optimal trajs distribution output by the model(by maximizing the max entropy item under the feature expectaton constraints) has a closed form of \\propto exp(r(tau)).", "Hello,\n\nWe are also a group of 3 students from McGill University, who are participating in the ICLR 2018 Reproducibility Research. We are also interested in the AIRL algorithm proposed in your paper and validating the results found in your paper. We noticed that you released a portion of your code, namely the non-robust version of your AIRL algorithm to other participants in this reproducibility challenge. We were wondering if you could also provide us with that link as it will go a long way to assisting us in reproducing your results.\n\nFor your convenience, we have provided our emails below.\n\nBest Regards,\n\nIsaac ([email protected])\nDavid ([email protected])", "Hello authors,\n\nThat would be an excellent starting point. Our email addresses are as follows:\n\n - Jin ([email protected])\n - Max ([email protected])\n - Sam ([email protected])\n\nThank you for your help,\nJin, Max, and Sam", "Hi Jin, Max, and Sam\n\nWe won't release the full code until after acceptance, but we have already released a publicly available implementation of the baselines + the \"non-robust\" version of AIRL. This should be a very good starting point for reproducibility. If you can provide an email address, we can send you a link (so as to not break anonymity on OpenReview).", "Hello authors,\n\nWe are three students at the University of Michigan working together to submit to the ICLR 2018 Reproducibility Workshop. 
We are all very interested in the AIRL algorithm described in your paper “Learning Robust Rewards with Adversarial Inverse Reinforcement Learning” and would like to focus on it for our submission. Specifically, we hope to both reproduce your results as well as conduct further experiments and hyperparameter tuning. Our team would greatly benefit from working with your implementation. Your paper mentioned that you intend to post the implementation of AIRL. Do you happen to have a schedule for when the code will be available for us to use?\n\nThank you for your time and your work,\nJin, Max, and Sam\n" ]
[ -1, -1, -1, -1, 6, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 2, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "B1c_e2Fzf", "BJ-3TanEM", "iclr_2018_rkHywl-A-", "BJz9VDTQG", "iclr_2018_rkHywl-A-", "iclr_2018_rkHywl-A-", "iclr_2018_rkHywl-A-", "ryZzenclz", "Hyn6kL_xG", "ryyF8NyZM", "iclr_2018_rkHywl-A-", "iclr_2018_rkHywl-A-", "ryZzenclz", "ryyF8NyZM", "iclr_2018_rkHywl-A-", "H11JStDyG", "Skxih5LyM", "iclr_2018_rkHywl-A-" ]
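The exchanges above repeatedly reference two objects: the maximum-entropy trajectory distribution p(τ) ∝ exp(r(τ)) and the discriminator that alternates with an inner-loop policy (sampler) update. The sketch below shows the odds-ratio discriminator form discussed in the GAN-GCL/AIRL line of work; it is a minimal illustration, not the authors' released code, and the network sizes, label convention, and loss are our assumptions.

```python
# Sketch of an AIRL-style discriminator (assumed shapes; not the authors' code).
# D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + pi(a|s)), so logit(D) = f(s,a) - log pi(a|s)
# and the recovered reward log D - log(1 - D) reduces to f(s,a) - log pi(a|s).
import torch
import torch.nn as nn

class AIRLDiscriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=32):
        super().__init__()
        # f(s, a): the learned "energy" that plays the role of the reward.
        self.f = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act, log_pi):
        # log_pi is log pi(a|s) under the current sampler policy.
        f = self.f(torch.cat([obs, act], dim=-1)).squeeze(-1)
        return f - log_pi                      # logit of D(s, a)

def discriminator_loss(disc, expert_batch, policy_batch):
    # Expert transitions are labelled 1, policy samples 0.
    logits_e = disc(*expert_batch)             # (obs, act, log_pi) from demonstrations
    logits_p = disc(*policy_batch)             # (obs, act, log_pi) from the sampler
    bce = nn.functional.binary_cross_entropy_with_logits
    return (bce(logits_e, torch.ones_like(logits_e)) +
            bce(logits_p, torch.zeros_like(logits_p)))
```

In the full method this discriminator step alternates with an RL update of the sampler on the recovered reward, which is the adversarial structure the first response describes; the "robust" variant discussed in these threads further decomposes f into a state-only reward plus a potential-based shaping term, which is what the state-only questions above refer to.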
iclr_2018_B1DmUzWAW
A Simple Neural Attentive Meta-Learner
Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.
accepted-poster-papers
An interesting new approach to meta-learning that incorporates temporal convolution blocks and soft attention. It achieves impressive SOTA results on few-shot learning tasks and a number of RL tasks. I appreciate the authors' ablation studies in the appendix, as they raise my confidence in the novelty of this work. I thus recommend acceptance, but do encourage the authors to perform the ablation experiments promised to Reviewer 1 (especially the one to "show how much SNAIL's performance degrades when TCs are replaced with this method [of Vaswani et al.].")
train
[ "r1Ma5fixz", "S1J6xOmgf", "BJDSdqqxM", "BJtifUTQf", "rJC87UpmM", "SydimI6QG", "rJpTzLamf", "BJSLNSp7f", "Sy5QVH6mG", "B1Cg_OA-f", "BJGc5UgxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "This work proposes an approach to meta-learning in which temporal convolutions and attention are used to synthesize labeled examples (for few-shot classification) or action-reward pairs (for reinforcement learning) in order to take the appropriate action. The resulting model is general-purpose and experiments demonstrate efficacy on few-shot image classification and a range of reinforcement learning tasks.\n\nStrengths\n\n- The proposed model is a generic meta-learning useful for both classification and reinforcement learning.\n- A wide range of experiments are conducted to demonstrate performance of the proposed method.\n\nWeaknesses\n\n- Design choices made for the reinforcement learning setup (e.g. temporal convolutions) are not necessarily applicable to few-shot classification.\n- Discussion of results relative to baselines is somewhat lacking.\n\nThe proposed approach is novel to my knowledge and overcomes specificity of previous approaches while remaining efficient.\n\nThe depth of the TC block is determined by the sequence length. In few-shot classification, the sequence length can be known a prior. How is the sequence length determined for reinforcement learning tasks? In addition, what is done at test-time if the sequence length differs from the sequence length at training time?\n\nThe causality assumption does not seem to apply to the few-shot classification case. Have the authors considered lifting this restriction for classification and if so does performance improve?\n\nThe Prototypical Networks results in Tables 1 and 2 do not appear to match the performance reported in Snell et al. (2017).\n\nThe paper is well-written overall. Some additional discussion of the results would be appreciated (for example, explaining why the proposed method achieves similar performance to the LSTM/OPSRL baselines).\n\nI am not following the assertion in 5.2.3 that MAML adaption curves can be seen as an upper bound on the performance of gradient-based methods. I am wondering if the authors can clarify this point.\n\nOverall, the proposed approach is novel and achieves good results on a range of tasks.\n\nEDIT: I have read the author's comments and am satisfied with their response. I believe the paper is suitable for publication in ICLR.", "The authors propose a model for sequence classification and sequential decision making. The model interweaves attention layers, akin to those used by Vaswani et al, with temporal convolution. The authors demonstrate superior performance on a variety of benchmark problems, including those for supervised classification and for sequential decision making.\n\nUnfortunately, I am not an expert in meta-learning, so I cannot comment on the difficulty of the tasks (e.g. Omniglot) used to evaluate the model or the appropriateness of the baselines the authors compare against (e.g. continuous control).\n\nThe experiment section definitely demonstrate the effort put into this work. However, my primary concern is that the model seems somewhat lacking in novelty. Namely, it interweaves the Vaswani style attention with with temporal convolutions (along with TRPO. The authors claim that Vaswani model does not incoporate positional information, but from my understanding, it actually does so using positional encoding. I also do not see why the Vaswani model cannot be lightly adapted for sequential decision making. I think comparison to such a similar model would strengthen the novelty of this paper (e.g. 
convolution is a superior method of incorporating positional information).\n\nMy second concern is that the authors do not provide analysis and/or intuitions on why the proposed models outperform prior art in few-shot learning. I think this information would be very useful to the community in terms of what to take away from this paper. In retrospect, I wish the authors would have spent more time doing ablation studies than tackling more task domains.\n\nOverall, I am inclined to accept this paper on the basis of its experimental results. However I am willing to adjust my review according to author response and the evaluation of the experiment section by other reviewers (who are hopefully more experienced in this domain).\n\nSome minor feedback/questions for the authors:\n- I would prefer mathematical equations as opposed to pseudocode formulation\n- In the experiment section for Omniglot, when the authors say \"1200 classes for training and 432 for testing\", it sounds like the authors are performing zero-shot learning. How does this particular model generalize to classes not seen during training?", "The paper proposes a general neural network structure that includes TC (temporal convolution) blocks and Attention blocks for meta-learning, specifically, for episodic task learning. Through intensive experiments on various settings including few-shot image classification on Omniglot and Mini-ImageNet, and four reinforcement learning applications, the authors show that the proposed structure can achieve highly comparable performance wrt the corresponding specially designed state-of-the-art methods. The experiment results seem solid and the proposed structure is with simple design and highly generalizable. The concern is that the contribution is quite incremental from the theoretical side though it involves large amount of experimental efforts, which could be impactful. Please see the major comment below.\n\nOne major comment:\n- Despite that the work is more application oriented, the paper would have been stronger and more impactful if it includes more work on the theoretical side. \nSpecifically, for two folds: \n(1) in general, some more work in investigating the task space would be nice. The paper assumes the tasks are “related” or “similar” and thus transferrable; also particularly in Section 2, the authors define that the tasks follow the same distribution. But what exactly should the distribution be like to be learnable and how to quantify such “related” or “similar” relationship across tasks? \n(2) in particular, for each of the experiments that the authors conduct, it would be nice to investigate some more on when the proposed TC + Attention network would work better and thus should be used by the community; some questions to answer include: when should we prefer the proposed combination of TC + attention blocks over the other methods? The result from the paper seems to answer with “in all cases” but then that always brings the issue of “overfitting” or parameter tuning issue. 
I believe the paper would have been much stronger if either of the two above are further investigated.\n\nMore detailed comments:\n- On Page 1, “the optimal strategy for an arbitrary range of tasks” lacks definition of “range”; also, in the setting in this paper, these tasks should share “similarity” or follow the same “distribution” and thus such “arbitrariness” is actually constrained.\n\n- On Page 2, the notation and formulation for the meta-learning could be more mathematically rigid; the distribution over tasks is not defined. It is understandable that the authors try to make the paradigm very generalizable; but the ambiguity or the abstraction over the “task distribution” is too large to be meaningful. One suggestion would be to split into two sections, one for supervised learning and one for reinforcement learning; but both share the same design paradigm, which is generalizable.\n\n- For results in Table 1 and Table 2, how are the confidence intervals computed? Is it over multiple runs or within the same run? It would be nice to make clear; in addition, I personally prefer either reporting raw standard deviations or conduct hypothesis testing with specified tests. The confidence intervals may not be clear without elaboration; such is also concerning in the caption for Table 3 about claiming “not statistically-significantly different” because no significance test is reported. \n\n- At last, some more details in implementation would be nice (package availability, run time analysis); I suppose the package or the source code would be publicly available afterwards?", "\nWe wish to thank the reviewers for their thoughtful feedback!\n\nBelow, we respond to some general questions present in all reviews:\n\nEven though few-shot learning is not inherently a sequential problem, there is merit to obtaining contextual information before the attention kernel is applied. For example, in Matching Networks [2], the authors pass the feature vectors through an LSTM before doing their attention operator. Without TCs or a LSTM, the embeddings on which we “attend” are myopic in the sense that each example is embedded independently of the other examples in the support set. The use of embeddings that are a function of the entire support set was found to be essential in [2].\n\nIn Appendix B Table 6, we conduct several ablation studies to evaluate how much each aspect of SNAIL contributed to its overall performance (in the few-shot classification domain). These suggest that both TC’s and attention are both essential components of the SNAIL architecture.\n\nThe primary strength of SNAIL is that it is maximally generic, making no assumptions about the structure of the learned meta-algorithm. In domains such as few-shot classification, there exists human intuition about algorithmic priors. Methods built around particular inductive biases (such as Matching Networks [2] or Prototypical Networks [4], which learn distance metrics) work reasonably well; SNAIL is versatile enough to match or exceed this performance. \n\nHowever, the true utility of meta-learning is to learn algorithms for domains where there is little human intuition (which occurs in many RL tasks). Indeed, our experiments show that MAML [3], which uses gradient descent as the algorithmic prior, performs poorly. In contrast, SNAIL’s lack of algorithmic prior offers the flexibility to learn an algorithm from scratch. 
Furthermore, in domains such as bandits or MDPs, we show that SNAIL is competitive with human designed algorithms.\n\nAs discussed in the paper, one caveat is that SNAIL’s flexibility results in a large number of model parameters, which can lead to overfitting on extremely out-of-distribution tasks. For instance, the official Omniglot test set represents task distribution that are “close” to the training task distribution, and MNIST could define another task distribution that is much farther away from the training distribution. Following Matching Networks [2], we took a SNAIL model trained on [20-way, 5-shot Omniglot] and tested it in [10-way, 5-shot, MNIST], achieving 68% accuracy. In contrast, [2], which use distance metric as the algorithmic prior, obtains 72%. \n\nOur analysis of thus: all meta-learning methods, given sufficient capacity, achieve roughly the same performance on the training task distribution (or when the distance between training and test distributions is zero). As this distance increases, any method’s performance will necessarily decrease; as indicated by our empirical results, SNAIL’s performance drops off slower than that of other methods, as SNAIL’s versatility allows it to learn more expressive and specialized strategies. However, as the distance from the training distribution becomes very large, in the regime where there is less potential for meta-learning, methods with algorithmic priors have slightly “heavier tails” than SNAIL. A benchmark that allows more precise extrapolation along this axis (distance from training distribution) would likely offer more insight, although the curation of such a benchmark is beyond the scope of this work.\n\nBelow, we respond to the individual comments of the reviewers.\n\n[1] Finn et al. “Model Agnostic Meta-Learning”. https://arxiv.org/pdf/1703.03400.pdf\n[2] Vinyals et al. ”Matching Networks for Few-Shot Learning”. https://arxiv.org/pdf/1606.04080.pdf\n[3] Vaswani et al. “Attention is all you need”. https://arxiv.org/pdf/1706.03762.pdf\n[4] Snell et al. “Prototypical Networks for Few-Shot Learning”. https://arxiv.org/pdf/1703.05175.pdf\n[5] Santoro et al. “Meta-Learning with Memory-Augmented Neural Networks”. http://proceedings.mlr.press/v48/santoro16.pdf\n", "Please refer to our main response in an above comment that addresses the primary and shared questions amongst all reviewers. Here we respond to your specific comments.\n\n“in general, some more work in investigating the task space would be nice. The paper assumes the tasks are “related” or “similar” and thus transferrable; also particularly in Section 2, the authors define that the tasks follow the same distribution. But what exactly should the distribution be like to be learnable and how to quantify such “related” or “similar” relationship across tasks?” \n\n>>> Measures of task similarity would certainly be useful in understanding how well we can expect a meta-learner to generalize. However, it remains an open problem and beyond the scope of our work -- our contribution is the proposed class of model architectures, which we experimentally validate on a number of benchmarks (where there is a high degree of task similarity, and thus potential for meta-learning to succeed) from the meta-learning literature.\n\n“On Page 2, the notation and formulation for the meta-learning could be more mathematically rigid; the distribution over tasks is not defined. 
It is understandable that the authors try to make the paradigm very generalizable; but the ambiguity or the abstraction over the “task distribution” is too large to be meaningful. One suggestion would be to split into two sections, one for supervised learning and one for reinforcement learning; but both share the same design paradigm, which is generalizable.”\n\n>>> Our formulation of the meta-learning problem is consistent with prior work, as one can see in MAML [1] and Matching Networks [2].\n\n“For results in Table 1 and Table 2, how are the confidence intervals computed? Is it over multiple runs or within the same run? It would be nice to make clear; in addition, I personally prefer either reporting raw standard deviations or conduct hypothesis testing with specified tests. The confidence intervals may not be clear without elaboration; such is also concerning in the caption for Table 3 about claiming “not statistically-significantly different” because no significance test is reported.”\n\n>>> The confidence intervals in Tables 1 & 2 are calculated over 10000 episodes of the evaluation procedure described in Section 5.1 (95% confidence). The statistical significance in Table 3 is determined by a one-sided t-test with p=0.05. We will make these clarifications in the final version of the paper.\n", "Please refer to our main response in an above comment that addresses the primary and shared questions amongst all reviewers. Here we respond to your specific comments.\n\n“The authors claim that Vaswani model does not incorporate positional information, but from my understanding, it actually does so using positional encoding. I also do not see why the Vaswani model cannot be lightly adapted for sequential decision making. I think comparison to such a similar model would strengthen the novelty of this paper (e.g. convolution is a superior method of incorporating positional information.”\n\n>>> Vaswani et. al. [3] add to their feature vector a representation of where the example is in the sequence (described in section 3.5 of [3]). This method crucially does not perform any local comparisons where the embedding of a particular image is modified depending on the others is being compared against (which Matching Networks [2] found to be essential). For the final paper, we will conduct an ablation to show how much SNAILs performance degrades when TCs are replaced with this method. Preliminary experiments on the MDP problem using attention like in [3] (with the positional encoding mentioned above) performed marginally better than a random policy. We will include this ablation (the [3]-style model on RL tasks) in the final version of the paper.\n\n“In the experiment section for Omniglot, when the authors say \"1200 classes for training and 432 for testing\", it sounds like the authors are performing zero-shot learning. How does this particular model generalize to classes not seen during training?”\n\n>>> During test-time, for the 1-shot 5-way and 5-shot 5-way problems, the model is given 1 labeled example of each of the 5 selected test classes and 5 labeled examples of each of the 5 selected test classes respectively. Therefore it is not zero-shot. The set of 432 test classes are not seen during training.\n", "Please refer to our main response in an above comment that addresses the primary and shared questions amongst all reviewers. Here we respond to your specific comments.\n\n“The causality assumption does not seem to apply to the few-shot classification case. 
Have the authors considered lifting this restriction for classification and if so does performance improve?”\n\n>>> This was done in line with past work on meta-learning (such as Santoro et al. [5]) for a maximally direct comparison. In general, past work (and this work) consider such processing because it’s compatible with streaming over incoming data (relevant for future large scale applications) and it aligns well with future extensions on few-shot active learning (where the model sequentially creates its own support set by querying an oracle to label a chosen image from the dataset). \n\n“The depth of the TC block is determined by the sequence length. In few-shot classification, the sequence length can be known a priori. How is the sequence length determined for reinforcement learning tasks? In addition, what is done at test-time if the sequence length differs from the sequence length at training time?”\n\n>>> In some RL tasks, there is a maximum episode length, and we can choose the depth of the TC block accordingly (this true for all of the tasks considered in our paper). If the episode length is unbounded, or differs between training and test, we can simply choose a reasonable value (depending on the problem) and rely on the fact that the attention operation has an infinite receptive field. We can think of the TC block as producing “local” features within a long sequence, from which the attentive lookups can select pertinent information.\n\n“The Prototypical Networks results in Tables 1 and 2 do not appear to match the performance reported in Snell et al. (2017).”\n\n>>> Snell et al. [4] found that using more classes at training time than at test time improved their model’s performance. Their best results used 20 classes at training time and 5 at test time. To make their results comparable to prior work, we reported the performance of Prototypical Networks when the same number of classes was used at training and test time (Appendix Tables 5 and 6 in their paper), as all of the methods we listed in Tables 1 & 2 might also benefit from this modification.\n\n“I am not following the assertion in 5.2.3 that MAML adaption curves can be seen as an upper bound on the performance of gradient-based methods. I am wondering if the authors can clarify this point.”\n\n>>> These set of experiments were conducted to demonstrate the advantage of having highly general meta-learners where the meta-algorithm is fully-learned, especially when it comes to task distributions with a lot of exploitable structure. These continuous control problems originally introduced by the MAML paper can be easily solved by the agent identifying which task it is in over the first couple timesteps and then proceeding to execute the optimal policy. In the above comment, which we should reword, we meant to say that MAML’s performance demonstrates that gradient-based methods which take their update steps after several test rollouts are fundamentally disadvantaged compared to RNN-like methods for this problem. \n", "Thank you! And apologizes for missing this prior work; we will make sure to add ARCs as a baseline for the next revision.\n", "We are actively working on cleaning up the code base and disentangling it from dependencies that can’t be made public. We hope to release the code on GitHub very soon. ", "The idea is novel the experiment results are good in a range of tasks. 
\nHowever, some details of the model design and experiment implementations need to be clarified with the help of the source code.\nCan you release the code for the public please?", "Hi,\n\nVery nice work! However, your results on Omniglot seem to be well within the error margins of results reported with Attentive Recurrent Comparators. I hope that you can consider citing the results in your future revisions. " ]
[ 7, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW", "BJDSdqqxM", "S1J6xOmgf", "r1Ma5fixz", "BJGc5UgxM", "B1Cg_OA-f", "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW" ]
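For readers trying to picture the temporal-convolution-plus-attention combination debated in the SNAIL reviews above, here is a deliberately simplified sketch of the two building blocks. The layer sizes, the single tanh nonlinearity, and the concatenation pattern are our assumptions for illustration, not the authors' exact architecture (which stacks several such blocks and uses gated activations).

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """One dilated causal convolution; its output is concatenated onto the input."""
    def __init__(self, channels, filters, dilation):
        super().__init__()
        self.dilation = dilation
        self.conv = nn.Conv1d(channels, filters, kernel_size=2, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        h = torch.tanh(self.conv(F.pad(x, (self.dilation, 0))))  # left-pad => causal
        return torch.cat([x, h], dim=1)         # dense (concatenating) connection

class CausalAttentionBlock(nn.Module):
    """Single-head key/query/value attention with a causal mask."""
    def __init__(self, channels, key_dim, value_dim):
        super().__init__()
        self.query = nn.Linear(channels, key_dim)
        self.key = nn.Linear(channels, key_dim)
        self.value = nn.Linear(channels, value_dim)
        self.scale = math.sqrt(key_dim)

    def forward(self, x):                       # x: (batch, time, channels)
        q, k, v = self.query(x), self.key(x), self.value(x)
        logits = q @ k.transpose(1, 2) / self.scale
        t = x.size(1)
        mask = torch.ones(t, t, device=x.device).triu(1).bool()   # hide the future
        read = torch.softmax(logits.masked_fill(mask, float('-inf')), dim=-1) @ v
        return torch.cat([x, read], dim=-1)     # append the attentive read
```

A full model would interleave several dilated conv blocks (doubling the dilation until the episode length is covered) with attention blocks, transposing between the (batch, channels, time) and (batch, time, channels) layouts as needed; the ablation requested in the meta-review would roughly amount to dropping the conv blocks and keeping only attention over inputs with fixed positional encodings.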
iclr_2018_SywXXwJAb
Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design
Formal understanding of the inductive bias behind deep convolutional networks, i.e. the relation between the network's architectural features and the functions it is able to model, is limited. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obtaining novel theoretical observations regarding the inductive bias of convolutional networks. Specifically, we show a structural equivalence between the function realized by a convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network's expressive ability to model correlations. Furthermore, the construction of a deep ConvAC in terms of a quantum Tensor Network is enabled. This allows us to perform a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in its underlying graph. We demonstrate a practical outcome in the form of a direct control over the inductive bias via the number of channels (width) of each layer. We empirically validate our findings on standard convolutional networks which involve ReLU activations and max pooling. The description of a deep convolutional network in well-defined graph-theoretic tools and the structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.
accepted-poster-papers
This paper seemingly joins a cohort of ICLR submissions which attempt to port mature concepts from physics to machine learning, make a complex and non-trivial theoretical contribution, and fall short on the empirical front. The one aspect that sets this apart from its peers is that the reviewers agree that the theoretical contribution of this work is clear, interesting, and highly non-trivial. While the experimental section (MNIST!) is indubitably weak, when treating this as a primarily theoretical contribution, the reviewers (in particular 6 and 3) are happy to suggest that the paper is worth reading. Taking this into account, and discounting somewhat the short (and, by their own admission, uncertain) assessment of reviewer 5, I am leaning towards pushing for the acceptance of this paper. At the very least, it would be a shame not to accept it to the workshop track, as this is by far the strongest paper of this type submitted to this conference.
val
[ "r183ZUKVz", "BJX98ixfM", "SJWogYvEM", "SJ45Qm8Zz", "Hk3NX9vWM", "SJw9gV2ZM", "HJbGEWHGf", "HyS9hL6Wz", "B18GHL6Wf", "S1jF4IaWM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public" ]
[ "We thank reviewer for the time and effort invested, and for the consideration of our response. \n\nBy \"empirical trends we observed with ConvACs were precisely the same\", we mean that with ConvACs, as with ReLU ConvNets, \"wide-base\" architecture had a clear advantage over \"wide-tip\" in the local task, whereas in the global task it is wide-tip that was clearly superior. The differences in accuracies between the two architectures in the case of ConvACs was similar to that reported in fig. 4 - on the order of 5% in favor of wide-base/tip on local/global task (respectively). The overall accuracies with ConvACs were sightly lower than with ReLU ConvNets - a 1% difference on average. We will add these details to the final version of the manuscript.\n\nIn terms of further experiments, we are currently pursuing an empirical follow-up work in which we evaluate our findings on more challenging benchmarks, with more elaborate architectures. We hope to release this work to arXiv in the near future and add a reference to it in this manuscript before its finalization.", "The paper proposes a structural equivalence between the function realised by a convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network’s expressive ability to model correlations. \n\nThe work is definitely worthwhile digging deeper, bridging some gap and discussions between physics and deep learning. The ultimate goal for this work, if I understands correctly, is a provide a theoretical explanation to the design of deep neural architectures. The paper is well-written (above most submissions, top 10%) with clear clarity. However, removing all the fancy stuff and looking into the picture further, I have several major concerns.\n\n+ Potential good research direction to connect physical sciences (via TN) to deep learning theories (via ConvAC).\n\n- [Novelty is limited and proof is vague] The paper uses physical concepts to establish a \"LAYER WIDTHS EFFECT ON THE EXPRESSIVENESS OF A DEEP NETWORK\", the core theory (proposed method) part is Section 5 alone and the rest (Section 2,3,4) is for introductory purposes. Putting Theorem 1 in simple, deep learning based English, it says for a dataset with features of a size D, there exists a partition of length scale \\epsilon \u0018< D, which is guaranteed to separate between different parts of a feature. Based on this, they give a rule of thumb to design the width (i.e., channel numbers) of layers in a deep neural network: (a) layer l = logD is more important than those of deeper layers; (b) among these deeper layers, deeper ones need to be wider, which is derived from the min-cut in the ConvAC TN case. How (a) is derived or implied from theorem 1? \n\nIt seems to me that the paper goes with a rigorous manner till the proof of theorem 1, with all the concepts and denotations well demonstrated. Suddenly when it comes to connecting the practical design of deep networks, the conclusion becomes qualitative without much explanation via figures or visualisation of the learned features to prove the effectiveness of the proposed scheme.\n\n- [Experiments are super weak] The paper has a good motivation and a beautiful story and yet, the experiments are poor to verify them. The reason as to why authors use ConvAC is that it more resembles the tensor operations introduced in the paper. 
There is a sentence, \"Importantly, through the concept of generalized tensor decompositions, a ConvAC can be transformed to a standard convolutional network with ReLU activation and average/max pooling\", to tell the relation between ConvAC and traditional convolutions. The theory is based on the analysis of ConvAC, and all of a sudden the experiments are conducted on the traditional convolution. This is not rigorous and not professional for a \"technically-sound\" paper. How the generalized concepts of tensor decompositions can be applied from ConvAC to vanilla convolutions?\n\nThe experiments seem to extend the channel width of *all* layers in a hand-crafted manner (10, 4r, 4r, xxx). Based on the derived rule of thumb, the most important layer in MNIST should be layer 3 or 4 (log 10). Some simple ablative analysis should be:\n(i) baseline: fix layer 3, add more layers thereafter in the network;\n(ii) fix layer 3, reduce the channel numbers after layer 3.\nThe (ii) case should be at least comparable to (i) if theorem 1 is correct.\n\nMoreover, to verify conclusion (b) which I mentioned earlier, does the current setting (10, 4r, 4r, xx) consider \"deeper ones need to be wider\"? What is the value of r? MNIST is a over-used dataset and quite small. I see the performance in Figure 4 (the only experiment result in the paper) just exceeds 90%. A simple trained NN (not CNN) could reach well 96% or so.\n\nMore ablative study (convAC or vanilla conv, other datasets, comparison components in width design, etc.) are seriously needed. Otherwise, it is just not convincing to me.\n\nIf the authors target on the network design in a more general manner (not just in layer width, but the design of number of filters, layers, etc.), there are already some neat work in this community and you should definitely compare them: e.g., Neural Architecture Search with Reinforcement Learning, ICLR 2017. I know the paper starts with building the connection from physics to deep learning and it is natural to solve the width design issue alone. This is not a major concern.\n\n-------------\nWe see lots of fancy conceptions, trying to bind interdisciplinary subjects to help analyse deep learning theories over the last few years, especially in ICLR 2017. This year same thing happens. I am not degrading the motivation/intuition of the work; instead, I think it is pretty novel to explain the design of neural nets, by way of quantum physics. But the experiments and the conclusion derived from the analysis make the paper not solid to me and I am quite skeptical about its actual effectiveness.\n", "Dear authors,\n\nThanks for your feedback. I have gone through the discussion over again. The two implications of Theorem 1 is misunderstood (and therefore the suggested experiment); thanks for your clarification. The explanations are clear to me now and the additional experiments make the paper more solid.\n\nI have modified the rating from rejection to weak accept. But, I still have two comments on the experiments.\n\n(a) settings (randomly put original image in a larger image) of experiments on MNIST is different and thus harder than other methods. Why not run some baselines of other methods, using the same setting as you do? In this manner, the readers will get a sense of to what extent the improved method will be.\n\nFor now, the *only* experiment reported in the paper is via Figure 4. It seems to me a little bit weak.\n\n(b) I am still not convinced that why you do the experiments on a regular convnet architectures. 
Since the main theoretical conclusion part is based on convAC, why not directly conduct on convAC and then have a subsection or paragraph saying that your algorithm also generalizes well - \"our conclusions extend to popular ConvNet architectures - with ReLU activations and max pooling.\".\n\n\"We note that the empirical trends we observed with ConvACs were precisely the same - a footnote indicating this was added to the paper.\" what do you mean by \"precisely\" the same? The same numeric number(error/accuracy)? The same trend or others?\n\nI understand the authors must have put a lot of effort into this work and I don't want to kill a theretical-insightful paper. After reading through feedback and revised manuscript, I think it is marginally above the average of ICLR papers, based on my partial and biased experience. But still i would suggest authors to strengthen the experiment part.\n ", "The paper makes a striking connection between two apparently unrelated problems: the problem of designing neural networks to handle a certain type of correlation and the problem of designing a structure to represent wave-function with quantum entanglement. In the wave-function context, the Schmidt decomposition of the wave function is an inner product of tensors. Thus, the mathematical glue connecting the neural networks and quantum entanglement is shown to be tensor networks, which can represent higher order tensors through inner product of lower-order tensors. \n\nThe main technical contribution in the paper is to map convolutional networks with product pooling function (called ConvACs) to a tensor network. Given this mapping, the authors exploit results in tensor networks (in particular the quantum max-flow min-cut theorem) to calculate the rank of the matricized tensor between a pair of vertex sets using the (appropriately defined) min-cut. \n\nThe connection has potential to yield fruitful new results, however, the potential is not manifested (yet) in the paper. The main application in deep convolutional networks proposed by the paper is to model how much correlation between certain partition of input variables can be captured by a given convolutional network design. However, it is unclear how to use Theorem 1 to design neural networks that capture a certain correlation. \n\nA simple example is given in the experiment where the wider layers can be either early in the the neural network or at the later stages; demonstrating that one does better than the other in a certain regime. It seems that there is an obvious intuition that explains this phenomenon: wider base networks with large filters are better suited to the global task and narrow base networks that have more parameters later down have more local early filters suited to the local task. The experiments do not quite reveal the power of the proposed approach, and it is unclear how, if at all, the proposed approach can be applied to more complicated networks. \n\nIn summary, this paper is of high theoretical interest and has potential for future applications.", "The authors try to bring in two seemingly different areas and try\nto leverage the results in one for another.\nFirst authors show that the equivalence of the function realized(in\ntensor form, given in earlier work) by a ConvAC and\nthe function used to model n-body quantum system. 
After establishing\nthe equivalence of two, the authors argue that\nquantum entanglement measures used to measure correlations in n-body\nquantum systems can be used as an expressive measure\n(how much correlation in input they can handle) of the function\nrealized by a ConvAC. Separation Rank analysis, which was done\nearlier, becomes a special case. As the functional equivalence is\nestablished, authors adopt Tensor Network framework,\nto analyze the properties of the ConvAC. The main result being able\nto quantify the expressiveness to some extend to the min\ncut of the underlying Tensor Network graph corresponding to ConvAC.\nThis is further used to argue about guide-lining the\nwidth of various parts of ConvAC, if some prior correlation\nstructure is known about the input. This is also validated\nexperimentally.\n\nAlthough I do not see major results at this moment, this work can be\nof great significance. The attempt to bring in two areas\nhave to be appreciated. This work opens up a footing to do graph\ntheoretical analysis of deep learning architectures and from\nthe perspective of Quantum entanglement, this could lead to open up new directions. \nThe paper is lucidly written, comprehensively covering the\npreliminaries. I thoroughly enjoyed reading it, and I think the\npaper and the work would be of great contribution to the community.\n\n(There are some typos (preform --> perform ))", "This paper draws an interesting connection between deep neural networks and theories of quantum entanglement. They leveraged the tool for analyzing quantum entanglement to deep neural networks, and proposed a graph theoretical analysis for neural networks. They demonstrated how their theory can help designing neural network architectures on the MNIST dataset.\n\nI think the theoretical findings are novel and may contribute to the important problem on understanding neural networks theoretically. I am not familiar with the theory for quantum entanglement though.", "We thank the reviewer for the time and feedback; our response follows.\n\n- Theory:\n\nThere seems to be a misunderstanding.\nThe rephrasing of our results made by reviewer (text following \"in simple, deep learning based English\") is incorrect. Reviewer claims that the implications of theorem 1 are: \n(a) Width of layer log(D), where D is the typical size of a feature, is more important than those of deeper layers.\n(b) Among these less important deep layers, deeper ones should be wider.\nThe actual implications of theorem 1 are as stated explicitly at the end of sec 5:\n(i) Width of layers *up to* log(D) are more important than those of deeper layers.\n(ii) Among the layers that *are important*, i.e. layers 1...log(D), deeper ones should be wider. The widths of the less important layers, i.e. layers log(D)+1 and up, are irrelevant.\n\nImplications (i) and (ii) above are a straightforward consequence of theorem 1, which characterizes correlations in terms of min cuts in a TN graph. Specifically, with features of size D, a partition that splits a feature will lead to a min cut that necessarily does not include edges corresponding to layers log(D)+1 and up. Edges corresponding to layers 1...log(D) on the other hand will be included, hence the lesser need to strengthen these edges by widening the layers. We did not frame implications (i) and (ii) above as a theorem, as they are a direct consequence of theorem 1, and we prefer to keep the text fluent and compact. 
Having said that, we realize from the review that additional details may assist the reader, and so have added such to the end of sec 5.\n\nA final note on theory:\nReviewer writes \"the paper goes with a rigorous manner till the proof of Theorem 1\". We would like to stress in the most explicit manner that unlike the derivation of conclusions (i) and (ii) from theorem 1, the theorem itself is highly non-trivial. It is stated formally in the body of the paper, and proven in full in the appendix (pages 17-27).\n\n- Experiments:\n\nUnfortunately, here too there seem to be misunderstandings.\nFirst, the misinterpretation of our theoretical findings as (a)+(b) instead of (i)+(ii) has led to criticism on our experimental protocol. Reviewer's suggestions to focus on a single layer l=log(D), and to make sure that deeper layers should be wider, do not go along with our findings. More generally, any attempt to clearly isolate a single most important layer from others is doomed to fail, as in a real-world dataset (even as simple as MNIST) image features do not have a fixed size (as we emphasize at the end of sec 5). On real-world tasks our findings imply that the width of deep layers is important for modeling correlations across distances, whereas the width of early layers is important for short range correlations. We accordingly compare two simple network architectures:\n- A1: width increases with depth\n- A2: width decreases with depth\non two prediction tasks:\n- T1: classifying large objects\n- T2: classifying small objects\nOur experiments clearly show that A1 is superior for T1, and vice versa, in compliance with our conclusion. We acknowledge reviewer's claims that much remains to be proven empirically in order to ensure that this conclusion applies in state of the art settings. This is a direction we are currently pursuing in a follow-up work.\n\nAn additional misunderstanding arising from the review relates to the performance on MNIST.\nTo make sure that we do not introduce bias in terms of objects' locations in an image, our classification tasks include (resized) MNIST digits *located randomly inside an image* (see sec 6). This is significantly more difficult than the original MNIST benchmark, thus our results are not comparable to those acheived on the latter. \n\nA minor addition to the experiments section has been made for emphasizing the above.\n\nFinally, reviewer writes: \"The theory is based on the analysis of ConvAC, and all of a sudden the experiments are conducted on the traditional convolution. This is not rigorous and not professional\"\nOne of the main roles of our experiments, stated clearly at the opening of sec 6, was to show that our conclusions extend to popular ConvNet architectures - with ReLU activations and max pooling. We do not adapt our analysis to this architecture as in [1], thus a-priori it is unclear to what extent our results capture ReLU and max pooling. This question was addressed through the experiments. We note that the empirical trends we observed with ConvACs were precisely the same - a footnote indicating this was added to the paper.\n\n\n- Related work:\n\nOur focus in this work is on the design of network architectures based on theoretical analyses. We do not compare ourselves to empirical methods for architectural design. 
The paper should be understood as a contribution to the theoretical understanding of deep learning (via connection to quantum physics), with an empirically demonstrated practical conclusion.\n\n[1] Cohen and Shashua, ICML 2016.", "We thank the reviewer for the time and feedback.", "We thank the reviewer for the supporting feedback!", "We thank the reviewer for the feedback and support. \n\nThe example we give for practical guidelines relates to layers widths - wider base networks, with more\nparameters in shallower layers, are better fit to model local input correlations and vice versa. As another\nexample, Theorem 1 implies that the contiguous pooling scheme commonly used in deep networks is\nmore appropriate when short-range correlations are present in the data, and that a different pooling\nscheme which merges symmetric activations is preferable when long-range correlations are present\n(this is pointed out and verified experimentally by [1], which uses other methods). \nConclusions obtained by relying on the min-cut analysis in Theorem 1 indeed *exactly* hold only for\nthe network on which it was proven (ConvAC), however the experiments performed in our paper and in\n[1] provide evidence that such conclusions extend to other commonly used architectures. \n\nReference\n--------------------------\n[1] Nadav Cohen and Amnon Shashua. Inductive bias of deep convolutional networks through pooling\ngeometry. In 5th International Conference on Learning Representations (ICLR), 2017" ]
[ -1, 6, -1, 7, 8, 6, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 5, 2, -1, -1, -1, -1 ]
[ "SJWogYvEM", "iclr_2018_SywXXwJAb", "HJbGEWHGf", "iclr_2018_SywXXwJAb", "iclr_2018_SywXXwJAb", "iclr_2018_SywXXwJAb", "BJX98ixfM", "SJw9gV2ZM", "Hk3NX9vWM", "SJ45Qm8Zz" ]
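A concrete way to see the "entanglement measure" discussed throughout the reviews above: take the coefficient tensor of the function a network realizes, split its indices into a partition (A, B), flatten it into a matrix along that split, and measure the matrix rank. The numpy toy below does exactly that for random tensors; it illustrates the measure itself, not the paper's ConvAC construction or its min-cut theorem.

```python
import numpy as np

def matricization_rank(tensor, part_a):
    """Rank of the tensor flattened along the index partition (part_a, rest)."""
    part_b = [i for i in range(tensor.ndim) if i not in part_a]
    reordered = np.transpose(tensor, list(part_a) + part_b)
    rows = int(np.prod([tensor.shape[i] for i in part_a]))
    return np.linalg.matrix_rank(reordered.reshape(rows, -1))

# A pure outer product of vectors is "unentangled": every cut has rank 1.
vs = [np.random.randn(3) for _ in range(4)]
print(matricization_rank(np.einsum('i,j,k,l->ijkl', *vs), [0, 1]))   # 1

# Summing several outer products raises the rank, i.e. the correlation the
# function can express across that particular cut (generically 5 here).
mixed = sum(np.einsum('i,j,k,l->ijkl', *[np.random.randn(3) for _ in range(4)])
            for _ in range(5))
print(matricization_rank(mixed, [0, 2]))
```

In the paper's terminology, the min-cut result relates exactly this rank, for tensors a given ConvAC can realize, to a min-cut in the network's underlying graph, which is what ties the width of each layer to the input correlations ("local" versus "global" tasks) the network can model.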
iclr_2018_Skp1ESxRZ
Towards Synthesizing Complex Programs From Input-Output Examples
In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learning-based search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500x longer than the training samples.
accepted-poster-papers
This paper proposes a method for training a neural network to operate a stack-based mechanism so that it acts as a CFG parser, with the eventual goal of improving program synthesis and program induction systems. The reviewers agreed that the paper was compelling and well supported empirically, although one reviewer suggested that the analysis of empirical results could stand some improvement. The reviewers were not able to reach a clear consensus on the paper, but given that the most negative reviewer also declared themselves the least confident in their assessment, I am happy to recommend acceptance on the basis of the median rather than the mean score.
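As a rough picture of what "operating a stack-based mechanism to act as a parser" means, the toy sketch below replays a hand-written SHIFT/REDUCE instruction sequence to build a nested tree for the input x+y. The instruction names and tree labels are invented for illustration; this is a generic shift-reduce toy, not the paper's LL machine, which also has CALL/RETURN instructions and a learned controller that must discover such traces on its own.

```python
# Toy replay of a shift-reduce trace; the trace itself is hand-written here.
def run_trace(tokens, trace):
    tokens, stack = list(tokens), []
    for op in trace:
        if op == 'SHIFT':                      # push the next input token
            stack.append(tokens.pop(0))
        elif op.startswith('REDUCE'):          # pop k items, push one subtree
            k = int(op.split()[1])
            children, stack[-k:] = stack[-k:], []
            stack.append(tuple(children))
    return stack

trace = ['SHIFT', 'REDUCE 1', 'SHIFT', 'SHIFT', 'REDUCE 1', 'REDUCE 3']
print(run_trace(['x', '+', 'y'], trace))       # [(('x',), '+', ('y',))]
```

In the paper's setting no such trace is given: the controller has to find instruction sequences like this from (program, parse-tree) pairs alone, which is where the two-phase reinforcement-learning search described in the abstract comes in.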
train
[ "HJ7k9-5ef", "Hyp9KYPez", "SyKcRn9xf", "BJo8MfaQG", "HJlWhynmM", "S17K82EQf", "rkVJZtiMG", "Bko83vXfG", "rJCciDXff", "HJKbiwmMG", "SJcoYPXff", "r1atCHsbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "Summary:\nI thank the authors for their update and clarifications. They have addressed my concerns, and I will keep my score as it is.\n\n-----------------------------------------------------------------------\nThe authors present a system that parses DSL expressions into syntax trees when trained using input-output examples. Their approach is based around LSTMs predicting a sequence of program-like instructions/arguments, and they argue that their work is an illustration of how we should approach synthesis of complex algorithms using neural techniques.\n\nOverall I liked this paper: \nThe authors provide a frank view on the current state of neural program synthesis, which I am inclined to agree with: (1) existing neural program synthesis has only ever worked on ‘trivial’ problems, and (2) training program synthesizers is hard, but providing execution traces in the training data is not a practical solution. I am somewhat convinced that the task considered in this paper is not trivial (so the authors do not obviously fall into trap (1) ), and I am convinced that the authors’ two-phase reinforcement learning solution to (2) is an interesting approach.\nMy score reflects the fact that this paper seems like a solid piece of work: The task is difficult, the solution interesting and the results are favourable.\n\nHowever, I would like the authors to clarify the following points:\n1.\tIn the inner loop of Algorithm 1, it looks like Net_in is updated M1 times, and a candidate trace is only stored if arguments that generate an exact match with the ground truth tree are found. Since M1 = 20, I am surprised that an exact match can be generated with so few samples/updates. Similarly, I am surprised that the appendix mentions that only 1000 samples in the outer loop are required to find an exact match with the instruction trace. Could you clarify that I am reading this correctly and perhaps suggest intuition for why this method is so efficient. What is the size of the search space of programs that your LL machine can run? Should I conclude that the parsing task is actually not as difficult as it seems, or am I missing something?\n2.\tThe example traces in the appendix (fig 3, 6) only require 9 instructions. I’m guessing that these short programs are just for illustration – could you provide an example execution trace for one of the programs in the larger test sets? I assume that these require many more instructions & justify your claims of difficulty.\n3.\tAs a technique for solving the parsing problem, this method seems impressive. However, the authors present the technique as a general approach to synthesizing complex programs. I feel that the authors need to either justify this bold assertion with least one additional example task or tone down their claims. In particular, I would like to see performance on a standard benchmarking task e.g. the RobustFill tasks. I want to know whether (1) the method works across different tasks and (2) the baselines reproduce the expected performance on these benchmark tasks. \n4.\tRelated to the point above, the method seems to perform almost too well on the task it was designed for – we miss out on a chance to discuss where the model fails to work.\n\nThe paper is reasonably clear. It took a couple of considered passes to get to my current understanding of Algorithm 1, and I found it essential to refer to the appendix to understand LL machines and the proposed method. 
In places, the paper is somewhat verbose, but since many ideas are presented, I did not feel too annoyed by the fact that it (significantly) overshoots the recommended 8 page limit.\n\n", "This paper presents a reinforcement learning based approach to learn context-free\nparsers from pairs of input programs and their corresponding parse trees. The main\nidea of the approach is to learn a neural controller that operates over a discrete\nspace of programmatic actions such that the controller is able to produce the\ndesired parse trees for the input programs. The neural controller is trained using \na two-phase reinforcement learning approach where the first phase is used to find\na set of candidate traces for each input-output example and the second phase is \nused to find a satisfiable specification comprising of 1 unique trace per example\nsuch that there exists a program that is consistent with all the traces. The \napproach is evaluated on two datasets comprising of learning parsers for an \nimperative WHILE language and a functional LAMBDA language. The results show that\nthe proposed approach is able to achieve 100% generalization on test sets with \nprograms upto 100x longer than the training programs, while baseline approaches \nsuch as seq2seq and stack LSTM do not generalize at all.\n\nThe idea to decompose the synthesis task into two sub-tasks of first learning\na set of individual traces for each example, and then learning a program consistent\nwith a satisfiable subset of traces is quite interesting and novel. The use of \nreinforcement learning in the two phases of finding candidate trace sets with \ndifferent reward functions for different operators and searching for a satisfiable \nsubset of traces is also interesting. Finally, the results leading to perfect \ngeneralization on parsing 100x longer input programs is also quite impressive.\n\nWhile the presented results are impressive, a lot of design decisions such as \ndesigning specific operators (Call, Reduce,..) and their specific semantics seem\nto be quite domain-specific for the parsing task. The comparison with general \napproaches such as seq2seq and stack LSTM might not be that fair as they are \nnot restricted to only those operators and this possibly also explains the low \ngeneralization accuracies. Can the authors comment on the generality of the \npresented approach to some other program synthesis tasks?\n\nFor comparison with the baseline networks such as seq2seq and stack-LSTM, what \nhappens if the number of training examples is 1M (say programs upto size 100)? \n10k might be too small a number of training examples and these networks can \neasily overfit such a small dataset.\n\nThe paper mentions that developing a parser can take upto 2x/3x more time than \ndeveloping the training set. How large were the 150 examples that were used for\ntraining the models and were they hand-designed or automatically generated by a\nparsing algorithm? Hand generating parse trees for complex expressions seems to\nbe more tedious and error-prone that writing a modular parser.\n\nThe reason there are only 3 to 5 candidate traces per example is because the training\nexamples are small? For longer programs, I can imagine there can be thousands of bad\ntraces as it only needs one small mistake to propagate to full traces. Related to this\nquestion, what happens to the proposed approach if it is trained with 1000 length programs?\n\nWhat is the intuition behind keeping M1, M2 and M3 constants? 
Shouldn’t they be adaptive\nvalues with respect to the number of candidate traces found so far? \n\nFor phase-1 of learning candidate traces, what happens if the algorithm was only using the \noutside loop (M2) and performing REINFORCE without the inside loop?\n\nThe current paper presentation is a bit too dense to clearly understand the LL machine \nmodel and the two-phase algorithm. A lot of important details are currently in the\nappendix section with several forward references. I would suggest moving Figure 3 \nfrom appendix to the main paper, and also add a concrete example in section 4 to \nbetter explain the two-phase strategy.\n\n", "This paper proposes a method for learning parsers for context-free languages. They demonstrate that this achieves perfect accuracy on training and held-out examples of input/output pairs for two synthetic grammars. In comparison, existing approaches appear to achieve little to no generalization, especially when tested on longer examples than seen during training.\n\nThe approach is presented very thoroughly. Details about the grammars, the architecture, the learning algorithm, and the hyperparameters are clearly discussed, which is much appreciated. Despite the thoroughness of the task and model descriptions, the proposed method is not well motivated. The description of the relatively complex two-phase reinforcement learning algorithm is largely procedural, and it is not obvious how necessary the individual pieces of the algorithm are. This is particularly problematic because the only empirical result reported is that it achieves 100% accuracy. Quite a few natural questions left unanswered, limiting what readers can learn from this paper, e.g.\n- How quickly does the model learn? Is there a smooth progression that leads to perfect generalization?\n- Presumably the policy learned in Phase 1 is a decent model by itself, since it can reliably find candidate traces. How accurate is it? What are the drawbacks of using that instead of the model from the second phase? Are there systematic problems, such as overfitting, that necessitate a second phase?\n- How robust is the method to hyperparameters and multiple initializations? Why choose F = 10 and K = 3? Presumably, there exists some hyperparameters where the model does not achieve 100% test accuracy, in which case, what are the failure modes?\n\nOther misc. points:\n- The paper mentions that \"the training curriculum is very important to regularize the reinforcement learning process.\" Unless I am misunderstanding the experimental setup, this is not supported by the result, correct? The proposed method achieves perfect accuracy in every condition.\n- The reimplementations of the methods from Grefenstette et al. 2015 have surprisingly low training accuracy (in some cases 0% for Stack LSTM and 2.23% for DeQueue LSTM). Have you evaluated these reimplementations on their reported tasks to tease apart differences due to varying tasks and differences due to varying implementations?", "We want to clarify that the main challenge for phase 2 is the large search space of the valid specifications due to the rule of product. In particular, in AM language, assuming we find 3 candidate traces leading to the correct output for each training example, then the search space of phase 2 is 3^24=282,429,536,481, which is very large. Our solution first breaks the entire training set into several smaller lessons, so that the search space is reduced (see the \"The effectiveness of training curriculum\" paragraph in Sec 5, p. 
13); also, in phase II of the algorithm, we use a sampling-based strategy to further reduce the number of specifications examined to be within 30 (see the \"Phase II: Searching for a satisfiable specification.\" paragraph in Sec 4.1, p. 9). \n\nTherefore, the small number of candidate traces for each single input-output example does not make the problem simple, since the large search space of our problem mainly comes from the rule of product, which is the major difficulty.\n\nHaving said this, we would like to clarify some further questions. For the \"x+y\" example, there are 3 shortest correct instruction type traces: 2 are provided in Fig 2 and 7, and the third (wrong) one is as follows:\n\nSHIFT SHIFT REDUCE CALL SHIFT REDUCE RETURN REDUCE FINAL\n\nIn our experiments, we find that each input-output example has at least 3-5 candidate instruction type traces. But unfortunately, we cannot provide the full set of instruction type traces for our WHILE and LAMBDA training curriculum. To do so, we need to exhaustively enumerate all possible instruction traces from a huge space that is intractable for even inputs of length 9. We do not know any practical approaches to estimate this number so far.\n\nWe also want to clarify that our candidate traces involve the minimal number of call/returns necessary for the LL machine to produce the correct output. In fact, we can add arbitrary number of call/returns to make a trace still valid; however, doing so will make the trace longer, so they no longer have the minimal length.\n\nWe further add the number of correct execution traces and correct instruction type traces of the shortest length for each example in the AM training set at the top of page 13. Hope these can help to clarify your concerns.\n", "I'm still not sure I understand how much ambiguity is covered by the second phase of the algorithm. For example, in the case of your AM language, how many traces are there for the input \"x+y\" that yield the correct parse tree. If I eliminate traces with degenerate call/return pairs, and ignore the function ids then I can only see two different traces which generate the correct parse tree (the one in the paper, plus one which adds an additional call return pair before shifting the plus). Adding the function ids back in, with K=3, I think there is still only 12 possible correct traces. Would it be possible to calculate the average number of correct traces for each parse tree in the training set for each of your 6 evaluation settings? Would it be possible to calculate it for each of the training samples in your toy AM language (i.e. each of the samples at the bottom of page 10) for illustrative purposes? It would be great if you could also calculate the number of correct instruction type traces as well. This would really help to clarify the inherent difficultly of the underlying problem.\n", "We have added a set of experiments to train baseline models on the dataset with 1M samples of length 50, and included the results in our paper. 
We observe that for seq2seq and seq2tree models, training with 1M samples mitigates the overfitting issue; however, for Stack LSTM, Queue LSTM and DeQue LSTM, both training and test accuracies drop to 0.", "We have updated the paper with the following changes:\n\n(1) We add a section (Section 5) including a running example to further describe the training curriculum, motivate the two-phase algorithm, and present the search space to show the difficulty of our parsing problem.\n\n(2) We move a figure illustrating the parsing process with the LL machine from the Appendix to the main body, i.e., Figure 2 in the revised version.\n\n(3) We have carefully revised our claim throughout the paper to mention that our strategy is only evaluated to be effective on the parser learning task. We also mention that we leave applying this strategy to more tasks as a future direction.\n", "Thank you for your comment! We would like to clarify this common confusion.\n\nWe want to emphasize that we highly agree that the learning neural programs on a generic machine, such as Turing Machine, is an important area. However, in many cases, such a problem can be hard or even implausible. For example, consider the problem of sorting, when only an unsorted list is given as the input, and the corresponding sorted list is given as the output, how is it possible to know whether the target program to be learned is a merge sort or a quick sort or any other sorting algorithm? Such an ambiguity cannot be easily resolved without providing more specification to restrict the search space.\n\nIn particular, another important field is to restrict the search space to be within a concrete domain by customizing the domain-specific machine. In doing so, the program to be learned can be restricted to the domain of interest as well. We have observed many important papers adopting this style. For example, two best papers from ICLR 2016 and ICLR 2017 are studying Neural Programmer-Interpreter (NPI). In NPI, each different task uses a different domain-specific machine; one of the latest program synthesis work, RobustFill (ICML 2017), also restricts its programs to be in the domain defined by a domain-specific language composed by string operations. These are all successful applications of applying a domain-specific machine/language to restrict the program of interest to be learned. Our work is following the same paradigm, but jumps a big step forward to consider a much more complex program domain.\n\nTo sum up, we agree that learning programs on a generic machine is important; but we also want to argue and highlight that learning programs on a domain specific machine may have more practical impact, and is definitely attracting more interests.\n", "Thank you for your review! We are working on a revision, and we will upload the new version no later than next week.\n\nWe will update our paper to explain more about our training method, including the curriculum learning and two-phase training algorithm. In particular, we will add a section to include some running examples, and describe what would happen if not following our proposed strategy, e.g., removing phase 2 in our algorithm. In general, using an alternative method, the model could overfit to a subset of the training examples, and thus fails to generalize.\n\nFor the choice of hyper-parameters, F (the number of different function IDs) and K (the maximal number of elements in each stack frame’s list) are parameters of the LL machine, not the training algorithm. 
These parameters ensure that the LL machine is expressive enough to serve as a parser. If the values of them are too small, then there exists no program that can run the machine to simulate the parser for a complex grammar.\n\nFor the curriculum learning, we want to point out that we only report our approach employing the curriculum training. If curriculum training is not employed, then our model cannot even fit to the training data, and thus will fail completely on the test data. This point has been explained in Section 4.1 and Section 5. We will make it more explicit in our next revision.\n\nOur re-implementation of the methods from Grefenstette et al. 2015 can replicate their results on their tasks. We will open-source our code for replication of both Grefenstette et al. 2015 and our experiments after the double-blind review period.\n", "Thank you for your review! We will upload a revision by next week. We will update our paper to explain more about our training method. In particular, we will include some running examples to further explain our training algorithm and demonstrate the complexity of our parsing problem. About your questions:\n\n1. We will add more explanation about what will happen during training. To give you a teaser, consider an example input of 3 tokens, x+y, there are 72810 valid traces --- which seems not so many. But consider that we need to find one valid trace for each example, then there could be 72810^n combinations for a training set of n examples of length 3. In this sense, even n>=2 will render an exhaustive approach impractical. When the input length increases to 5 tokens, the number of valid traces increases to 50549580, and 33871565610 for 7. We can observe that the search space grows exponentially as input length increases. We hope this can give you an intuitive idea of the difficulty of the problem that we are working on. We will provide a more detailed explanation of the problem complexity in our revision.\n\n2. The challenge does not come from the length of the execution trace --- it is mostly linear to the length of the input and output. The main difficulty comes from the volume of the search space, which can grow exponentially large as the length of the execution trace increases.\n\n3. We believe our technique can be generalized to other tasks, but the evaluation on other tasks will not be easy. The main challenge is that we need to design a new domain-specific machine, a neural program architecture, so that we can test our RL-based strategy. This could result in a workload of an entirely new research. Notice that it is also not trivial to adapt RobustFill to other tasks, as we mentioned in our paper. Meanwhile, since Microsoft will not release the FlashFill test data, it can be hard to make a fair comparison on their task. Thus, we will choose to tone down our claim and make our contribution more specific to the parsing problem.\n\n4. We will add a section to include some running examples and failure modes in our revision, either in the main body or in the appendix depending on what reviewers prefer.\n", "Thank you for your review!\n\nAbout the generality of our approach, we would like to mention that CALL, RETURN and FINAL instructions are general and used in the design of non-differentiable machines for a wide range of different programs, e.g., in Neural Programmer-Interpreter (NPI). 
The unique instructions of the LL machine are SHIFT and REDUCE, which are two fundamental operations to build a parser, and they are also used in Shift-Reduce machines proposed in NLP field. Although these two instructions are for parser per se, we would like to emphasize that the LL machine is generic enough to handle a wide spectrum of grammars. We will revise our paper to make this point clearer.\n\nFor baseline models, we will train them on datasets with 1M samples including longer inputs. We may not be able to finish training on inputs of length 100 due to our hardware limitation, but will at least train on inputs of length 20, and we will update the results once we finish our experiments. However, we would like to point out that for some baseline models, it is already hard to fit to current training set with 10K samples. For example, the training accuracies of Stack-LSTM and DeQue-LSTM on LAMBDA dataset are below 3%, and the training accuracies of seq2seq on the two standard training sets are below 95%. We will open-source our code for replication after the double-blind review period. The code can also replicate the experiments in the original papers of the baseline models.\n\nFor the training curriculum, the average lengths of training samples are 5.6 (Lambda) and 9.3 (WHILE), which are not long. Training samples in the curriculum are manually generated, which are used to test our manually written parser. The reason why generating a small set of training samples is faster than writing a parser is twofold. First, debugging the parser takes a long time, and it could take longer without the help of the training curriculum. Second, there are only a few samples in the curriculum, and these samples are short, thus it does not take long to generate them.\n\nMeanwhile, as we mentioned in our paper, our model relies heavily on the training curriculum to find the correct traces. In order to fit to longer samples, we need to train the model to fit to all samples in previous lessons with shorter samples first. At the beginning, the model can only find traces for the simplest input samples, e.g., x + y. Then the model gradually learns to fit to samples of length 5, 7, etc., in the training curriculum. If we randomly initialize the model and then train it directly on samples of length 1000, then our model will completely fail to find any trace that leads to the correct output parse tree. We will update our paper to explain more about the curriculum as well.\n\nThe choice of hyper-parameters M1, M2 and M3 is based on our empirical results. Their values are not adaptively tuned to make sure that the training algorithm can search for candidate traces for enough time. For example, we can not simply stop the algorithm after finding 3 candidate traces, because we are not sure whether it can still find the 4th trace or more.\n\nFor the training algorithm, without the inner loop in phase 1, the model will trap into a local minimum without finding the whole set of traces. Most likely, the traces found in this case are wrong ones.\n\nThank you for your advice on writing! We defer these details to the Appendix to shorten the main body of our paper. Following your suggestions, we will add a section to include some running examples, and provide more descriptions of our training algorithm. 
Also, we will move some important details from the Appendix to the main body of the paper.", "The paper presents an impressive result on the task of learning a parser, but those achievements rest upon the design of a task-specific \"machine\", specifically devised for parsing. This blatantly misses the point of program induction (which the title suggests is the ultimate goal, i.e. learning \"complex programs from input-output examples\"). Maybe I am mistaken and learning a parser is all the authors are interested in? In that case the title is misleading at best.\n\nOtherwise, is the designed \"machine\" Turing-complete, such that there is a remote possibility of it learning non-parsing tasks? Such a prospective extension of this work would anyway warrant additional experiments and supporting evidence. The operation of the \"machine\" for performing tasks other than parsing would likely require different inductive biases." ]
[ 8, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Skp1ESxRZ", "iclr_2018_Skp1ESxRZ", "iclr_2018_Skp1ESxRZ", "HJlWhynmM", "HJKbiwmMG", "SJcoYPXff", "iclr_2018_Skp1ESxRZ", "r1atCHsbG", "SyKcRn9xf", "HJ7k9-5ef", "Hyp9KYPez", "iclr_2018_Skp1ESxRZ" ]
iclr_2018_S1WRibb0Z
Expressive power of recurrent neural networks
Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with the practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks – namely those that correspond to the Hierarchical Tucker (HT) tensor decomposition – has been proven to have exponentially higher expressive power than shallow networks. I.e. a shallow network of exponential width is required to realize the same score function as computed by the deep architecture. In this paper, we prove the expressive power theorem (an exponential lower bound on the width of the equivalent shallow network) for a class of recurrent neural networks – ones that correspond to the Tensor Train (TT) decomposition. This means that even processing an image patch by patch with an RNN can be exponentially more efficient than a (shallow) convolutional network with one hidden layer. Using theoretical results on the relation between the tensor decompositions we compare expressive powers of the HT- and TT-Networks. We also implement the recurrent TT-Networks and provide numerical evidence of their expressivity.
accepted-poster-papers
This paper offers a theoretical and empirical analysis of the expressivity of RNNs, in particular in comparison to TT decomposition. The reviewers argued the results were interesting and important, although there were issues with the clarity of some of the explanations. More critical reviewers argued the comparison basis with CP networks was not "fair" in that their shallowness restricted their expressivity w.r.t. TT. The experiments could be strengthened by making the explanations surrounding the setup clearer. This paper is borderline acceptable, and would have benefited from a more active discussion between the reviewers and the author. From reading the reviews and the author responses, I am leaning towards recommending acceptance to the main conference rather than the workshop track, as it is important to have theoretical work of this nature discussed at ICLR.
val
[ "ryxBhTjgM", "SJr9X58lz", "HyjYq-DgM", "HJgCQmtQG", "S1KW7QFQG", "r17DGXtXf", "rk4NWQt7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors of this paper first present a class of networks inspired by various tensor decomposition models. Then they focus on one particular decompostion known as the tensor train decomposition and points out an analogy between tensor train networks and recurrent neural networks. Finally the authors show that almost all tensor train networks (exluding a set of measure zero) require exponentially large width to represent in CP networks, which is analogous to shallow networks.\n\nWhile I enjoyed reading the gentle introduction, nice overview of past work, and the theoretical analysis that relates the rank of tensor train networks to that of CP netowkrs, I wasn't sure how to translate the finding into the corresponding neural network models, namely, recurrent neural networks and shallow MLPs.\n\nFor example, \n * How does the \"bad\" example (low TT-rank but exponentially large CP-rank) translate into a recurrent neural network?\n * For both TT-networks and CP-networks, there are multilinear interaction of the inputs/previous hidden states. How precise is the analogy? Can we somehow restrict the interactions to additive ones so that we can exactly recover MLPs or RNNs?\n\nI also did not find the experiments illuminating. First of all the authors need to provide more details about how CP or TT networks are applies to MNIST and CIFAR-10 datasets. For example, the number of input patches and the number of hidden units, etc. In addition, I would like to see the performance of RNNs and MLPs with the same number of units/rank in order to validate the analogy between these networks. Finally I think it makes sense to try some sequence datasets for which RNNs are typically used.\n\nMinor comments:\n * In p7 it would help readers to point out that B^{(s,t)} is an algebraic subset because it is an intersection of M_r and the set of matrices of rank at most q^{d/2} - 1, which is known to be algebraic.", "In this paper, the expressive power of neural networks characterized by tensor train (TT) decomposition, a chain-type tensor decomposition, is investigated. Here, the expressive power refers to the rank of tensor decomposition, i.e., the number of latent components. The authors compare the complexity of TT-type networks with networks structured by CP decomposition, which corresponds to shallow networks. It is proved that the space of TT-type networks with rank O(r) can be complex as the same as the space of CP-type networks with rank poly(r).\n\nThe paper is clearly written and easy to follow. \n\nThe contribution is clear and it is distinguished from previous studies.\n\nThough I enjoyed reading this paper, I have several concerns.\n\n1. The authors compare the complexity of TT representation with CP representation (and HT representation). However, CP representation does not have universality (i.e., some tensors cannot be expressed by CP representation with finite rank, see [1]), this comparison may not make sense. It seems the comparison with Tucker-type representation makes much more sense because it has universality. \n\n2. Connecting RNN and TT representation is a bit confusing. Specifically, I found two gaps.\n (a) RNNs reuse the same parameter against all the input x_1 to x_d. This means that G_1 to G_d in Figure 1 are all the same. That's why RNNs can handle size-varying sequences. 
\n (b) Standard RNNs do not use the multilinear units shown in Figure 3, but use a simple addition of an input and the output from the previous layer (i.e., h_t = f(Wx_t + Vh_{t-1}), where h_t is the t-th hidden unit, x_t is the t-th input, W and V are weights, and f is an activation function.) \nDue to the gaps, the analysis used in this paper seems not applicable to RNNs. If this is true, the story of this paper is somewhat misleading. Or, is your theory still applicable?\n\n[1] Hackbusch, Wolfgang. Tensor spaces and numerical tensor calculus. Vol. 42. Springer Science & Business Media, 2012.", "This paper investigates the expressive power of the tensor train decomposition relative to the CP-decomposition. The result of this paper is interesting and also important from the viewpoint of analysis of the tensor train decomposition.\n\nHowever, I think there is some room for improvement in this paper. Comments are as follows.\n\nC1.\nCould you describe more details about the importance of an irreducible algebraic variety? Especially, it will be nice if authors provide practical examples of tensors in $\\mathcal{M}_r$ and tensors not in $\\mathcal{M}_r$. The present description about $\\mathcal{M}_r$ is too simple and thus I cannot judge whether the restriction on $\\mathcal{M}_r$ is critical or not.\n\nC2. \nI wonder that the experiment for comparing TT-decomposition and CP-decomposition is fair, since CP-decomposition does not have the universal approximation property. Is it possible to conduct numerical experiments for comparing the ranks directly? For example, given a tensor with known CP-rank, could you measure the TT-rank of the tensor? Such experiments will improve persuasiveness of the main result presented in this paper.", ">The authors compare the complexity of TT representation with CP representation (and HT representation). However, CP representation does not have universality (i.e., some tensors cannot be expressed by CP representation with finite rank, see [1]), so this comparison may not make sense.\n\nWe believe that any tensor admits a finite CP-rank, which for a tensor A of dimension d and mode size n is bounded by n^d. This worst-case scenario is obtained by writing A = \\sum_{i_1 i_2 \\dots i_d} A_{i_1 i_2 \\dots i_d} e_{i_1} \\otimes e_{i_2} \\otimes \\dots \\otimes e_{i_d}, that is, we write A as a sum of elementary tensors (the tensor product basis).\n\n>RNNs reuse the same parameter against all the input x_1 to x_d. This means that G_1 to G_d in Figure 1 are all the same. That's why RNNs can handle size-varying sequences. \n\nThank you for raising this point. We believe that this statement will also hold true and have verified it numerically -- in all the experiments with randomly generated tensors in TT format with shared parameters, the same permutation as in the proof of Theorem 1 gave us a matrix of maximal rank. We have added a small discussion on this issue to the paper and provided details of the numerical experiment. https://ibb.co/ic0T4w\n\n>Standard RNNs do not use the multilinear units shown in Figure 3, but use a simple addition of an input and the output from the previous layer (i.e., h_t = f(Wx_t + Vh_{t-1}), where h_t is the t-th hidden unit, x_t is the t-th input, W and V are weights, and f is an activation function.) \nDue to the gaps, the analysis used in this paper seems not applicable to RNNs. If this is true, the story of this paper is somewhat misleading. 
Or, is your theory still applicable?\n\nAs we noted in the related work section, [Wu et al, 2016] recently explored RNNs with multiplicative interactions and found them to be quite effective. We can interpret TT-network as a multiplicative RNN from [Wu et a.l, 2016] with two differences: 1) we don’t use an activation function for the recurrent connection 2) we use a general 3-dimensional map defined by a TT-core tensor, while the map in [Wu et al., 2016] can be interpreted as a low-rank approximation of what we used.\nAs for the activation function, we think that even without it multiplicative RNNs can be flexible enough to be used in practice and thus their analysis can shed light on the behavior of RNNs in general. Also note, that although the recurrent connection doesn’t have an activation function, the feature map Ф can be arbitrarily complex.\nWe also believe that ReLU can be added to the analysis eventually (following the steps of [Cohen et al., 2016] who proved the exponential expressive power of HT-format and then followed up [Cohen and Shashua, 2016] with a generalization of the proof for the networks with activation function), and leave it as a future work.\n", "> Could you describe more details about the importance of an irreducible algebraic variety? Especially, it will be nice if authors provide practical examples of tensors in $\\mathcal{M}_r$ and tensors not in $\\mathcal{M}_r$. The present description about $\\mathcal{M}_r$ is too simple and thus I cannot judge whether the restriction on $\\mathcal{M}_r$ is critical or not.\n\nThank you for raising this point. The question of which tensors admit low-rank decompositions is very interesting and nontrivial. Typically tensors are obtained as the values of a function sampled on some uniform grid. For many functions such as polynomials, sin, exp there exist theoretical bounds on the magnitude of the TT-ranks of the resulting tensor, showing that they are small, and if one constructs a linear combinations of such functions we can estimate that TT-ranks (A + B) <= TT-ranks(A) + TT-ranks(B). \nIn general, when we sample a smooth function, the smoother function is the lower TT-ranks will be. Moreover if one introduces some small rounding parameter eps, for many tensors in practice it is possible to find a TT decomposition with the relative accuracy eps, but with much smaller ranks. White noise, on the other hand, will have the maximal TT-rank (with probability 1) because of the lack of smoothness or structure.\nThis can be thought as an analogy to Fourier series, where to approximate a smooth function with some accuracy only small amount of summands is required. In many applications, TT-ranks are modest and allow for computations with tensors which would be impossible to store explicitly (e.g. they might have 10^30 entries in full format).\n\n\n>I wonder that the experiment for comparing TT-decomposition and CP-decomposition is fair, since CP-decomposition does not have the universal approximation property. Is it possible to conduct numerical experiments for comparing the ranks directly? For example, given a tensor with known CP-rank, could you measure the TT-rank of the tensor? Such experiments will improve persuasiveness of the main result presented in this paper.\n\nThank you for this suggestion. First of all we would like to note that an arbitrary d-dimensional tensor A with mode size n admits canonical decomposition in the worst case of the rank n^d, which can be obtained in the form\n\\sum_{i1 i2 .. id } A_{i1 i2 .. 
i_d} e_{i_1} \\otimes e_{i_2} \\otimes \\dots \\otimes e_{i_d}, that is, we just write A in the tensor product basis, which implies that the CP-format also has the universal approximation property (however, CP-rank n^d is clearly impractical).\nAs for a comparison between CP-ranks and TT-ranks, it can be noted that if CP-rank = R then TT-ranks are bounded by R. This can be explained by the fact that if CP-rank = R then the rank of any matricization of the tensor is <= R, and TT-ranks are equal to matrix ranks of particular matricizations. We briefly state it in the beginning of Section 5 and in Table 2. \nThe tensors we work with in this paper are too large to be formed explicitly in order to estimate their CP-rank (although their TT-ranks are small). For small tensors, e.g. of size 3 x 3 x 3 x 3 with given TT-ranks, we have performed numerous experiments estimating their CP-rank, and in all the cases we found that they have maximal rank (as claimed in the paper). If you think that this analysis is necessary, we will extend Section 6 with the details of this experiment.\n", "> How does the \"bad\" example (low TT-rank but exponentially large CP-rank) translate into a recurrent neural network?\nThank you for this question; we have added the interpretation from the neural network point of view to the updated paper. https://ibb.co/eAeZeb\nBut note that the particular example is not that important since we proved that the statement of the theorem holds for almost all tensors (i.e. for a set of tensors of measure 1).\n\n> For both TT-networks and CP-networks, there are multilinear interactions of the inputs/previous hidden states. How precise is the analogy? Can we somehow restrict the interactions to additive ones so that we can exactly recover MLPs or RNNs?\nAs we noted in the related work section, [Wu et al., 2016] recently explored RNNs with multiplicative interactions and found them to be quite effective. We can interpret the TT-network as a multiplicative RNN from [Wu et al., 2016] with two differences: 1) we don't use an activation function for the recurrent connection; 2) we use a general 3-dimensional map defined by a TT-core tensor, while the map in [Wu et al., 2016] can be interpreted as a low-rank approximation of what we used.\nAs for the activation function, we think that even without it multiplicative RNNs can be flexible enough to be used in practice and thus their analysis can shed light on the behavior of RNNs in general. Also note that although the recurrent connection doesn't have an activation function, the feature map Ф can be arbitrarily complex.\nWe also believe that ReLU can be added to the analysis eventually (following the steps of [Cohen et al., 2016] who proved the exponential expressive power of HT-format and then followed up [Cohen and Shashua, 2016] with a generalization of the proof for the networks with activation function), and leave it as future work.\n\nIf you think this clarification is important for the understanding, we will extend the paragraph in the related work section dedicated to this issue.\n\n> I also did not find the experiments illuminating. First of all the authors need to provide more details about how CP or TT networks are applied to the MNIST and CIFAR-10 datasets. For example, the number of input patches and the number of hidden units, etc.\n\nIn our experiments, we chose the patch size to be 8 x 8 and the feature maps to be affine maps followed by the ReLU activation, and we set the number of such feature maps to 4. 
We have added this information to the experiments section.\n\n> In addition, I would like to see the performance of RNNs and MLPs with the same number of units/rank in order to validate the analogy between these networks.\n\nWe report the obtained accuracy with respect to the rank (6a) and the number of units (6b) in the paper.\n\n> Finally I think it makes sense to try some sequence datasets for which RNNs are typically used.\n\nWe agree that this experiment would be a good check, however since the focus of the current work is theoretical analysis, we decided to postpone it to the future work.\n\nWe also would like to note that some results of the architectures similar to the proposed on sequential datasets can be found in [Wu et al., 2016, Fig. 2] and an ICLR 2018 submission https://openreview.net/forum?id=HJJ0w--0W\n\n > * In p7 it would help readers to point out that B^{(s,t)} is an algebraic subset because it is an intersection of M_r and the set of matrices of rank at most q^{d/2} - 1, which is known to be algebraic.\nThank you for this remark, we have added this point to the proof.\n", "We would like to thank the reviewers for their time and effort to make our work better. To address the raised concerns we answered each reviewer in individual messages below and updated the paper in the following ways:\n\n1) We have added a less formal explanation of the example constructed in the proof of Theorem 1.\n2) We have added values of the hyperparameters used for the numerical experiments.\n3) We have added a discussion on generalizing Theorem 1 to the case of shared TT-cores.\n" ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ "iclr_2018_S1WRibb0Z", "iclr_2018_S1WRibb0Z", "iclr_2018_S1WRibb0Z", "SJr9X58lz", "HyjYq-DgM", "ryxBhTjgM", "iclr_2018_S1WRibb0Z" ]
iclr_2018_rJlMAAeC-
Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction
To overcome the limitations of Neural Programmer-Interpreters (NPI) in its universality and learnability, we propose the incorporation of combinator abstraction into neural programming and a new NPI architecture to support this abstraction, which we call Combinatory Neural Programmer-Interpreter (CNPI). Combinator abstraction dramatically reduces the number and complexity of programs that need to be interpreted by the core controller of CNPI, while still allowing the CNPI to represent and interpret arbitrarily complex programs by the collaboration of the core with the other components. We propose a small set of four combinators to capture the most pervasive programming patterns. Due to the finiteness and simplicity of this combinator set and the offloading of some burden of interpretation from the core, we are able to construct a CNPI that is universal with respect to the set of all combinatorizable programs, which is adequate for solving most algorithmic tasks. Moreover, besides supervised training on execution traces, CNPI can be trained by policy gradient reinforcement learning with appropriately designed curricula.
accepted-poster-papers
This paper presents a functional extension to NPI, allowing the learning of simpler, more expressive programs. Although the conference does not put explicit bounds on the length of papers, the authors pushed their luck with their initial submission (a body of 14 pages). It is clear, from the discussion and the reviews, however, that the authors have sought to substantially reduce the length of their paper while improving its clarity. Reviewers found the method and experiments interesting, and two out of three heartily recommend it for acceptance to ICLR. I am forced to discount the score of the third reviewer, which does not align with the content of their review. I had discussed the issue of length with them, and am disappointed that they chose not to adjust their score to reflect their assessment of the paper, but rather their displeasure at the length of the paper (which, as stated above, does push the boundary a little). Overall, I recommend accepting this paper, but warn the authors that this is a generous decision, heavily motivated by my appreciation for the work, and that they should be careful not to try such stunts in future conferences in order to preserve the fairness of the submission process.
train
[ "Syil6yaVf", "rkRojIHxz", "BkswAkLlG", "ByvgbYFeG", "rJ7yRfj7M", "HkCyUXWMG", "ryjjDGbMM", "SyKHGQbfz", "rJQJbQWff", "S1HgOM-Gz", "HkID7QZfz", "H15q-7WGM", "ryggtIU-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "I have now also read the revised version of the paper, i.e., the 12-page version. \n\nThe paper is very interesting to read, however slightly hard to digest when you are not familiar with NPI. \n\nThe paper presents a clear contribution in addition to previous work, i.e., identifying and proposing a set of four combinators that improves the universality of neural programmer-interpreters. An algorithmic framework is presented for inference and example executions are provided. \n\nThe analysis and evaluation of the approach are appropriate, both from a theoretical and experimental point of view. However, the execution examples seem very small and it is hard to predict the generality and scalability of the approach (at least for me). How does the technique scale for large problems / applications?\n\nThe paper is very well written and clearly highlights its contribution. However, in my opinion the paper breaks the submission guidelines. The paper is too long, 12 pages (+refs and appendix, in total 17 pages), while the page limit is 8 pages (+refs and appendix). The paper is more suitable for a journal, where page limit is less of an issue.\n\nI have generally a problem with papers that are far too long. The page limits are there for a reason, e.g., all papers should be given an equal amount of space to express the ideas and evaluate them. Although the page limit (8 pages) is a recommendation at this conference, this is the first time I see a paper that breaks / stretches the limit so significantly. I think many of the papers submitted could have been of higher quality, have better evaluations, etc. if they also had stretched the page limits by 50%. I think all papers should be judged based on the same restrictions/limitation, scope, etc. \n", "The paper is interesting to read and gives valuable insights. \n\nHowever, the paper clearly breaks the submission guidelines. The paper is far too long, 14 pages (+refs and appendix, in total 19 pages), while the page limit is 8 pages (+refs and appendix). Therefore, the paper should be rejected. I can not foresee how the authors should be able to squeeze to content into 8 pages. The paper is more suitable for a journal, where page limit is less of an issue.", "Quality\nThe paper is very interesting and clearly motivated. The idea of importing concepts from functional programming into neural programming looks very promising, helping to address a bit the somewhat naive approach taken so far in the deep learning community towards program induction. However, I found the model description difficult to fully understand and have significant unresolved questions - especially *why* exactly the model should be expected to have better universality compared to NPI and RNPI, given than applier memory is unbounded just like NPI/RNPI program memories are unbounded.\n\nClarity\nThe paper does a good job of summarizing NPI and motivating the universality property of the core module. \n\nI had a lot of questions while reading:\n\nWhat is the purpose of detectors? It is not clear what is being detected. From the context it seems to be encoding observations from the environment, which can vary according to the task and change during program execution. The detector memory is also confusing. In the original NPI, it is assumed that the caller knows which encoder is needed for each program. In CNPI, is this part learned or more general in some way?\n\nAppliers - is it the case that *every* program apart from the four combinators must be written as an applier? 
For example ADD1, BSTEP, BUBBLESORT, etc. all must be implemented as an applier, and programs that cannot be implemented as appliers are not expressible by CNPI?\n\nMemory - combinator memory looks like a 4-way softmax over the four combinators, right? The previous NPI program memory is analogous then to the applier memory.\n\nEqn 3 - binarizing the detector output introduces a non-differentiable operation. How is the detector then trained e.g. from execution traces? Later I see that there is a notion of a “correct condition” for the detector to regress on, which makes me confused again about what exactly the output of a detector means.\n\nComputing the next subprogram - since the size of applier memory is unbounded, the core still needs to be aware of an unlimited number of subprograms. I must be missing something here - how does the proposed model therefore achieve better universality than the original NPI and RNPI models?\n\nAnalysis - for the claim of perfect generalization, I think this will not generally hold true for perceptual inputs. Will the proposed model only be useful in discrete domains for algorithmic tasks, or could it be more broadly applicable, e.g. to robotics tasks?\n\nOriginality\nThe methods proposed in this paper are quite novel and start to bridge an important gap between neural program induction and functional programming, by importing the concept of combinator abstraction into NPI.\n\nSignificance\nThe paper will be significant to people interested in NPI-related models and neural program induction generally, but on the other hand, there is currently not yet a “killer application” for this line of work. \n\nThe experiments appear to show significant new capabilities of CNPI compared to NPI and RNPI in terms of better generalization and universality, as well as being trainable by reinforcement learning.\n\nPros\n- Learns new programs without catastrophic forgetting in the NPI core, in particular where previous NPI models fail.\n- Detector training is decoupled from core and memory training, so that perfect generalization does not have to be re-verified after learning new behaviors.\n\nCons\n- So far lacking useful applications in the real world. Could the techniques in this paper help in robotics extensions to NPI? (see e.g. https://arxiv.org/abs/1710.01813)\n- Adds a significant amount of further structure into the NPI framework, which could potentially make broader applications more complex to implement. Do the proposed modifications reduce generality in any way?\n", "The authors propose a variant of the neural programmer-interpreter that can support so-called combinators for composing and structuring computations. In a sense, programs in this variant are at a higher level than those in the original neural programmer-interpreter. The distinguishing aspect of the neural programmer-interpreter is that it learns a generic core (which in the variant of the paper corresponds to an interpreter of the programming language) and programs for concrete tasks simultaneously. Increasing the expressivity of the language with combinators has a danger of making the training of the core very difficult. The authors avoid this pitfall by carefully re-designing the deterministic part of the core. For instance, they separate out the evaluation of the detector from the LSTM used for the core. Also, they use a fixed routine for parsing the applier instruction. The authors describe two ways of training their variant of the neural programmer-interpreter. 
The first is similar to the existing methods, and trains the variant using traces. The second is different and trains the variant using just input-output pairs but under carefully designed curriculum. The authors experimentally show that their approach leads to a more stable core of the neural programmer-interpreter that is close to being universal, in the sense that the core knows how to interpret commands.\n\nI found the new architecture of the neural programmer-interpreter very interesting. It is carefully crafted so as to support expressive combinators without making the learning more difficult. I can't quite judge how strong their experimental evaluations are, but I think that learning a neural programmer-interpreter from just input-output pairs using RL techniques is new and worth being pursued further. I am generally positive about accepting this paper to ICLR'18.\n\nI have three complaints, though. First, the paper uses 14 pages well over 8 pages, the recommended limit. Second, it has many typos. Third, the authors claim universality of the approach. When I read this claim, I expected a theorem initially but later I realized that the claim was mostly about informal understanding and got disappointed slightly. I hope that the authors consider these complaints when they revise the paper.\n\n* abstract, p1: is is universal -> is universal\n* p2: may still intractable to provable -> may still be intractable to prove\n* p2: import abstraction -> important abstraction\n* p2: a_(t+1)are -> a_(t+1) are\n* p2: Algorithm 1 The -> Algorithm 1. The\n* Algorithm1, p3: f_lstm(c,p,h) -> f_lstm(s,p,h)\n* p3: learn to interpreting -> learn to interpret\n* p3: it it common -> it is common\n* p3: The two program share -> The two programs share\n* p3: that server as -> that serve as\n* p3: be interpret by -> be interpreted by\n* p3: (le 9 in our -> (<= 9 in our\n* Figure 1, p4: the type of linrec is wrong.\n* p6: f_d et -> f_det\n* p8: it+1 -> i_(t+1)\n* p8: detector. the -> detector. The\n* p9: As I mentioned, I suggest you to make clear that the claim about universality is mostly based on intuition, not on theorem.\n* p9: to to -> to\n* p10: the the set -> the set\n* p11: What are DETs?", "In response to reviewers' comments, we have made the following changes in the revision:\n1. We have shortened the paper from 14 to 12 pages (+refs and appendix, from 19 to 17 pages) while preserving most of the important contents by: \n1) abbreviating the description of NPI and removing NPI's inference algorithm (Algorithm 1 in the original version) in Section 2.1; \n2) rewriting some paragraphs (especially those in Section 5) to make them more succinct; \n3) substantial re-typesetting (e.g. placing some figures and tables side by side, which is also common practice in other submissions).\n\n2. We have added a theorem and a proposition on the universality of CNPI in Section 4, along with a proof of the theorem in Appendix C.\n\n3. We have added a discussion section to discussion one potential application of CNPI besides solving algorithmic tasks, namely natural language understanding as inferring and executing programs.\n\n4. We have corrected a number of typos.", "Thanks very much for your comments. The original version is indeed too long. We have uploaded a revision. 
We have shortened the paper from 14 to 12 pages (+refs and appendix, from 19 to 17 pages) while preserving most of the important contents by:\n1) abbreviating the description of NPI and removing NPI's inference algorithm (Algorithm 1 in the original version) in Section 2.1;\n2) rewriting some paragraphs (especially those in Section 5) to make them more succinct;\n3) substantial re-typesetting (e.g. placing some figures and tables side by side, which is also common practice in other submissions). In order to present the somewhat intricate idea as clear as possible, we use in this paper quite a few figures and tables. The bad typesetting of them in the original version made the manuscript unnecessarily long.\nConsidering the dense contents of the paper this is the best that we can do. \n\nWe'd like to mention that the 12-page revision is in fact shorter than a number of other submissions, e.g.:\n1. Modular Continual Learning in a Unified Visual Environment (https://openreview.net/forum?id=rkPLzgZAZ), 14 pages\n2. Towards Synthesizing Complex Programs From Input-Output Examples (https://openreview.net/forum?id=Skp1ESxRZ), 16 pages\n3. Sobolev GAN (https://openreview.net/forum?id=SJA7xfb0b), 15 pages\n4. N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning (https://openreview.net/forum?id=B1hcZZ-AW), 13 pages", "Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. We will try to answer your questions and concerns one by one below.\n> First, the paper uses 14 pages well over 8 pages, the recommended limit.\nRe: We have shortened the paper from 14 to 12 pages (+refs and appendix, from 19 to 17 pages) while preserving most of the important contents by:\n1) abbreviating the description of NPI and removing NPI's inference algorithm (Algorithm 1 in the original version) in Section 2.1;\n2) rewriting some paragraphs (especially those in Section 5) to make them more succinct;\n3) substantial re-typesetting (e.g. placing some figures and tables side by side, which is also common practice in other submissions). In order to present the somewhat intricate idea as clear as possible, we use in this paper quite a few figures and tables. The bad typesetting of them in the original version made the manuscript unnecessarily long.\nWe'd like to mention that the 12-page revision is in fact shorter than a number of other submissions, e.g.:\n1. Modular Continual Learning in a Unified Visual Environment (https://openreview.net/forum?id=rkPLzgZAZ), 14 pages\n2. Towards Synthesizing Complex Programs From Input-Output Examples (https://openreview.net/forum?id=Skp1ESxRZ), 16 pages\n3. Sobolev GAN (https://openreview.net/forum?id=SJA7xfb0b), 15 pages\n4. N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning (https://openreview.net/forum?id=B1hcZZ-AW), 13 pages\n\n> Second, it has many typos.\nRe: We have corrected these and some other typos in the revision. We apologize for the carelessness leading to so many typos and thank you very much for the effort of pointing them out.\n* Figure 1, p4: the type of linrec is wrong.\nDo you mean that linrec has fewer arguments than shown in Figure 2? The pseudo-code in Figure 1 is only for illustration purpose. We deliberately use a simpler version of linrec to make its connection with ADD and BSTEP more apparent.\n* p11: What are DETs?\nDETs stand for detectors. 
The abbreviation is defined in paragraph 1 of Section 3.1.", "> Analysis - for the claim of perfect generalization, I think this will not generally hold true for perceptual inputs. Will the proposed model only be useful in discrete domains for algorithmic tasks, or could it be more broadly applicable, e.g. to robotics tasks? Also related to:\n> So far lacking useful applications in the real world. Could the techniques in this paper help in robotics extensions to NPI?\nRe: We are not very familiar with robotics, but CNPI does has the potential capability of augmenting intelligent agents trained by RL to follow instructions and do tasks, which may have applications in robotics domain. We discuss this below. Though in this paper we only demonstrate the capability of CNPI in algorithm domain, we believe that the proposed approach is quite general and can potentially be applied to other domains. One such domains is the recent work of treating natural language understanding as inferring and executing programs, applied to semantic parsing for QA (e.g., Andreas et al. (2016), Liang et al. (2017)) and training agents by RL to follow instructions and generalize (e.g., Oh et al. (2017), Andreas et al. (2017), Denil et al. (2017)). Besides normal nouns and verbs, natural language contains ``higher-order'' words such as ``then'', ``if'' and ``until'', which play the critical role of controlling the ``execution'' of other verbs, substantially enhancing the expressive power of the language. Very recently, Anonymous (2018) shows empirically that the prevalent sequence-to-sequence models struggle at mapping instructions containing these words (e.g., ``twice'') to correct action sequence with good generalization. On the other hand, these words can readily be represented as combinators (e.g., def twice(a): a(); a()). By adding these words to the vocabulary and equipping the agent with CNPI-like components to interpret them as combinators, it would be possible to construct agents that display more complex and structured behavior following succinct instructions, and that generalize better due to the raised level of abstraction. We leave this for future work. We have replaced the conclusion section with a discussion section in the revision to discuss the potential applications of CNPI.\n--------\nJacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL, 2016.\nChen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Annual Meeting of the Association for Computational Linguistics (ACL), 2017.\nJunhyuk Oh, Singh Satinder, Lee Honglak, and Kholi Pushmeet. Zero-shot task generalization with multi-task deep reinforcement learning. In International Conference on Machine Learning (ICML), 2017.\nJacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning (ICML), 2017.\nMisha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017.\n\nWe hope that these replies and the revision resolve your questions. Any additional questions and suggestions are welcome and we will try our best to make things as clear as possible.", "Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. 
We will try to answer your questions and concerns one by one below.\n> especially *why* exactly the model should be expected to have better universality compared to NPI and RNPI, given than applier memory is unbounded just like NPI/RNPI program memories are unbounded. Also related to:\n> Computing the next subprogram - since the size of applier memory is unbounded, the core still needs to be aware of an unlimited number of subprograms. I must be missing something here - how does the proposed model therefore achieve better universality than the original NPI and RNPI models?\nRe: Applier memory is indeed unbounded. However, the core is in fact *not* aware of any actual applier programs. Let's take the BSTEP program in Figure 4 as an example (also see Figure 3 (c) and line 5-7 and 15-16 of Algorithm 1 in the revision). At the first execution step, the core does not directly call 'COMPSWAP' as the next subprogram. It calls 'a1'. Then the actual subprogram COMPSWAP's ID is looked up in the frame, which is constructed on the fly by the BSTEP applier when calling linrec. The _Parse function in Algorithm 1 and Lemma 1 in Appendix C guarantee that the frame will be filled with correct values.\nIn CNPI, the core is only responsible for interpreting combinators and is only aware of formal callable arguments. We offload the responsibility of interpreting appliers from the core to a parser. The two key facts are: 1) the execution of all appliers follows exactly the same pattern: call a combinator with a detector arguments and a fixed number of callable arguments and then return, and 2) the parser itself is a *fixed* program (see function _Parse in Algorithm 1) with no learning taking place at all. As a result, the parser can correctly interpret *any* applier with appropriately set program embeddings (according to equation (1)) regardless of how many applier programs are already stored in the program memory. We propose and prove a lemma (Lemma 1 in Appendix C) on the interpretation of appliers in the revision.\nThe distinguishing feature of CNPI that enables this separation of responsibility and that eventually provides the universality of CNPI is the dynamic binding of formal detectors and callable arguments to actual programs. We have rewritten the first half of Section 4 to explicitly propose a theorem and a proposition on the universality of CNPI and added Appendix C in the revision to prove the theorem. Please see the last part of our reply to Review 3's comments for more details.\n\n> Appliers - is it the case that *every* program apart from the four combinators must be written as an applier? For example ADD1, BSTEP, BUBBLESORT, etc all must be implemented as an applier, and programs that cannot be implemented as appliers are not expressible by CNPI?\nRe: Yes. Actually we have proposed a \"combinatory programing language\" for CNPI where programs are composed by iteratively defining appliers from the bottom up. We give a formal definition of combinatory programs in Appendix C in the revision. We propose a proposition in Section 4 stating that any recursive program is combinatorizable, i.e., can be converted to a combinatory equivalent.\nThis proposition shows that the set of all combinatory programs is adequate for solving most algorithmic tasks, considering that most, if not all, algorithmic tasks have a recursive solution. Instead of giving a formal proof of it, we propose a concrete algorithm for combinatorizing any program set expressing an recursive algorithm in Appendix B. 
Although we believe that the proposition is true (effectively, it says that the combinatory programming language is Turing-complete), we think that a formal proof of it would be too tedious to be included in this paper. The intuition behind is that during the execution of a combinatory programs, combinators and appliers call each other to form an alternating call sequence until reaching a ACT. Arbitrarily complex program structures can be expressed in this way (see the last paragraph of Section 3.1 and Figure 3 (c) and (d)). We'd like to point out that the circle formed by the mutual invocation of combinators and appliers is a very fundamental construct in the interpretation of functional program languages. It can be seen as a \"neural equivalent\" of the eval-apply circle that lies at the heart of a LISP evaluator. The book \"Structure and Interpretation of Computer Programs (2nd edition)\" has a good discussion on this (Section 4.1.1, Figure 4.1: The eval-apply cycle exposes the essence of a computer language. https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.1). The expressive power (Turing-completeness) of functional programming languages like LISP has been well recognized. Anyway, we admit that this is a weakness regarding the theoretical rigor of this paper, which could be improved by future work.", "> Third, the authors claim universality of the approach. When I read this claim, I expected a theorem initially but later I realized that the claim was mostly about informal understanding and got disappointed slightly.\nRe: The original version did lack a theorem, which is a major drawback regarding the completeness of the paper. In the revision we state the universality of CNPI with the following theorem and proposition in Section 4 (In fact we already mentioned the proposition in the original version. In the revision we propose it more explicitly):\n--------\nTheorem 1. If 1) the core along with the program embeddings of the set of four combinators and the built-in combinator _mapself are trained and verified before being fixed, and 2) the detectors for a new task are trained and verified, then CNPI can 1) interpret the combinatory programs of the new task correctly with perfect generalization (i.e. with any input complexity) by adding appliers to the program memory, and 2) maintain correct interpretation of already learned programs.\nProposition 1. Any recursive program is combinatorizable, i.e., can be converted to a combinatory equivalent.\n--------\nTheorem 1 states that CNPI is universal with respect to the set of all combinatorizable programs and that appliers can be continually added to the program memory to solve new tasks. Proposition 1 shows that this set of programs is adequate for solving most algorithmic tasks, considering that most, if not all, algorithmic tasks have a recursive solution. We give an induction proof of Theorem 1 in Appendix C, which is newly added in the revision. The proof is in fact quite straightforward. The distinguishing feature of CNPI that enables this proof is the dynamic binding of formal detectors and callable arguments to actual programs, which makes verification of combinator's execution (by the core) and verification of their invocation (by appliers) independent of each other. 
In contrast, it is impossible to conduct such a proof with NPI and RNPI which lack this feature.\nFor Proposition 1, instead of giving a formal proof, we propose a concrete algorithm for combinatorizing any program set expressing an recursive algorithm in Appendix B. Although we believe that Proposition 1 is true (effectively, it says that the combinatory programming language is Turing-complete), we think that a formal proof of it would be too tedious to be included in this paper. The intuition behind is that during the execution of a combinatory programs, combinators and appliers call each other to form an alternating call sequence until reaching a ACT. Arbitrarily complex program structures can be expressed in this way (see the last paragraph of Section 3.1 and Figure 3 (c) and (d)). We'd like to point out that the circle formed by the mutual invocation of combinators and appliers is a very fundamental construct in the interpretation of functional program languages. It can be seen as a \"neural equivalent\" of the eval-apply circle that lies at the heart of a LISP evaluator. The book \"Structure and Interpretation of Computer Programs (2nd edition)\" has a good discussion on this (Section 4.1.1, Figure 4.1: The eval-apply cycle exposes the essence of a computer language. https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.1). The expressive power (Turing-completeness) of functional programming languages like LISP has been well recognized. Anyway, we admit that this is a weakness regarding the theoretical rigor of this paper, which could be improved by future work.\nWe have rewritten the first half of Section 4 on the universality of CNPI to state our claims more clearly. We have also replaced the conclusion section with a discussion section in the revision to discuss the potential applications of CNPI.\n\nThanks again for the reviewing effort and any additional comments and suggestions are welcome.", "We have shortened the paper from 14 to 12 pages. We also added a discussion Section in the revision to discuss the potential applications of CNPI in other \"more real\" domains. Any additional questions or feedback are welcome.", "> What is the purpose of detectors? It is not clear what is being detected. From the context it seems to be encoding observations from the environment, which can vary according to the task and change during program execution. The detector memory is also confusing. In the original NPI, it is assumed that the caller knows which encoder is needed for each program. In CNPI, is this part learned or more general in some way? Also related to:\n> Eqn 3 - binarizing the detector output introduces a non-differentiable operation. How is the detector then trained e.g. from execution traces? Later I see that there is a notion of a “correct condition” for the detector to regress on, which makes me confused again about what exactly the output of a detector means.\nRe: Your understanding of detectors is basically correct. As described in Section 3.1, the detector, as a lightweight and more \"specialized\" version of the encoder in NPI, detects some condition (e.g. a pointer P2 reaching the end of array) in the environment and provides signals for the combinator to condition its execution. It outputs 0 if the condition satisfies, otherwise 1. As with the confusion about detector memory, you mentioned \"In the original NPI, it is assumed that the caller knows which encoder is needed for each program.\" This way of saying is not very precise. 
In the original NPI paper, encoders are constructed and used on a per task basis, rather than per program. All programs of a task use the same encoder, which is predetermined. Once a particular task has been given, the single shared encoder integrates tightly with the core and effectively becomes part of the monolithic model, not subject to any dynamic selection by the core. So no such thing as an encoder memory is needed in NPI. On the other hand, in CNPI it is not until the interpretation of an applier that which detector is needed for the next combinator to call is determined. This detector is then loaded from the detector memory and \"attached\" to the core. Different programs for the same task may use different detectors (e.g. COMPSWAP and BSTEP for bubble sort task in Figure 3). This architecture is more flexible and promotes the reusability of detectors *across* tasks. For example, a detector detecting pointer P2 reaching the end of array can be used in both grade-school addition task by ADD program and bubble sort task by BSTEP program (see Figure 1). This level of reusability is not easy to achieve in NPI. Reusable detectors can be continually added to the detector memory, just as appliers are added to the program memory, during the lifelong learning process of CNPI to enhance its capability.\n\n> Memory - combinator memory looks like a 4-way softmax over the four combinators, right? The previous NPI program memory is analogous then to the applier memory.\nRe: Whether to use two separate memories for combinators and appliers respectively or to use a single program memory is an implementation issue. While both approaches are feasible, in our current implementation we choose to use a single program memory to store both combinators and appliers and use a flag in the program embeddings to differentiate the two types. Considering this, Figure 1 is a little bit misleading. Anyway, this figure is only for illustration purpose.\n\n> Adds a significant amount of further structure into the NPI framework, which could potentially make broader applications more complex to implement. Do the proposed modifications reduce generality in any way?\nRe: CNPI does add a few essential structures (mainly the frames and the detector memory) into the NPI framework and make the model more complex. But as both the analytical and the experimental results show, the proposed modifications significantly *increase* universality and generality for algorithmic tasks. The limitation is perhaps that we do not discuss how to deal with perceptual input, for which the binary output of detectors may be not sufficient. More detectors types may be needed to extend CNPI to support perceptual inputs, but the proposed detector memory architecture and the dynamic selection of detectors can still be used as basis for such extensions. Overall, we believe that the gains are worth the added complexity.", "I read through this submission and found very interesting and potentially applicable models in this work. I carefully learnt the above 3 reviewer's comments and agreed that this submission provided a solid contribution in the field of reinforcement learning. \n\nMore implications (possible real application scenarios) could be added briefly at last. The article was well organized in an explicit and transparent way, which is good for the community. 
The length problem could be addressed by moving some of the detailed technical illustrations to the open-source platform to shorten the manuscript.\n\nIn all, considering the circulation period, in my opinion, I am very much looking forward to discussing with the authors at the conference rather than in journals." ]
[ -1, 3, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rkRojIHxz", "iclr_2018_rJlMAAeC-", "iclr_2018_rJlMAAeC-", "iclr_2018_rJlMAAeC-", "iclr_2018_rJlMAAeC-", "rkRojIHxz", "ByvgbYFeG", "BkswAkLlG", "BkswAkLlG", "ByvgbYFeG", "ryggtIU-f", "BkswAkLlG", "rkRojIHxz" ]
iclr_2018_HJvvRoe0W
An image representation based convolutional network for DNA classification
The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA. The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure. The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant. Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.
accepted-poster-papers
This paper addresses an important application in genomics, i.e. the prediction of chromatin structure from nucleotide sequences. The authors develop a novel method for converting the nucleotide sequences to a 2D structure that allows a CNN to detect interactions between distant parts of the sequence. The reviewers found the paper innovative, interesting and convincing. Two reviewers gave a 7 and there was one 6. The 6, however, indicated during rather lengthy discussion that they were willing to raise their scores if their comments were addressed. Hopefully the authors will address these comments in the camera ready version. Overall a solid application paper with novel insights and technical innovation.
val
[ "r1nyJDpZG", "r18RxrXlG", "BkZQ81QlG", "r19CoevgM", "r1IqLpczf", "Hyi4DyMzG", "r1bLW-MWz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public" ]
[ "Dear reviewer,\n\nThank you very much for your comment. We feel that you have made an excellent point here regarding the filter-size difference between the 1D-sequence CNN and the Hilbert CNN. Your suggestion has resulted in further insight into the particular advantages of the components of our approach, which, to our feeling, has led to non-negligible improvements of the manuscript. \n\nFirst, one comment on properties: while continuity and clustering properties are trivially preserved by keeping 1D sequence, the third property we are citing is not preserved by 1D sequence. Namely, different, separate subsequences going into the filter, are, on average, farthest apart within the sequence for Hilbert curves. So, beyond considering larger filter sizes, Hilbert curves can still have particular advantages because of mapping distal relationships (and are even optimal in that regard).\n\nTo investigate the interplay of the novelties we have been suggesting further, we ran the 1D-sequence CNN now with an identical number of parameters in all layers (e.g. a 7x7 convolution in the first layer of Hilbert-CNN corresponds to a 49x1). In addition, following a suggestion from one of the other reviewers, we computed recall, precision, area under precision-recall curve (AP) and AUC, where precision-recall and ROC curves were computed by filtering the softmax output. The corresponding additional results can be found in the updated version of the document, in tables 3, 4 and 5. \n\nWe find that :\n1. Both the 1D sequence input and the 2D image input significantly outperform the current state of the art (which, as we feel, justifies that our paper is discussed at a venue such as the ICLR).\n2. We further observe that the standard deviation of the networks across different training runs is much smaller for the 2D (e.g. 7x7) Hilbert curve representation than for the naive 1D (e.g. 49x1) one. \nWe find that the 2D representation yields improvement over the 1D sequence representation in terms of recall and precision, and, in terms of AP and AUC, even drastic improvements (raising performance by more than 5% on average over the 49x1 filters).\n\nWe conclude that:\n1. In terms of basic performance measures such as accuracy, recall, precision, the architecture of our network (which in comparison to prior work avoids large layers to precede the fully connected layers) is the decisive factor.\n2. Nevertheless, Hilbert curves enable \na. More robust predictions, as indicated by the significantly lower variance in accuracy across different runs.\nb. Filtering of the output for optimizing in terms of precision-recall - tradeoffs (which the 1D output does not reliably allow to do).\n\nOverall, these findings suggest that there is particular advantage with respect to using 2D representations in the form of space filling (in particular Hilbert) curves, while there is also substantial power in the proposed network structure (which, as a novelty, more decidedly reduces numbers of hidden nodes in layers preceding the fully connected layers). \n\nWe reworked the introduction and discussion to more properly reflect this finding and adapted the title accordingly. ", "Dear editors,\n\nthe authors addressed all of my comments and clearly improved their manuscript over multiple iterations. 
I therefore increased my rating from ‘6: Marginally above acceptance threshold’ to ‘7: Good paper, accept’.\nPlease note that the authors made important edits to their manuscript after the ICLR deadline and could hence not upload their most current version, which you can from https://file.io/WIiEw9. If you decided to publish the manuscript, I hence highly suggest using this (https://file.io/WIiEw9) version.\n\nBest,\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nThe authors present Hilbert-CNN, a convolutional neural network for DNA sequence classification. Unlike existing methods, their model does not use the raw one-dimensional (1D) DNA sequence as input, but two-dimensional (2D) images obtained by mapping sequences to images using spacing-filling Hilbert-Curves. They further present a model (Hilbert-CNN) that is explicitly designed for Hilbert-transformed DNA sequences. The authors show that their approach can increase classification accuracy and decrease training time when applied to predicting histone-modification marks and splice junctions. \n\nMajor comments\n=============\n1. The motivation of transforming sequences into images is unclear and claimed benefits are not sufficiently supported by experiments. The essence of deep neural networks is to learn a hierarchy of features from the raw data instead of engineering features manually. Using space filling methods such as Hilbert-curves to transform (DNA) sequences into images can be considered as unnecessary feature-engineering. \n\nThe authors claim that ‘CNNs have proven to be most powerful when operating on multi-dimensional input, such as in image classification’, which is wrong. Sequence-based convolutional and recurrent models have been successfully applied for modeling natural languages (translation, sentiment classification, …), acoustic signals (speech recognition, audio generation), or biological sequences (e.g. predicting various epigenetic marks from DNA as reviewed in Angermueller et al). \n\nThey further claim that their method can ‘better take the spatial features of DNA sequences into account’ and can better model ‘long-term interactions’ between distant regions. This is not obvious since Hilbert-curves map adjacent sequence characters to pixels that are close to each other as described by the authors, but distant characters to distant pixels. Hence, 2D CNN must be deep enough for modeling interactions between distant image features, in the same way as a 1D CNN.\n\nTransforming sequences to images has several drawbacks. 1) Since the resulting images have a small width and height but many channels, existing 2D CNNs such as ResNet or Inception can not be applied, which also required the authors to design a specific model (Hilbert-CNN). 2) Hilbert-CNN requires more memory due to empty image regions. 3) Due to the high number of channels, convolutional filters have more parameters. 4) The sequence-to-image transformation makes model-interpretability hard, which is in particular important in biology. 
For example, motifs of the first convolutional layers can not be interpreted as sequence motifs (as described in Angermueller et al) and it is unclear how to analyze the influence of sequence characters using attention or gradient-based methods.\n\nThe authors should more clearly motivate their model in the introduction, tone-down the benefit of sequence-to-image transformations, and discuss drawbacks of their model. This requires major changes of introduction and discussion.\n\n2. The authors should more clearly describe which and how they optimized hyper-parameters. The authors should optimize the most important hyper-parameters of their model (learning rate, batch size, weight decay, max vs. average pooling, ELU vs. ReLU, …) and baseline models on a holdout validation set. The authors should also report the validation accuracy for different sequence lengths, k-mer sizes, and space filling functions. Can their model be applied to longer sequences (>= 1kbp) which had been shown to improve performance (e.g. 10.1101/gr.200535.115)? Does Figure 4 show the performance on the training, validation, or test set?\n\n3. It is unclear if the performance gain is due the proposed sequence-to-image transformation, or due to the proposed network architecture (Hilbert-CNN). It is also unclear if Hilbert-CNNs are applicable to DNA sequence classification tasks beyond predicting chromatin states and splice junctions. To address these points, the authors should compare Hilbert-CNN to models of the same capacity (number of parameters) and optimize hyper-parameters (k-mer size, convolutional filter size, learning rate, …) in the same way as they did for Hilbert-CNN. The authors should report the number of parameters of all models (Hilbert-CNN, Seq-CNN, 1D-sequence-CNN (Table 5), and LSTM (Table 6), …) in an additional table. The authors should also compare Hilbert-CNN to the DanQ architecture on predicting epigenetic markers using the same dataset as reported in the DanQ publication (DOI: 10.1093/nar/gkw226). The authors should also compare Hilbert-CNNs to gapped-kmer SVM, a shallow model that had been successfully applied for genomic prediction tasks.\n\n4. The authors should report the AUC and area under precision-recall curve (APR) in additional to accuracy (ACC) in Table 3.\n\n5. It is unclear how training time was measured for baseline models (Seq-CNN, LSTM, …). The authors should use the same early stopping criterion as they used for training Hilber-CNNs. The authors should also report the training time of SVM and gkm-SVM (see comment 3) in Table 3.\n\n\nMinor comments\n=============\n1. The authors should avoid uninformative adjectives and clutter throughout the manuscript, for example ‘DNA is often perceived’, ‘Chromatin can assume’, ‘enlightening’, ‘very’, ‘we first have to realize’, ‘do not mean much individually’, ‘very much like the tensor’, ‘full swing’, ‘in tight communication’, ‘two methods available in the literature’.\n\nThe authors should point out in section two that k-mers can be overlapping.\n\n2. Section 2.1: One-hot vectors is not the only way for embedding words. The authors should also mention Glove and word2vec. Similar approaches had been applied to protein sequences (DOI: 10.1371/journal.pone.0141287)\n\n3. The authors should more clearly describe how Hilbert-curves map sequences to images and how images are cropped. What does ‘that is constructed in a recursive manner’ mean? Simply cropping the upper half of Figure 1c would lead to two disjoint sequences. 
What is the order of Figure 1e?\n\n4. The authors should consistently use ‘channels’ instead of ‘full vector of length’ to denote the dimensionality of image pixels.\n\n5. The authors should use ‘Batch norm’ instead of ‘BN’ in Figure 2 for clarification.\n\n6. Hilbert-CNN is similar to ResNet (DOI: 10.1371/journal.pone.0141287), which consists of multiple ‘residual blocks’, where each block is a sequence of ‘residual units’. A ‘computational block’ in Hilbert-CNN contains two parallel ‘residual blocks’ (Figure 3) instead of a sequence of ‘residual units’. The authors should use ‘residual block’ instead of ‘computational block’, and ‘residual units’ as in the original ResNet publication. The authors should also motivate why two residual units/blocks are applied in parallel instead of sequentially.\n\n7. Caption table 1: the authors should clarify if ‘Output size’ is ‘height, width, channels’, and explain the notation in ‘Description’ (or refer to the text.)", "There are functional elements attached to the DNA sequence, such as transcription factors and different kinds of histones, as stated in this ms. A hidden assumption is that the binding sites of these functional elements over the genome share some common features. It is therefore biologically interesting to predict if a new DNA sequence could be a binding site. Naturally this is a classification problem where the input is the DNA sequence and the output is whether the given sequence is a binding site.\n\nThis ms presents a novel way to transform the DNA sequence into a 3-dimensional tensor which can be easily utilised by CNNs for images. The DNA sequence is first made into a list of 4-mers. Then each 4-mer is coded as a 4^4=256 dimensional vector. The order of the 4-mers is then coded into an image using the Hilbert curve, which presumably has nice properties for keeping spatial information.\n\nI am not familiar with neural networks and do not comment on the methods, but rather comment from the application point of view. \n\nFirst, to the best of my knowledge, it is still controversial whether the binding sites of different histones carry special features. I mean it could be possible that the assumption I mentioned in the beginning may not hold for this special application, especially for human data. I feel this method is more suitable for transcription factor motif data; see https://www.nature.com/articles/nbt.3300\n\nSecond, the experimental data from 2005 were measured using microarrays, which use probes 500bp long. But the whole binding site for a nucleosome (or histone complex) is 147bp, which is much shorter than the probe. Nowadays we have more accurate sequencing data for nucleosomes (check https://www.ncbi.nlm.nih.gov/pubmed/26411474). I am not sure whether this result will generalise to other similar datasets. \n\nThird, the results only list the accuracy; it would be interesting to see the proportion of false negatives.\n\nIn general I feel the transformation is quite useful; it nicely preserves the spatial information, as can also be seen from the improved results over all datasets. The result, in my opinion, is not sufficient to support the assumption that we could predict the DNA structures solely based on the sequence.\n\n", "The authors of this manuscript transformed the k-mer representation of DNA fragments to a 2D image representation using the space-filling Hilbert curves for the classification of chromatin occupancy. In general, this paper is easy to read. 
The components of the proposed model mainly include Hilbert curve theory and CNN which are existing technologies. But the authors make their combination useful in applications. Some specific comments are:\n\n1. In page 5, I could not understand the formula d_kink < d_out. d_link ;\n2. There may exist some new histone modification data that were captured by the next-generation sequencing (e.g. ChIP-seq) and are more accurate; \n3. It seems that the authors treat it as a two-class problem for each data set. It would be more useful in real applications if all the data sets are combined to form a multi-class problem.\n", "Dear Akash and team,\n\nThank you for your interest in our paper, we appreciate your effort in reproducing the results.\n\nWe thoroughly read your report, and noted some differences between your implementation and ours which are likely to have caused the differences in results. First, the early stopping approach that is used differs: the use of GL0 stops the training process earlier than then the GL2 we used. Additionally, we used a combination of GL2 and No-improvement-in-N-steps. Second, the output is of a different form: instead of using 0/1 values for class prediction, we used one-hot vectors as the output. This generally improves prediction accuracy (see also https://stackoverflow.com/questions/17469835/why-does-one-hot-encoding-improve-machine-learning-performance). Third, we used self-defined droput and normalization which improves efficiency and accuracy. Finally, our k-mer representation is indeed different from the one you are using: the value of a k-mer is based on the occurrence of the subsequence in the dataset (higher occurence - lower number (Bojian: or higher?)). The latter however is probably not causing a significant difference, according to some tests we ran. We will add the missing information in the next version of our manuscript in as far as space allows.\n\nWe hope this answers your questions. If you have any further questions, feel free to contact us.", "Hello authors,\n\nWe found this paper really interesting and therefore took this opportunity to fully understand your work by reproducing the results. Please refer to the link below to review the challenges that one can face while using your proposed paper for further innovations.\nLink: \n\nhttps://drive.google.com/file/d/1ged4-Yxisl8HKltGEvBMf936BZCJ4SQZ/view?usp=sharing\n\nThanks and regards\nAkash Singh\[email protected]\n(on behalf of team)\n", "Hi authors,\n\nThe usage of a Hilbert space filling curve for chromatin structure classification is not clearly motivated to me. If I understand the method correctly, the goal is to perform some deterministic mapping from the 1D structure to the 2D space filling curve in order to leverage the spatial structure of popular CNN methods. However, since the Hilbert space filling curve does not leverage any biologically motivated information to construct the 2D image, it is unclear how this would help. The Hilbert space filling curve has nice properties such as (continuity, clustering, etc), but the original 1D sequence also possesses all of these properties. It seems simply using a larger filter size on the 1D sequence should achieve similar results since the 1D sequence itself should be able to achieve the continuity, clustering, etc properties of the Hilbert space filling curve. 
\n\nIn addition, as the below reviewers have mentioned, the success of CNNs is not specific to 2D images, and there is not really a theoretical argument for converting a 1D structure to a 2D structure. Am I missing something here regarding the problem of chromatin structure classification or the representational power of Hilbert space filling curves?\n\nFurthermore, the results section has not fully convinced me that the Hilbert-CNN is more powerful than applying a direct 1D CNN on the sequence. The method compared against in Nguyen et al (2016) only uses a 7 x 1 filter size in the first layer while the Hilbert-CNN uses a 7x7 filter size in the first layer. Perhaps a more convincing comparison would be against a method that uses the same number of parameters (i.e. a 1D method that uses a filter size of 49 x 1 in the first layer) to demonstrate the utility of converting a 1D sequence to a 2D image.\n\nI would be very interested if the Hilbert space filling curve were indeed a superior representation compared to its 1D counterpart." ]
[ -1, 7, 7, 7, -1, -1, -1 ]
[ -1, 5, 3, 5, -1, -1, -1 ]
[ "r1bLW-MWz", "iclr_2018_HJvvRoe0W", "iclr_2018_HJvvRoe0W", "iclr_2018_HJvvRoe0W", "Hyi4DyMzG", "iclr_2018_HJvvRoe0W", "iclr_2018_HJvvRoe0W" ]
iclr_2018_rydeCEhs-
SMASH: One-Shot Model Architecture Search through HyperNetworks
Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks.
accepted-poster-papers
This paper proposes a method for having a meta deep learning model generate the weights of a main model given a proposed architecture. This allows the authors to search over the space of architectures efficiently. The reviewers agreed that the paper was very well composed, presents an interesting and thought provoking idea and provides compelling empirical analysis. An exploration of the failure modes of the approach is highly appreciated. The lowest score was also of quite low confidence, so the overall score should probably be one point higher. Pros: - Very well written and composed - "Thought provoking" - Some strong experimental results - Analysis of weaker experimental results (failure modes) Cons: - Some weak results (also in pros, however)
train
[ "SkmGrjvlz", "rJ200-5ez", "SycMimAgG", "HysNsD6XM", "ryK_yyjGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Summary of paper - This paper presents SMASH (or the one-Shot Model Architecture Search through Hypernetworks) which has two training phases (one to quickly train a random sample of network architectures and one to train the best architecture from the first stage). The paper presents a number of interesting experiments and discussions about those experiments, but offers more exciting ideas about training neural nets than experimental successes. \n\nReview - The paper is very well written with clear examples and an excellent contextualization of the work among current work in the field. The introduction and related work are excellently written providing both context for the paper and a preview of the rest of the paper. The clear writing make the paper easy to read, which also makes clear the various weaknesses and pitfalls of SMASH. \n\nThe SMASH framework appears to provide more interesting contributions to the theory of training Neural Nets than the application of said training. While in some experiments SMASH offers excellent results, in others the results are lackluster (which the authors admit, offering possible explanations). \n\nIt is a shame that the authors chose to push their section on future work to the appendices. The glimmers of future research directions (such as the end of the last paragraph in section 4.2) were some of the most intellectually exciting parts of the paper. This choice may be a reflection of preferring to highlight the experimental results over possible contributions to theory of neural nets. \n\n\nPros - \n* Strong related work section that contextualizes this paper among current work\n* Very interesting idea to more efficiently find and train best architectures \n* Excellent and thought provoking discussions of middle steps and mediocre results on some experiments (i.e. last paragraph of section 4.1, and last paragraph of section 4.2)\n* Publicly available code \n\nCons - \n* Some very strong experimental results contrasted with some mediocre results\n* The balance of the paper seems off, using more text on experiments than the contributions to theory. \n* (Minor) - The citation style is inconsistent in places. \n\n=-=-=-= Response to the authors\n\nI thank the authors for their thoughtful responses and for the new draft of their paper. The new draft laid plain the contribution of the memory bank which I had missed in the first version. As expected, the addition of the future work section added further intellectual excitement to the paper. \n\nThe expansion of section 4.1 addressed and resolved my concerns about the balance of the paper by effortless intertwining theory and application. I do have one question from this section - In table 1, the authors report p-values but fail to include them in their interpretation; what is purpose of including these p-values, especially noting that only one falls under the typical threshold for significance?\n", "This paper is about a new experimental technique for exploring different neural architectures. It is well-written in general, numerical experiments demonstrate the framework and its capabilities as well as its limitations. \n\nA disadvantage of the approach may be that the search for architectures is random. It would be interesting to develop a framework where the search for the architecture is done with a framework where the updates to the architecture is done using a data-driven approach. 
Nevertheless, there are so many different neural architectures in the literature and this paper is a step towards comparing various architectures efficiently. \n\nMinor comments:\n\n1) Page 7, \".. moreso than domain specificity.\" \n\nIt may be better to spell the word \"moreso\" as \"more so\", please see: https://en.wiktionary.org/wiki/moreso", "This paper tackles the problem of finding an optimal architecture for deep neural nets . They propose to solve it by training an auxiliary HyperNet to generate the main model. The authors propose the so called \"SMASH\" algorithm that ranks the neural net architectures based on their validation error. The authors adopt a memory-bank view of the network configurations for exploring a varied collection of network configurations. It is not clear whether this is a new contribution of this paper or whether the authors merely adopt this idea. A clearer note on this would be welcome. My key concern is with the results as described in 4.1.; the correlation structure breaks down completely for \"low-budget\" SMASH in Figure 5(a) as compared Figure (4). Doesn't this then entail an investigation of what is the optimal size of the hyper network? Also I couldn't quite follow the importance of figure 5(b) - is it referenced in the text? The authors also note that SMASH is saves a lot of computation time; some time-comparison numbers would probably be more helpful to drive home the point especially when other methods out-perform SMASH. \nOne final point, for the uninitiated reader- sections 3.1 and 3.2 could probably be written somewhat more lucidly for better access.", "We would like to thank each of the reviewers for their time and constructive feedback, which we have incorporated in this revision. Specifically:\n\n-We updated second and third parts of Section 4.1 to more thoroughly investigate the correlation between SMASH scores and resulting validation scores by examining scores for a variety of HyperNet architectures and ratios of generated vs. static weights. We examine the strength and significance of the correlation between SMASH scores and validation scores using Pearson's R. We have moved the two original experiments addressing the breakdown of the correlation to the appendix, and updated them to properly reference the previously unreferenced figure.\n\n-We have slightly improved writing throughout, fixing the noted typos, and changing our wording to make clear that the memory bank view is a novel development which we are introducing in this work.\n\n-We have moved the Future Work section from the appendix into the main body of the paper at the suggestion of Reviewer 2. This section had previously been relegated to the appendix at the behest of a previous review cycle for an earlier revision of the paper.\n \n-We have added some simple runtime numbers in Table 2 for comparison to other architecture search methods\n\nThanks again for your reviews.\n\nBest,\n\nPaper1 Authors\n\n", "Hi Reviewer2,\n\nThank you for your detailed and constructive feedback. We are preparing a revision and a complete response but would like a quick bit of clarification as to specifically which experimental results fall under the lackluster banner.\n\nThanks,\n\nPaper1 Authors" ]
[ 7, 7, 6, -1, -1 ]
[ 3, 4, 2, -1, -1 ]
[ "iclr_2018_rydeCEhs-", "iclr_2018_rydeCEhs-", "iclr_2018_rydeCEhs-", "iclr_2018_rydeCEhs-", "SkmGrjvlz" ]
iclr_2018_ByBAl2eAZ
Parameter Space Noise for Exploration
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks.
accepted-poster-papers
This paper proposes adding noise to the parameters of a deep network when taking actions in deep reinforcement learning to encourage exploration. The method is simple but the authors demonstrate its effectiveness through thorough empirical analysis across a variety of reinforcement learning tasks (i.e. DQN, DDPG, and TRPO). Overall the paper is clear, well written and the reviewers enjoyed it. However, a common trend among the reviews was that the authors overstated their claims and contributions. The reviewers called out some statements in particular (e.g. the discussion of ES and RL) which the authors appear to have addressed when comparing their revisions (thank you). Overall, a clear, well written paper conveying a simple but effective idea for exploration that often works across a variety of RL tasks. The authors also released open-source code along with their paper for reproducibility (as evidenced by the reproducibility study below), which is appreciated. Pros: - Clear and well written - Thorough experiments across deep RL domains - A simple strategy for exploration that is effective empirically Cons: - Not a panacea for exploration (although nothing really is) - Claims are somewhat overstated - Lacks a strong justification for the method other than that it is empirically effective and intuitive
train
[ "S1Zd1FUEM", "By9jfdZkM", "r1gBpq_gG", "ryVd3dFgf", "HJxWJ0eEG", "HyUtjoTmG", "HJjO3r97f", "HJUU9kZzf", "ByiX9J-zf", "SkaJ5JWMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author" ]
[ "I read the author's response and other reviews. I think the author's have addressed most concerns (I'm still curious about the discrepancy in DDPG result). My rating was already positive so I've left it unchanged.", "This paper explores the idea of adding parameter space noise in service of exploration. The paper is very well written and quite clear. It does a good job of contrasting parameter space noise to action space noise and evolutionary strategies.\n\nHowever, the results are weak. Parameter noise does better in some Atari + Mujoco domains, but shows little difference in most domains. The domains where parameter noise (as well as evolutionary strategies) does really well are Enduro and the Chain environment, in which a policy that repeatedly chooses a particular action will do very well. E-greedy approaches will always struggle to choose the same random action repeatedly. Chain is great as a pathological example to show the shortcomings of e-greedy, but few interesting domains exhibit such patterns. Similarly for the continuous control with sparse rewards environments – if you can construct an environment with sparse enough reward that action-space noise results in zero rewards, then clearly parameter space noise will have a better shot at learning. However, for complex domains with sparse reward (e.g. Montezuma’s Revenge) parameter space noise is just not going to get you very far.\n\nOverall, I think parameter space noise is a worthy technique to have analyzed and this paper does a good job doing just that. However, I don’t expect this technique to make a large splash in the Deep RL community, mainly because simply adding noise to the parameter space doesn’t really gain you much more than policies that are biased towards particular actions. Parameter noise is not a very smart form of exploration, but it should be acknowledged as a valid alternative to action-space noise.\n\nA non-trivial amount of work has been done to find a sensible way of adding noise to parameter space of a deep network and defining the specific distance metrics and thresholds for (dual-headed) DQN, DDPG, and TRPO.\n", "In recent years there have been many notable successes in deep reinforcement learning. However, in many tasks, particularly sparse reward tasks, exploration remains a difficult problem. For off-policy algorithms it is common to explore by adding noise to the policy action in action space, while on-policy algorithms are often regularized in the action space to encourage exploration. This work introduces a simple, computationally straightforward approach to exploring by perturbing the parameters (similar to exploration in some evolutionary algorithms) of policies parametrized with deep neural nets. This work argues this results in more consistent exploration and compares this approach empirically on a range of continuous and discrete tasks. By using layer norm and adaptive noise, they are able to generate robust parameter noise (it is often difficult to estimate the appropriate variance of parameter noise, as its less clear how this relates to the magnitude of variance in the action space).\n\nThis work is well-written and cites previous work appropriately. Exploration is an important topic, as it often appears to be the limiting factor of Deep RL algorithms. 
The authors provide a significant set of experiments using their method on several different RL algorithms in both continuous and discrete cases, and find it generally improves performance, particularly for sparse rewards.\n\nOne empirical baseline that would be helpful to have is a stochastic off-policy algorithm (both off-policy algorithms compared are deterministic), as this may better capture uncertainty about the value of actions (e.g. SVG(0) [3]).\n\nAs with any empirical results with RL, it is a challenging problem to construct comparable benchmarks due to minor variations in implementation, environment or hyper-parameters all acting as confounding variables [1]. It would be helpful if the authors are able to make their paper reproducible by releasing the code on publication. As one example, figure 4 of [1] seems to show DDPG performing much better than the DDPG baseline in this work on half-cheetah.\n\nMinor points:\n- The definition of a stochastic policy (section 2) is unusual (it is defined as an unnormalized distribution). Usually it would be defined as $\\mathcal{S} \\rightarrow \\mathcal{P}(\\mathcal{A})$\n\n- This work extends DQN to learn an explicitly parametrized policy (instead of the greedy policy) in order to usefully perturb the parameters of this policy. Instead of using a single greedy target, you could consider using the relationship between the advantage function and an entropy-regularized policy [2] to construct a target.\n\n[1] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560.\n\n[2] O'Donoghue, B., Munos, R., Kavukcuoglu, K., & Mnih, V. (2016). Combining policy gradient and Q-learning.\n\n[3] Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., & Tassa, Y. (2015). Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (pp. 2944-2952).", "This paper proposes a method for parameter space noise in exploration.\nRather than the \"baseline\" epsilon-greedy (that sometimes takes a single action at random)... this paper presents a method for perturbations to the policy.\nIn some domains this can be a much better approach and this is supported by experimentation.\n\nThere are several things to like about the paper:\n- Efficient exploration is a big problem for deep reinforcement learning (epsilon-greedy or Boltzmann is the de-facto baseline) and there are clearly some examples where this approach does much better.\n- The noise-scaling approach is (to my knowledge) novel, good and in my view the most valuable part of the paper.\n- This is clearly a very practical and extensible idea... the authors present good results on a whole suite of tasks.\n- The paper is clear and well written, it has a narrative and the plots/experiments tend to back this up.\n- I like the algorithm, it's pretty simple/clean and there's something obviously *right* about it (in SOME circumstances).\n\nHowever, there are also a few things to be cautious of... and some of them serious:\n- At many points in the paper the claims are quite overstated. Parameter noise on the policy won't necessarily get you efficient exploration... and in some cases it can even be *worse* than epsilon-greedy... 
if you just read this paper you might think that this was a truly general \"statistically efficient\" method for exploration (in the style of UCRL or even E^3/Rmax etc).\n- For instance, the example in 4.2 only works because the optimal solution is to go \"right\" in every timestep... if you had the network parameterized in a different way (or the actions left/right were relabelled) then this parameter noise approach would *not* work... By contrast, methods such as UCRL/PSRL and RLSVI https://arxiv.org/abs/1402.0635 *are* able to learn polynomially in this type of environment. I think the claim/motivation for this example in the bootstrapped DQN paper is more along the lines of \"deep exploration\" and you should be clear that your parameter noise does *not* address this issue.\n- That said I think that the example in 4.2 is *great* to include... you just need to be more upfront about how/why it works and what you are banking on with the parameter-space exploration. Essentially you perform a local exploration rule in parameter space... and sometimes this is great - but you should be careful to distinguish this type of method from other approaches. This must be mentioned in section 4.2 \"does parameter space noise explore efficiently\" because the answer you seem to imply is \"yes\" ... when the answer is clearly NOT IN GENERAL... but it can still be good sometimes ;D\n- The demarcation of \"RL\" and \"evolutionary strategies\" suggests a pretty poor understanding of the literature and associated concepts. I can't really support the conclusion \"RL with parameter noise exploration learns more efficiently than both RL and evolutionary strategies individually\". This sort of sentence is clearly wrong and for many separate reasons:\n - Parameter noise exploration is not a separate/new thing from RL... it's even been around for ages! It feels like you are talking about DQN/A3C/(whatever algorithm got good scores in Atari last year) as \"RL\" and that's just really not a good way to think about it.\n - Parameter noise exploration can be *extremely* bad relative to efficient exploration methods (see section 2.4.3 https://searchworks.stanford.edu/view/11891201)\n\n\nOverall, I like the paper, I like the algorithm and I think it is a valuable contribution.\nI think the value in this paper comes from a practical/simple way to do policy randomization in deep RL.\nIn some (maybe even many of the ones you actually care about) settings this can be a really great approach, especially when compared to epsilon-greedy.\n\nHowever, I hope that you address some of the concerns I have raised in this review.\nYou shouldn't claim such a universal revolution to exploration / RL / evolution because I don't think that it's correct.\nFurther, I don't think that clarifying that this method is *not* universal/general really hurts the paper... you could just add a section in 4.2 pointing out that the \"chain\" example wouldn't work if you needed to do different actions at each timestep (this algorithm does *not* perform \"deep exploration\").\n\nI vote accept.", "I would like to thank you for conducting this study and evaluating the behavior of parameter space noise with duelling networks and prioritized replay. It is interesting that both seem to not help in this case.\n\nI would also like to note that the bug that you mention was only introduced while we refactored our code to be releasable (this was necessary due to the original code having a heavy dependance on our internal infrastructure). 
The experiments presented in our paper were not affected by it and the scaling was handled correctly.", "We performed a brief reproducibility study for this paper. The objective was to examine the potential benefits of parameter noise and measure its robustness to changes in hyperparameters and network configuration.\n\nWe found that the implementation we used (which is linked in the paper below) currently has a bug present in the master branch which disrupts the adaptive scaling algorithm for Q-learning, causing explosion of the noise variance in some environments (namely, Zaxxon). This is fixed by pull request #143 of the implementation repository and was later noted as issue #157, both of which are open as of the time of writing. Our results were generated by a patched version which merges pull request #143 with the master branch.\n\nTo summarize the report:\n\n(1) For the initial regime of the continuous control environments, we observed the behaviour mentioned in this paper where action noise policies will only flip the HalfCheetah agent on its back and slowly work their way forward, while the parameter noise policy will eventually learn a more realistic and better performing gait. Owing to time constraints, these experiments were left as incomplete; we are unable to present a quantitative evaluation of parameter noise with respect to continuous control.\n\n(2) Two of the improvements which had been inferred to be orthogonal to parameter space noise (dueling networks and prioritized replay), at least in certain ALE environments, appear to improve the performance of epsilon-greedy policies without improving the performance of policies which use parameter noise. So, on certain environments, adding parameter space noise may not improve an epsilon-greedy policy that already uses dueling networks and prioritized replay. While our results on ALE were limited by time and computational constraints, the implementation authors provide their own results (which are linked in section 4.1 of our report) that corroborate this point more strongly. This effect is especially visible in the Enduro environment where the unimproved parameter noise policy dominated the unimproved epsilon-greedy policy by a considerable margin (as displayed in Figure 1 of the paper under examination) but the improved epsilon-greedy policy performed either better or near-identically to both the improved and unimproved parameter noise policies (as displayed in both the implementation authors' and our results).\n\n(3) From the ablation studies on Walker2D, we saw that removing layer normalization may degrade performance of the parameter noise policy. We also observed that varying the noise scale parameter by any more than an order of magnitude in either direction causes loss of the ability to learn.\n\nThe full report is available here:\nhttps://github.com/c-connors/param-noise-repr/blob/master/parameter-space-noise.pdf", "Dear reviewers, a revised manuscript that takes your feedback into account has been uploaded.", "We would like to thank the reviewer for the insightful comments and suggestions.\n\nWe do agree that parameter noise alone is not going to solve exploration in reinforcement learning. However, we do feel that it provides an interesting alternative to the still de-facto standard of exploration, which is action space noise like epsilon-greedy or additive Gaussian noise. 
We think that our paper demonstrates that parameter noise exhibits different behavior that can often result in superior exploration that such simple action space noise exploration methods while being conceptually similarly simple. Furthermore, many recent exploration strategies like intrinsic motivation or count-based approaches augment the reward function with a bonus to encourage exploration but still rely on action space noise for “low-level” exploration. We think that parameter noise could also be an interesting replacement for this low level exploration.\n\nThat being said, we do agree that the paper can often seem to overstate the exploration properties of our proposed method. We will carefully revise the manuscript to better present parameter space noise as an interesting alternative to action space noise while emphasizing that it by no means resolves the exploration problem universally.", "We would like to thank the reviewer for the insightful comments and suggestions.\n\nWe agree that reproducibility is an important consideration. The code for DQN and DDPG has already been open-sourced. Unfortunately we cannot directly link to it in the paper due to the double-blind review process. The final version of the paper will include a link to the source code.\n\nWe further agree that a stochastic off-policy algorithm such as SVG(0) would be an interesting addition. However, we feel like DQN, DDPG, and TRPO already cover a significant spectrum and we would therefore leave the evaluation with other algorithms like SVG(0) and PPO to future work.\n\nWe will also revise the definition of a stochastic policy in the continuous case as suggested by the reviewer.", "We would like to thank the reviewer for the insightful comments and suggestions.\n\nWe will update section 4.2 to better reflect the limitations of our proposed method and to clarify that parameter noise is by no means a universally applicable strategy with guarantees. In particular, we will include a paragraph in the chain environment discussion to highlight that parameter noise works well here due to the simplicity of the optimal strategy. We will further clarify that this experiment was intended to highlight the difference in behavior between epsilon-greedy exploration and parameter noise and that it is clearly a toy problem that should not be interpreted as a claim that parameter noise exploration results in universally better exploration.\n\nWe will also revise the text that is concerned with the discussion of ES and RL. We do agree that parameter noise in general is by no means a novel concept and will revise accordingly. We will further clarify that the scope of the proposed approach is to make parameter noise work in the context of deep reinforcement learning and that our comparison is meant to highlight the advances in sample complexity compared to the method proposed by Salimans et al. (2017).\n\nGenerally speaking, we will revise our paper to better reflect what parameter noise really is: A conceptually simple replacement for simple exploration strategies like epsilon-greedy that often results in better exploration than these baselines. However, by no means is parameter noise a universally applicable method with guarantees like RLSVI or E^3. We will add language to clearly state this." ]
[ -1, 6, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "ByiX9J-zf", "iclr_2018_ByBAl2eAZ", "iclr_2018_ByBAl2eAZ", "iclr_2018_ByBAl2eAZ", "HyUtjoTmG", "iclr_2018_ByBAl2eAZ", "iclr_2018_ByBAl2eAZ", "By9jfdZkM", "r1gBpq_gG", "ryVd3dFgf" ]
iclr_2018_r1VVsebAZ
Synthesizing realistic neural population activity patterns using Generative Adversarial Networks
The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing. Here we used the Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons. We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain. We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that match accurately the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics. We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks. Importantly, Spike-GAN does not require to specify a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches. Finally, we show how to exploit a trained Spike-GAN to construct 'importance maps' to detect the most relevant statistical structures present in a spike train. Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience.
accepted-poster-papers
This paper proposes a novel application of generative adversarial networks to model neural spiking activity. Their technical contribution, SpikeGAN, generates neural spikes that accurately match the statistics of real recorded spiking behavior from a small number of neurons. The paper is controversial among the reviewers with a 4, a 6 and an 8. The 6 is short and finds the idea exciting but questions the utility of the proposed approach in terms of actually studying neural spiking. The 4 and 8 are both quite thorough reviews. 4 seems to mostly question the motivation of using a GAN over a MaxEnt model and demands empirical comparison to other approaches. 8 applauds the paper as a well-executed pure application paper, applying recent innovations in machine learning to an important application with some technical innovation. Overall the reviewers found the paper clear and easy to follow and agree that the application of GANs to neural spiking activity is novel. In general, I find that such high variance in scores (with thorough reviews) indicate that the paper is exciting, innovative and might stir up some interesting discussion. As such, and under the belief that ICLR is made stronger with interesting application papers, I feel inclined to accept as a poster. Pros: - A novel application of GANs to neural spiking data - Addresses an important and highly studied application area (computational neuroscience) - Clearly written and well presented - The approach appears to model well real neural spiking activity from salamander retina Cons: - Known pitfalls of GANs aren't really addressed in the paper (mode collapse, etc.) - The authors don't compare to state of the art models of neural spiking activity (although they compare to an accepted standard approach - MaxEnt) - Limited technical innovation over existing methods for GANs
train
[ "H1V6FdKgz", "SkQHi3FgM", "H1cZ1g0ef", "Hylb3fp7M", "BJIXLEYfM", "BkF3VEtff", "HyoJNNtGG", "r1hc5xv0Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "[Summary of paper] The paper presents a method for simulating spike trains from populations of neurons which match empirically measured multi-neuron recordings. They set up a Wasserstein-GAN and train it on both synthetic and real multi-neuron recordings, using data from the salamander retina. They find that their method (Spike-GAN) can produce spike trains that visually look like the original data, and which have low-order statistics (firing rates, correlations, time-lagged-correlations, total sum of population activity) which matches those of the original data. They emphasize that their network architecture is 'semi-convolutional', i.e. convolutional in time but not across neurons. Finally, they suggest a way to analyse the fitted networks in order to gain insights into what the 'relevant' neural features are, and illustrate it on synthetic data into which they embedded these features.\n\n[Originality] This paper falls into the category of papers that do a next obvious thing (\"GANs have not been applied to population spike trains yet\"), and which do it pretty well: If one wants to create simulated neural activity data which matches experimentally observed one, then this method indeed seems to do that. As far as I know, this would be the first peer-reviewed application of GANs to multi-neuron recordings of neural data (but see https://arxiv.org/abs/1707.04582 for an arxiv paper, not cited here-- should be discussed at least). On a technical level, there is very little to no innovation here -- while the authors emphasise their 'semi-convolutional' network architecture, this is obviously the right architecture to use for multivariate time-series data, and in itself not a big technical novel. Therefore, the paper should really be evaluate as an `application' paper, and be assessed in terms of i) how important the application is, ii) how clearly it is presented, and iii) how convincing the results are relative to state of the art. \n\ni) [Importance of problem, potential significance] Finding statistical models for modelling and simulating population spike trains is a topic which is extensively studied in computational neuroscience, predominantly using model-based approaches using MaxEnt models, GLMs or latent variable models. These models are typically simple and restricted, and certainly fall short of capturing the full complexity of neural data. Thus, better, and more flexible solutions for this problem would certainly be very welcome, and have an immediate impact in this community. However, I think that the approach based on GANs actually has two shortcomings which are not stated by the authors, and which possibly limit the impact of the method: First, statistical models of neural spike trains are often used to compute probabilities e.g. for decoding analyses— this is difficult or impossible for GANs. Second, one most often does not want to simulate data which match a specific recording, but rather which have specified statistics (e.g. firing rates and correlations)— the method here is based on fitting a particular data-set, and it is actually unclear to me when that will be useful.\n\nii) [Clarity] The methods are presented and explained clearly and cleanly. In my view, too much emphasis is given to highlighting the ‘semi-convolutional’ network, and, conversely, practical issues (exact architectures, cost of training) should be explained more clearly, possibly in an appendix. 
Similarly, the method would benefit from the authors releasing their code.\n\niii) [Quality, advance over previous methods] The authors discuss several methods for simulating spike trains in the introduction. In their empirical comparisons, however, they completely focus on a particular model-class (maximum entropy models, ME) which they label as being the ‘state-of-the-art’. This label is misleading— ME models are but one of several approaches to modelling neural spike trains, with different models having different advantages and limitations (there is no benchmark which can be used to rank them...). In particular, the only ‘gain’ of the GAN over ME models in the results comes from the ability of the GAN to match temporal statistics. Given that the ME models used by the authors are blind to temporal correlations, this is, of course (and as pointed out by the authors) hardly surprising. How does the GAN approach fare against alternative models which do take temporal statistics into account, e.g. GLMs, or simple moment-based methods e.g. Krumin et al 2009, Lyamzin 2010, Gutnisky et al 2010— setting these up would be simple, and it would provide a non-trivial baseline for the ability of spike-GAN to outperform at least these models? While it is true that GANs are much more expressive than the model-based approaches used in neuroscience, a clear demonstration would have been useful.\n\nMinor comments: \n - p.3: The abbreviation “1D-DCGAN” is never spelled out.\n - p.3: The architecture of Spike-GAN is never explicitly given.\n - p.3: (Sec. 2.2) Statistic 2) “average time course across activity patterns” is unclear to me -- how does one select the activity patterns over which to average? Moreover, later figures do not seem to use this statistic.\n - p.4: “introduced correlations between randomly selected pairs” -- How many such pairs were formed?\n - p.7 (just above Discussion) At the beginning of this section, and for Figs. 4A,B, the text suggests that packets fire spontaneously with a given probability. For Figs. 4C-E, a particular packet responds to a particular input. Is then the neuron population used in these figures different from the one in Figs. 4A,B? How did the authors ensure that a particular set of neurons respond to their stimulus as a packet? What stimulus did they use?\n - p.8 (Fig. 4E) Are the eight neurons with higher importance those corresponding to the packet? This is insinuated but not stated.\n - p.12 (Appendix A) \n + The authors do not mention how they produced their “ground truth” data. (What was its firing rate? Did it include correlations? A refractory period?)\n + Generating samples from the trained Spike-GAN is ostensibly cheap. Hence it is unclear why the authors did not produce a large enough number of samples in order to obtain a 'numerical probability', just as they did for the ground truth data? \n + Fig. S1B: The figure shows that every sample has the same empirical frequency. This indicates a lack of statistical power rather than any correspondence between the theoretical and empirical probabilities. This undermines the argument in the second paragraph of p.12. On the other hand, if the authors did approximate numerical probabilities for the Spike-GAN, this argument would no longer be required.\n - p.13 Fig. S1A,B: the abscissas mention “frequency”, while the ordinates mention “probability”\n - p.25 Fig. S4: This figure suggests that the first layer of the Spike-GAN critic sometimes recognizes the packet patterns in the data. 
However, to know whether this is true, we would need to compare this to a representation of the neurons reordered in the same way and identified by packet. I.e. one expects something like Fig. 4A, with the packets lining up with the recovered filters when neurons are ordered the same way.\n", "Summary:\n\nThe paper proposes to use GANs for synthesizing realistic neural activity patterns. Learning generative models of neural population activity is an important problem in computational neuroscience. The authors apply an established method (WGAN) to a new context (neuroscience). The work does not present a technical advance in itself, but it could be a valuable contribution if it created novel insights in neuroscience. Unfortunately, I do not see any such insights and am therefore not sure about the value of the paper.\n\n\n\nPros:\n\n- Using GANs to synthesize neural activity patterns is novel (to my knowledge).\n\n- Using GANs allows learning the crucial statistical patterns from data, which is more flexible than MaxEnt modeling, where one needs to define the statistics to be matched in advance.\n\n- Using the critic to learn something about what are the crucial population activity patterns is an interesting idea and could be a valuable contribution.\n\n\n\nCons:\n\n- Important architecture details missing: filter sizes, strides.\n\n- The only demonstrated advantage of the proposed approach over MaxEnt models is that it models temporal correlations. However, this difference has nothing to do with MaxEnt vs. GAN. A 64d MaxEnt model does not care whether you’re modeling 64 neurons in a single time bin or 8 neurons in 8 subsequent time bins. Thus, the comparison in Fig. 3E,F is not apples to apples. An apples-to-apples comparison would be to use a MaxEnt model that includes multiple time lags (if that’s infeasible with 16 neurons x 32 bins, use a smaller model). Given that the MaxEnt model does a better job at modeling neuron-to-neuron correlations, I would expect it to also outperform the GAN at modeling temporal correlations. There may well be a scalability issue of the MaxEnt model to large populations and long time windows, but that doesn’t seem to be the current line of argument.\n\n- GANs have well-known problems like mode collapse and low entropy of samples. Given the small amount of training examples (<10k) and large number of model parameters (3.5M), this issue is particularly worrisome. The authors do not address this issue, neither qualitatively nor quantitatively, although both would be possible:\n\n a) A quantitative approach would be to look at the entropy of the data, the MaxEnt model and the GAN samples. Given the binary and relatively low-dimensional nature of the observations, this may be feasible (in contrast to image data). One would potentially have to look at shorter segments and smaller subpopulations of neurons, where entropy estimation is feasible given the available amount of data, but it’s definitely doable\n\n b) Qualitative approaches include the typical one of showing the closest training example for each sample. \n\n- The novel idea of using the critic to learn something about the crucial population activity patterns is not fleshed out at all. 
I think this aspect of the paper could be a valuable contribution if the authors focused on it, studied it in detail and provided convincing evidence that it can be useful in practice (or, even better, actually produced some novel insight).\n\n a) Visualizing the filters learned by the critic isn’t really useful in practice, since the authors used their ground truth knowledge to sort the neurons. In practice, the (unsorted) filters will look just as uninterpretable as the (unsorted) population activity they show.\n\n b) Detection of the ‘packets’ via importance maps is an interesting idea to find potential temporal codes without explicitly prescribing their hypothesized structure. Unfortunately, the idea is not really fleshed out or studied in any detail. In particular, it’s not clear whether it would still work in a less extreme scenario (all eight neurons fire in exact sequence).\n\n- Poor comparison to state of the art. MaxEnt model is the only alternative approach tested. However, it is not clear that MaxEnt models are the state of the art. Latent variable models (e.g. Make et al. 2011) or more recent approaches based on autoencoders (Pandarinath et al. 2017; https://doi.org/10.1101/152884) are just among a few notable alternatives that the authors ignore.\n\n\n\nQuestions:\n\n- Can sequences of arbitrary length be generated or is it fixed to 32 samples? If the latter, then how do envision the algorithm’s use in real-world scenarios that you propose such as microstimulation?\n\n\n\nMinor:\n\n- There is nothing “semicomvolutional” here. Just like images are multi-channel (color channels) 2D (x, y) observation, here the observations are multi-channel (neurons) 1D (time) observations. \n\n- Fig. 3E is not time (ms), but time (x20 ms bins). Potentially the firing rates are also off by a factor of 20?\n\n- Why partition the training data into non-overlapping segments? Seems like a waste of training data not to use all possible temporal crops.", "The paper applies the GAN framework to learn a generative model of spike trains. The generated spike trains are compared to traditional model fitting methods, showing comparable or superior ability to capture statistical properties of real population activity.\n\nThis seems like an interesting exercise, but it’s unclear what it contributes to our understanding of neural circuits in the brain. The advantage of structured models is that they potentially correspond to underlying mechanisms and can provide insight. The authors point to the superior ability to capture temporal structure, but this does not seem like a fundamental limitation of traditional approaches.\n\nThe potential applicability of this approach is alluded to in this statement toward the end of the paper:\n\n“...be used to describe and interpret experimental results and discover the key units of neural information used for functions such as sensation and behavior.”\n\nIt is left for the reader to connect the dots here and figure out how this might be done. It would be helpful if the authors could at least sketch out a path by which this could be done with this approach.\n\nPerhaps the most compelling application is to perturbing neural activity, or intervening to inject specific activity patterns into the brain.\n", "We have uploaded an updated version of the paper including an extra supplementary figure (Fig. S10) that provides an alternative visualization of the data shown in Fig. 
S2E and also extra information on how the importance maps analysis works with smaller training datasets.\nWe have also included a small paragraph in the discussion section briefly commenting on a recent paper by Pandarinath et al., in which the authors use variational autoencoders to model the activity of a population of neurons. \n", "Thank you very much for your comments and suggestions. We have tried to address them in the revised manuscript (see also reply to reviewers 1 and 2). Briefly: \n\nWe have removed the reference to the ‘semi-convolutional’ architecture from the title and abstract and only mention it in the methods (Section 2.1, second paragraph, last four lines; see also reply to reviewer 1).\n\nWe thank you for the reference Arakaki et al. We now mention it in the Discussion section (second paragraph).\n\nWe have modified Fig. 1 in order to provide all details about the architecture of Spike-GAN.\n\nWe have now compared Spike-GAN with a method that does take into account temporal statistics (Lyamzin et al. 2010) (Section 3.2). We show in Fig. 3 that this method fits very well the statistics of the retinal data. However, as the authors mention in their paper, it struggles to fit negative correlations such as the ones shown in Fig. 1 (panel F) that are due to the refractory period (Fig. S6). This would constitute an important shortcoming if e.g. one wants to fit the activity of a neural population including inhibitory neurons, which will be most likely anti-correlated with other neurons.\n\nWe now comment in the Discussion section (last paragraph) on the pros and cons of Spike-GAN in comparison to the MaxEnt and DG models. \n\nThe abbreviation “1D-DCGAN” is now spelled out (Section 2.1, second paragraph).\n\nAverage time courses are the probability of firing in each bin, divided by the bin duration (measured in seconds). We have tried to clarify the explanation in Section 2.2 (second paragraph). This statistic is shown in Figs. 2E and 3D.\n\nIn Fig.2 there were 8 pairs of correlated neurons. We have now made this information explicit in the text (Section 3.1, first paragraph).\n\nThe idea behind the data shown in 4C-E was to clearly show how Spike-GAN could be used to identify the packets. Initially Spike-GAN is trained with a dataset in which samples show all packets (Fig. 4A) but disordered and cluttered by background spikes (Fig. 4B). Then we investigate the capacity of Spike-GAN to detect those packets in a separate simulated dataset in which the neural population responds to a hypothetical stimulus with only one of the packets (Fig. 4C). We have now tried to clarify the text explaining Fig.4C-E (Section 3.3, third paragraph). We have also added a new supp. figure (Fig. S2) showing a potential application of the importance maps (see also the reply to the other two reviewers).\n\nWe now explicitly state that the eight neurons with higher importance are those corresponding to the packet (Section 3.3, third paragraph).\n\nAs you suggested we obtained a larger dataset from Spike-GAN for Fig. S1 and compare the probabilities inferred from it with the numerical probabilities. We have thus simplified Section A1 and Fig. S1, since we believe the new panel A is enough to show that Spike-GAN is learning the underlying probability distribution. We have further computed the entropies of the ground truth and generated distributions (as suggested by reviewer 1), to check for the possibility of Spike-GAN producing low entropy samples. 
Finally we now specify the parameters used for the data shown in Fig. S1 (see caption) and changed the label in the abscissas.\n\nFig. S4 (now Fig. S7) was mainly shown to indicate that Spike-GAN had learned the packet structure. We realized this was not clear from the text so we have modified it (Section 3.3, first paragraph, last four lines).\n\nWe will make the code available on GitHub upon acceptance of the paper.\n\nFinally, the main paper got somewhat longer than 8 pages (the recommended length for ICLR submissions), but if so advised by the reviewers we would shorten the paper by moving Fig. 1 to the supplementary material.\n\nThank you again for these very useful comments. Please let us know if you have further comments/doubts.\n", "Thank you very much for this detailed review. Below, we reply to your comments and questions:\n\nWe modified Fig.1 to show the whole architecture of the critic (the generator is just the mirror image). Filters sizes, strides, etc are provided in the figure as well. We hope this new figure provides all the information that was missing in the previous version of the paper.\n\nWe now compare the method to an alternative approach based on the dichotomized Gaussian (DG) framework (a latent variable model) (Section 3.2). In particular, we have applied an extension of the method developed by Macke et al. 2009 which was suggested by reviewer 3 (Lyamzin et al. 2010). Importantly, this alternative method takes into account the temporal structure of the retinal dataset and fits to a great extent all considered statistics (new Fig. 3). However, as the authors of the paper mention, the method fails to fit negative correlations and produces a flat autocorrelogram when spike trains present a refractory period (Fig. S6). We thus believe that the flexibility of Spike-GAN in fitting all kinds of spatio-temporal structures commonly found in real data makes it an attractive alternative to more constrained methods like the MaxEnt and the DG methods.\n\nTo check for signs of mode collapse or low entropy samples we did the following:\n1)\tIn Section A1 and Fig. S1 we compare, for a relatively low dimensional dataset, the entropies of the ground truth distribution to that of the distribution generated by Spike-GAN.\n2)\tWe have included a figure showing, for each of the considered methods (Spike-GAN, k-pairwise and DG) 10 generated samples with their closest ones in the retinal dataset (see Section 3.2, third paragraph on page 6 and Fig. S5). \n\nPlease see the response to reviewer 2 regarding the application of the importance maps to investigate the strategies used by neural populations to encode and transmit the information about a set of stimuli. In the example we describe in Section A2 and Fig. S2, we provide a possible way in which importance maps could be used in combination with a microstimulation technique such as optogenetics. Furthermore, using the same example, we have tested the importance maps analysis in the presence of noise affecting the packets (Fig. S9) and found that the procedure can still provide useful information about the neurons encoding each presented stimulus and their relative timing.\n\nThe length of the patterns generated by Spike-GAN is bounded from above by the length of the data samples that were used for training the network. Nevertheless, apart from considerations of computational cost and data availability, there are no a priori limits on the duration of the training samples. 
Hence, if sequences of a specific length are needed for an experimental protocol, they can be generated by training a suitably-sized Spike-GAN. Alternatively, multiple shorter samples could be generated with Spike-GAN and successively stitched together (even though this would, of course, fail to capture any long-range correlations spanning across multiple samples).\n\nVisualizing the filters learnt by the first layer of the critic is useful to evaluate to which extent the network has learnt about the packets and that is the main reason we added this figure (it is true that without knowing the ground truth structure, they are of less help). We have modified the text to make this clearer (Section 3.3, first paragraph, last four lines).\n\nAlthough we agree that there is nothing technically challenging in the “Semi-convolutional” Spike-GAN architecture, we deemed useful to give a name to this approach since we feel many people in neuroscience (outside the machine learning community) associate convNets with its application to images and do not consider applying it to neural recordings. Nevertheless, we have reduced the emphasis on the semi-convolutional operation (we now only mention it in the methods (Section 2.1, second paragraph, last four lines)).\n\n We have fixed the typo of Fig. 3E.\n\nPartitions were made non-overlapping to avoid redundancy in the training dataset, but it is true that using all possible temporal windows would be a trivial way of augmenting the data. We have tested this alternative approach with the retinal data (Fig. 3) and did not see any improvement on the fitting of the data. However, we agree that this alternative way of building the training dataset could be advantageous in cases when the number of samples is a more critical factor. We mention this in the first paragraph of Section 3.2.\n\nFinally, the main paper got somewhat longer than 8 pages (the recommended length for ICLR submissions), but if so advised by the reviewers we would shorten the paper by moving Fig. 1 to the supplementary material.\n\nThank you again for this very useful feedback which helped us to considerably improve the quality of our paper. Please let us know if you have any other comment or question. \n", "Thank you for your helpful comments. We addressed your request to sketch out a path by which Spike-GAN could be used to discover the key features of population activity that mediate behavior. We comment on this in the Discussion section (third and fourth paragraph). We also elaborate in more detail in a new supplementary section (Section A2, with Fig. S2), where we describe an experimental paradigm (involving techniques nowadays used by many laboratories), which could greatly benefit from the importance maps produced by Spike-GAN. \n\nBriefly, the importance maps obtained from Spike-GAN allow inferring the set of neurons participating in the encoding of the stimulus information (Fig. S2F) and the spatio-temporal structure of the packets elicited by each stimulus (Fig. S2E). Inferred packets could then be altered in a meaningful and precise way and then applied to the population of neurons using interventional techniques such as 2P optogenetics. This would allow to causally test still unanswered questions about the way populations of neurons encode and transmit information – for instance, the role that spike timing plays in the encoding of stimulus information. 
\n\nFinally, the main paper got somewhat longer than 8 pages (the recommended length for ICLR submissions), but if so advised by the reviewers we would shorten the paper by moving Fig. 1 to the supplementary material.\n\nWe hope that Section A2 and Fig. S2 are clear and address your concern about the contribution of our work to the investigation of how neural populations in the brain process and communicate the information they receive. Please let us know if you have further comments or questions.\n", "We provide below a few comments aiming at correcting/clarifying some sentences: \n\n* First sentence in Section 2.1. should read: We adapted the Generative Adversarial Networks described by Goodfellow et al. (2014) to produce samples that simulate the spiking activity of a population of N neurons as binary vectors of length T (spike trains, Fig. S2).\n\n* In Section 2.1., second paragraph, WGAN-GP refers to the Wasserstein-GAN Gradient Penalization developed by Gulrajani et al. 2017.\n\n* In Section 2.1., 3rd paragraph, point 4, the two citations of Odena et al. 2016 refer to: Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 319 1 (10):e3, 2016.\n\n* Panels 2F and 3E show the **normalized** number of spikes.\n\n* Figure 4, caption panel B should read: Realistic neural population pattern (gray spikes do not participate in\nany **packet**).\n\n* Figure 4, caption: errorbars indicate standard error.\n" ]
[ 8, 4, 6, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1VVsebAZ", "iclr_2018_r1VVsebAZ", "iclr_2018_r1VVsebAZ", "iclr_2018_r1VVsebAZ", "H1V6FdKgz", "SkQHi3FgM", "H1cZ1g0ef", "iclr_2018_r1VVsebAZ" ]
iclr_2018_BJ8c3f-0b
Auto-Encoding Sequential Monte Carlo
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure which improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement and scalable means for simultaneous model learning and proposal adaptation in deep generative models.
accepted-poster-papers
This work develops importance weighted autoencoder-like training but with sequential Monte Carlo. The paper is interesting, well written and the methods are very timely (there are two highly related concurrent papers - Naesseth et al. and Maddison et al.). Initially, the reviewers shared concerns about the technical details of the paper, but the authors appear to have addressed those and two review scores have been raised accordingly. There is one outlier review (two 7s and one 3). The 3 is the least thorough and has the lowest confidence (2) so that review is being weighted accordingly. This appears to be a timely and interesting paper that is relevant to the community and warrants publication at ICLR. Pros: - Well written and clear - An interesting approach - Neat technical innovations - Generative deep models are of great interest to the community (e.g. Variational Autoencoders) Cons: - Could include a better treatment of recent related literature - Leaves a variety of open questions about specific details (i.e. from the reviews)
train
[ "rkN3iHo4f", "r132qrjEG", "Syi1Z34ZG", "HkqLp4GeM", "BkaCH2KlG", "SybDOZMEz", "BkgqKr6Xf", "S1Eei4pQM", "H1Uo_Namf", "ByvVq0m-G" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for the further comments.\n\n%%% You argue that increased K ... %%%\n\n>>> The argument about increasing K being detrimental to optimizing q(z|x) is actually distinct to the bound potentially not being tight. Increasing K can be detrimental because of undermining our ability to reliably estimate the gradients, but is not detrimental in the true gradient direction itself. The reason the bound does not become tight is that the “best” SMC proposal is generally not for q(x|y)=p(x|y). Thus is it still important to optimize q(x|y), it is just that this optimum is distinct to p(x|y). Increasing K can still be detrimental to this process because of reducing the signal to noise ratio of the gradient estimates, thus harming our ability to learn the optimal SMC proposal.\n\n%%% You show that for finite time ... This relates to another ... %%%\n\n>>> We have run the experiment in the LGSSM example for more iterations (Figure 3 updated in latest revision) to characterize this further. Our result now show that SMC-10 appears to be converging to a worse proposal that ALT (in terms of the marginal posterior), while SMC-1000 is still yet to converge.\nFrom the perspective of the theoretical limit of infinite computation in a generic stochastic gradient ascent scheme than our results on the SNR (which are the motivation for ALT) are inconsequential because they only relate to variance of the estimator. However, the assumptions required for this convergence are typically not satisfied for neural network training anyway, so the impact of the SNR may still be felt in the limit for real neural net training. Moreover, this theoretical converge limit is typically very distinct to the point at which the training saturates - training can appear to have converged far more quickly than it truly has.\n\nAside to our SNR arguments, there are also differences in the proposal achieved by ALT in the limit because of the effects previously discussed about the optimal importance sampling and SMC proposals being distinct (note the optimal SMC proposal also varies with K). It can be hard to assess which of these is “better” because there is no objective metric for what a good q is. For example, the KL tends to over-prefer low variance q’s for use as proposal distributions. To try and investigate the relative metrics of different proposals, we have done an empirical investigation into training \\phi for a given \\theta as discussed next.\n\n%%% I think it would be very illustrative ... %%%\n\n>>> We have run additional experiments (included in a revision in Appendix E.1.) which confirm that if we want to optimize just \\phi, then running IWAE with low number of particles is the best. In the experiment in Appendix E.1., we sweep through all possibilities of train_algorithm x train_number_of_particles x test_algorithm x test_number_of_particles where\n - train_algorithm = test_algorithm = {IS, SMC}\n - train_number_of_particles = test_number_of_particles = {10, 100, 1000}\n\nIn the experiment, we confirm that using the IWAE objective with low number of particles results in a better \\phi (in terms of inference) than optimizing AESMC with higher number of particles. 
Investigating other axes of variation:\n - Inference performance worsens when we increase K_train.\n - Inference performance improves when we increase K_test.\n - The worst inference performance happens when we train with SMC with a lot of particles and test with IS with few particles.\n - The best inference performance happens when we train with IS with few particles and test with SMC with a lot of particles.\n\n%%% Could this also be related to the bias in the AESMC gradients? %%%\n\n>>> As explained in our earlier comment, we think this is mostly tangential to the fact that the optimal q(x|y) for SMC is not the marginal posterior.\n\n%%% In Figure 4 you make mention to max(IS, SMC), in the experiments which one has been picked in each case? Does the ALT algorithm tend to pick one over the other?%%%\n\n>>> We currently don’t have data for this but will include it in a later revision.\n", "Thank you for your further consideration and bumping up your score.", "Update:\nOn further consideration (and reading the other reviews), I'm bumping my rating up to a 7. I think there are still some issues, but this work is both valuable and interesting, and it deserves to be published (alongside the Naesseth et al. and Maddison et al. work).\n\n-----------\n\nThis paper proposes a version of IWAE-style training that uses SMC instead of classical importance sampling. Going beyond the several papers that proposed this simultaneously, the authors observe a key issue: the variance of the gradient of these IWAE-style bounds (w.r.t. the inference parameters) grows with their accuracy. They therefore propose using a more-biased but lower-variance bound to train the inference parameters, and the more-accurate bound to train the generative model.\n\nOverall, I found this paper quite interesting. There are a few things I think could be cleared up, but this seems like good work (although I'm not totally up to date on the very recent literature in this area).\n\nSome comments:\n\n* Section 4: I found this argument extremely interesting. However, it’s worth noting that your argument implies that you could get an O(1) SNR by averaging K noisy estimates of I_K. Rainforth et al. suggest this approach, as well as the approach of averaging K^2 noisy estimates, which the theory suggests may be more appropriate if the functions involved are sufficiently smooth, which even for ReLU networks that are non-differentiable at a finite number of points I think they should be.\n\nThis paper would be stronger if it compared with Rainforth et al.’s proposed approaches. This would demonstrate the real tradeoffs between bias, variance, and computation. Of course, that involves O(K^2) or O(K^3) computation, which is a weakness. But one could use a small value of K (say, K=5).\n\nThat said, I could also imagine a scenario where there is no benefit to generating multiple noisy samples for a single example versus a single noisy sample for multiple examples. Basically, these all seem like interesting and important empirical questions that would be nice to explore in a bit more detail.\n\n* Section 3.3: Claim 1 is an interesting observation. But Propositions 1 and 2 seem to just say that the only way to get a perfectly tight SMC ELBO is to perfectly sample from the joint posterior. I think there’s an easier way to make this argument:\n\nGiven an unbiased estimator \\hat{Z} of Z, by Jensen’s inequality E[log \\hat{Z}] ≤ log Z, with equality iff the variance of \\hat{Z} = 0. 
The only way to get an SMC estimator’s variance to 0 is to drive the variance of the weights to 0. That only happens if you perfectly sample each particle from the true posterior, conditioned on all future information.\n\nAll of which is true as far as it goes, but I think it’s a bit of a distraction. The question is not “what’s it take to get to 0 variance” but “how quickly can we approach 0 variance”. In principle IS and SMC can achieve arbitrarily high accuracy by making K astronomically large. (Although [particle] MCMC is probably a better choice if one wants extremely low bias.)\n\n* Section 3.2: The choice of how to get low-variance gradients through the ancestor-sampling choice seems seems like an important technical challenge in getting this approach to work, but there’s only a very cursory discussion in the main text. I would recommend at least summarizing the main findings of Appendix A in the main text.\n\n* A relevant missing citation: Turner and Sahani’s “Two problems with variational expectation maximisation for time-series models” (http://www.gatsby.ucl.ac.uk/~maneesh/papers/turner-sahani-2010-ildn.pdf). They discuss in detail some examples where tighter variational bounds in state-space models lead to worse parameter estimates (though in a quite different context and with a quite different analysis).\n\n* Figure 1: What is the x-axis here? Presumably phi is not actually 1-dimensional?\n\nTypos etc.:\n\n* “learn a particular series intermediate” missing “of”.\n\n* “To do so, we generate on sequence y1:T” s/on/a/, I think?\n\n* Equation 3: Should there be a (1/K) in Z?", "Overall:\nI had a really hard time reading this paper because I found the writing to be quite confusing. For this reason I cannot recommend publication as I am not sure how to evaluate the paper’s contribution. \n\nSummary\nThe authors study state space models in the unsupervised learning case. We have a set of observed variables Y, we posit a latent set of variables X, the mapping from the latent to the observed variables has a parametric form and we have a prior over the parameters. We want to infer a posterior density given some data.\n\nThe authors propose an algorithm which uses sequential Monte Carlo + autoencoders. They use a REINFORCE-like algorithm to differentiate through the Monte Carlo. The contribution of this paper is to add to this a method which uses 2 different ELBOs for updating different sets of parameters.\n\nThe authors show the AESMC works better than importance weighted autoencoders and the double ELBO method works even better in some experiments. \n\nThe proposed algorithm seems novel, but I do not understand a few points which make it hard to judge the contribution. Note that here I am assuming full technical correctness of the paper (and still cannot recommend acceptance).\n\nIs the proposed contribution of this paper just to add the double ELBO or does it also include the AESMC (that is, should this paper subsume the anonymized pre-print mentioned in the intro)? This was very unclear to me.\n\nThe introduction/experiments section of the paper is not well motivated. What is the problem the authors are trying to solve with AESMC (over existing methods)? Is it scalability? Is it purely to improve likelihood of the fitted model (see my questions on the experiments in the next section)? \n\nThe experiments feel lacking. There is only one experiment comparing the gains from AESMC, ALT to a simpler (?) method of IWAE. 
We see that they do better but the magnitude of the improvement is not obvious (should I be looking at the ELBO scores as the sole judge? Does AESMC give a better generative model?). The authors discuss the advantages of SMC and say that is scales better than other methods, it would be good to show this as an experimental result if indeed the quality of the learned representations is comparable.", "[After author feedback]\nI think the approach is interesting and warrants publication. However, I think some of the counter-intuitive claims on the proposal learning are overly strong, and not supported well enough by the experiments. In the paper the authors also need to describe the differences between their work and the concurrent work of Maddison et al. and Naesseth et al. \n\n[Original review]\nThe authors propose auto-encoding sequential Monte Carlo (SMC), extending the VAE framework to a new Monte Carlo objective based on SMC. The authors show that this can be interpreted as standard variational inference on an extended space, and that the true posterior can only be obtained if we can target the true posterior marginals at each step of the SMC procedure. The authors argue that using different number of particles for learning the proposal parameters versus the model parameters can be beneficial.\n\nThe approach is interesting and the paper is well-written, however, I have some comments and questions:\n\n- It seems clear that the AESMC bound does not in general optimize for q(x|y) to be close to p(x|y), except in the IWAE special case. This seems to mean that we should not expect for q -> p when K increases?\n- Figure 1 seems inconclusive and it is a bit difficult to ascertain the claim that is made. If I'm not mistaken K=1 is regular ELBO and not IWAE/AESMC? Have you estimated the probability for positive vs. negative gradient values for K=10? To me it looks like the probability of it being larger than zero is something like 2/3. K>10 is difficult to see from this plot alone.\n- Is there a typo in the bound given by eq. (17)? Seems like there are two identical terms. Also I'm not sure about the first equality in this equatiion, is I^2 = 0 or is there a typo?\n- The discussion in section 4.1 and results in the experimental section 5.2 seem a bit counter-intuitive, especially learning the proposals for SMC using IS. Have you tried this for high-dimensional models as well? Because IS suffers from collapse even in the time dimension I would expect the optimal proposal parameters learnt from a IWAE-type objective will collapse to something close to the the standard ELBO. For example have you tried learning proposals for the LG-SSM in Section 5.1 using the IS objective as proposed in 4.1? Might this be a typo in 4.1? You still propose to learn the proposal parameters using SMC but with lower number of particles? I suspect this lower number of particles might be model-dependent.\n\nMinor comments:\n- Section 1, first paragraph, last sentence, \"that\" -> \"than\"?\n- Section 3.2, \"... using which...\" formulation in two places in the firsth and second paragraph was a bit confusing\n- Page 7, second line, just \"IS\"?\n- Perhaps you can clarify the last sentence in the second paragraph of Section 5.1 about computational graph not influencing gradient updates?\n- Section 5.2, stochastic variational inference Hoffman et al. 
(2013) uses natural gradients and exact variational solution for local latents so I don't think K=1 reduces to this?", "Thank you for the clarifying comments, I will adjust my review accordingly.\n\n%%% It seems clear that the AESMC bound does not, in general, optimize for q(x|y) to be close to p(x|y)... %%%\nYou argue that increased K is detrimental to optimize q(z|x) to be close to p(z|x). But if the method is not designed to optimize the proposal to be close to the posterior, which you seem to agree with above, why should this be an issue? \n\nYou show that for finite time the alternate seems to perform better, but do you have any results on whether this extends to when the high-sample version has actually converged? I'm referring to the LGSSM example, where the SMC-1000 hasn't yet converged. Paraphrased will the alternate still be better in the limit of infinite computation?\n\nThis relates to another whether if it is possible that further gains on the alternate version can be achieved by an increasing schedule of particles for the proposal learning.\n\n%%% The discussion in section 4.1 and results in the experimental section 5.2 seem a bit counter-intuitive ... %%%\nI think it would be very illustrative if there were experiments where the focus is actually on learning only \\phi, instead of both \\phi and \\theta. Now the claim seems to be that learning proposals (\\phi) with the IWAE objective and a low number of particles is actually a better way of learning proposals for AESMC with a higher number of particles, rather than directly optimizing the AESMC objective. Could this also be related to the bias in the AESMC gradients?\n\nIn Figure 4 you make mention to max(IS, SMC), in the experiments which one has been picked in each case? Does the ALT algorithm tend to pick one over the other?", "We thank the reviewer for taking the time to review our paper and for their helpful feedback.\n\n%%% Relationship with Rainforth et al %%%\n\n> We would like to clarify that Rainforth et al do not propose any algorithmic approach, they only express the IWAE and VAE objectives in a general form and then carry out a theoretical analysis on which we build. One can of course always use more Monte Carlo samples (i.e. increase M in their notation) to increase the fidelity of estimates. The interesting approach which you are suggesting of using K^2 estimates here could be achieved by running K SMC sweeps of K samples. We agree that this could be an interesting further extension on top of our suggested approach, but we did not have the time to actively investigate it for this revision.\n\n\n%%% Propositions 1 and 2 seem to just say that the only way to get a perfectly tight SMC ELBO is to perfectly sample from the joint posterior... %%%\n\n> Your intuition here is correct - our proof relates to demonstrating this more formally and highlighting the fact that this requires a particular factorization of the model. Proving the “if” part of propositions 1, 2 is trivial. However, proving the “only if” part of proposition 2 requires somewhat more care to show that the variance of the estimator can indeed only be zero when the variance of the weights is zero and that the variance of the weights can indeed only be zero if the intermediate targets incorporate all future information, implying a particular factorization of the generative model.\n\n\n%%% .... The question is not “what’s it take to get to 0 variance” but “how quickly can we approach 0 variance”... 
%%%\n\n> Though we agree with your sentiment that the speed with which 0 variance is approached is of critical importance and note that we provide empirical investigation of this through the experiments, we would like to reiterate that the key point of the result is to show that for 1<K<inf one will never learn a perfect estimator unless a particular factorization can be achieved. This is at odds with cases where K=1, for which infinite training iterations should always lead to q becoming the posterior if q is expressive enough to encode it and the problem is convex. In other words, in the IS case we can achieve exact posterior samples at test time with a finite K and a perfectly trained q for any model that can be represented by the inference network, but in the SMC case this is only possible if we also learn the optimal factorization of the model (as per proposition 2).\n\n\n%%% Section 3.2 %%%\n\nWe believe that in practice, the bias introduced by ignoring this term is only very small. We have added a short summary of the results from Appendix A as suggested.\n\n\n%%% Relevant missing citation: Turner and Sahani %%%\n\n> Thank you, we have duly updated the paper to include this reference.\n\n\n%%% Figure 1: What is the x-axis here? Presumably phi is not actually 1-dimensional?\n\n> In general, we are considering each dimension of the gradient separately in our assessment and so this should be read as \\nabla_{\\phi_1}. Note that each dimension of the gradient being equally likely to be positive or negative corresponds to the overall gradient taking a completely random direction.\n\n\n%%% Typos etc. %%%\n\n* “learn a particular series intermediate” missing “of”.\n\n> Thank you, now fixed.\n\n* “To do so, we generate on sequence y1:T” s/on/a/, I think?\n\n> Thank you, now fixed.\n\n* Equation 3: Should there be a (1/K) in Z?\n\n> Thank you, now fixed.\n
We then run the ALT algorithm to ameliorate this effect on a time series data for which the experiments are in Figure 3 (middle, right).\n\n- Finally, we run both IWAE, AESMC and ALT on a neural network model where it is impossible to evaluate the log marginal likelihood exactly and we must resort to max(ELBO_IS, ELBO_SMC) as a proxy. This is a common practice in evaluating deep generative models.\n", "We thank the reviewer for taking the time to read through our paper and their insightful questions.\n\nMinor comments have been incorporated in the revision of our submission.\n\nWe address the specific questions in turn:\n\n%%% It seems clear that the AESMC bound does not, in general, optimize for q(x|y) to be close to p(x|y)... %%%\n\n> This is true, but is effectively equivalent to the fact that the perfect importance sampler is better than the perfect SMC sampler, even though the latter is generally much more powerful when not perfectly optimized, as is almost always the case and hence why SMC is typically a superior inference algorithm. As we show in the set of equations (12-13) for IWAE and (14-15) for AESMC, these objectives can be decomposed to a log marginal likelihood term and a KL term on an extended space. In propositions 1 and 2, we prove that while we should expect q(x|y) = p(x|y) when we optimize the IWAE objective perfectly, this is not the case for AESMC.\n\n\n%%% Figure 1 seems inconclusive and it is a bit difficult to ascertain the claim that is made... %%%\n\n> The second paragraph of section 4 (and the associated Figure 1) is supposed to give an intuition for why using more K might result in a worse q(x|y). The case K=1 is regular ELBO (which is a special case of the IWAE) and the other cases are IWAE with the corresponding number of particles. We formalize this intuition by introducing the notion of a signal-to-noise ratio.\n\nThe probabilities of the \\grad_\\phi ELBO estimator are estimate using 10000 Monte Carlo samples to be:\nK=1: 0.8704\nK=10: 0.6072\nK=100: 0.5226\nK=1000: 0.5036\nWe believe that this is quite conclusive for this simple example, but the aim here is not the assumption that this simple example will generalize, just to show that increase K can be harmful. However, our theoretical result on the signal-to-noise ratio does provide a generalization to general problems and shows that this effect must always manifest for sufficiently large K for any problem.\n\n\n%%% Is there a typo in the bound given by eq. (17)... %%%\n\n> Neither are typos. The repeated term is because one gets an identical term in the bias and variance components of the bound, one of which goes to the numerator and one the denominator when we calculate the SNR. It is indeed the case that I^2=0. \n\n\n%%% The discussion in section 4.1 and results in the experimental section 5.2 seem a bit counter-intuitive ... %%%\n\n> We agree that discussion and results with regards to the ALT algorithm are at first counter-intuitive, but we believe this is because of the counter-intuitive nature of our observation that increasing K actually harms the training of the proposal q(x|y). Given this novel realization, using fewer particles to train the proposals becomes a natural thing to do, while our empirical results verify that it can lead to improvements in performance.\n\nWe have indeed tried running ALT where we update \\theta (generative parameters) using SMC with 1000 particles and \\phi (proposal parameters) using IS with 10 particles. 
The results of this are described in the third paragraph of section 5.2. and the results are in Figure 3 (middle, right). We have also run the experiment on a neural network based model described in section 5.3 and Figure 4.", "I have reviewed the paper as including the AESMC, so I would be interested in the answer to whether this is intended as well." ]
[ -1, -1, 7, 3, 7, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 2, 4, -1, -1, -1, -1, -1 ]
[ "SybDOZMEz", "Syi1Z34ZG", "iclr_2018_BJ8c3f-0b", "iclr_2018_BJ8c3f-0b", "iclr_2018_BJ8c3f-0b", "H1Uo_Namf", "Syi1Z34ZG", "HkqLp4GeM", "BkaCH2KlG", "HkqLp4GeM" ]
iclr_2018_HJewuJWCZ
Learning to Teach
Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impact suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, one has not fully explored the role of teaching, and pays most attention to machine \emph{learning}. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach ``learning to teach''. In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding).
accepted-poster-papers
The paper addresses the problem of learning a teacher model which selects the training samples for the next mini-batch used by the student model. The proposed solution is to learn the teacher model using policy gradient. It is an interesting training setting, and the evaluation demonstrates that the method outperforms the baseline. However, it remains unclear how the method would scale to larger datasets, e.g. ImageNet. I would strongly encourage the authors to extend their evaluation to larger datasets and state-of-the-art models, as well as include better baselines, e.g. from Graves et al.
train
[ "SkCogB4Hf", "rkECpdo7M", "HJoIn0tEz", "BkMvqjYgG", "rJPwQZYgM", "SkV5DeTlG", "ByKbNaRWG", "Hy4tuUsmf", "B1GMHpCbM", "HJfPE60-M", "SkDirp7mf", "SJ3_maCWG", "rktvBfRWz" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "Dear the authors,\n\nI really like this work, it's great and insightful. \nI have some questions about the implementation of the model.\n\n1). Could you provide the exact setting of hyper parameter: T' (the maximum iteration number)?\n\n2). In section 5.1.2, you mentioned that \"the base neural network model will not be updated\nuntil M un-trained, yet selected data instances are accumulated.\". So, when the teacher model selected more than M samples (say N ) for training, you just drop the N-M samples?\n\n3). I am not pretty sure how you train the teacher network. Based on my understanding, the teacher network scan the training data and selects data for training the student network, and once it has collected $M$ data points, then update the student network using those $M$ data points. After the update, we test the student network on Dev set, and check if it reaches the expected accuracy. And once it reaches the expected accuracy or exceeds the max iteration number, the reward is either 0 or $-log i_{\\tau}/T'$. Then, we reinitialize the student network randomly for the next round? Is the mini-batch index i\u001c_{\\tau} is the number of batches fed into the student network? \n\n<del>4). Could you provide the number of images (for both cifar-10 and mnist) in D^{teacher}_{train} and D_{train}? How do you split the datasets, just randomly? </del> (Solved)\n\n5). Could you provide the exact time for training the teacher network, e.g. using one NVIDIA Tesla K40 GPU, on CIFAR-10\n\n6). The results on Figure 6 is quite surprising to me, I don't understand why the model features (only the iteration number, averaged training loss, and best validation loss) are so important. In my view, those features are not so informative as data features, right? Quite confused..\n\n7). How did you schedule the learning rate of training the teacher network, and the learning of training the student network when training the teacher network? How did you normalize the three signals in model features to be in the interval [0,1] (i.e. what's the pre-defined maximum number to constrain their values in the interval [0,1])? Is the best validation loss is computed on the Dev set?\n\nThanks for your excellent work, I really enjoy reading it!!\n\n", "Dear reviewers,\n\nWe modified our paper in that:\n\n1) A new subsection 5.4 is added to show the performance of L2T in improving accuracy, as well as other rewards setup, as suggested by reviewer 1 and 3;\n\n2) A new subsection Appendix 7.4 is added to show the convergence property of teacher model training, as suggested by reviewer 3;\n\n3) Added the missed references in section 2, and modified some inappropriate statements as suggested by reviewer 2.\n\nWe also updated our initial response to all of you, for the sake of clearer clarifications and looping in the latest manuscript changes. We hope all these can make our paper more comprehensive and remove your corresponding concerns. Thanks!\n", "Dear Reviewer,\n\nDo you have any further questions/concerns towards our new paper/rebuttal?\n\nBest Regards,\nThe Authors", "This paper focuses on the problem of \"machine teaching\", i.e., how to select a good strategy to select training data points to pass to a machine learning algorithm, for faster learning. The proposed approach leverages reinforcement learning by defining the reward as how fast the learner learns, and use policy gradient to update the teacher parameters. I find the definition of the \"state\" in this case very interesting. 
The experimental results seem to show that such a learned teacher strategy makes machine learning algorithms learn faster. \n\nOverall I think that this paper is decent. The angle the authors took is interesting (essentially replacing one level of the bi-level optimization problem in machine teaching works with a reinforcement learning setup). The problem formulation is mostly reasonable, and the evaluation seems quite convincing. The paper is well-written: I enjoyed the mathematical formulation (Section 3). The authors did a good job of using different experiments (filtration number analysis, and teaching both the same architecture and a different architecture) to intuitively explain what their method actually does. \n\nAt the same time, though, I see several important issues that need to be addressed if this paper is to be accepted. Details below. \n\n1. As much as I enjoyed reading Section 3, it is very redundant. In some cases it is good to outline a powerful and generic framework (like the authors did here with defining \"teaching\" in a very broad sense, including selecting good loss functions and hypothesis spaces) and then explain that the current work focuses on one aspect (selecting training data points). However, I do not see it being the case here. In my opinion, selecting good loss functions and hypothesis spaces are much harder problems than data teaching - except maybe when one use a pre-defined set of possible loss functions and select from it. But that is not very interesting (if you can propose new loss functions, that would be way cooler). I also do not see how to define an intuitive set of \"states\" in that case. Therefore, I think this section should be shortened. I also think that the authors should not discuss the general framework and rather focus on \"data teaching\", which is the only focus of the current paper. The abstract and introduction should also be modified accordingly to more honestly reflect the current contributions. \n2. The authors should do a better job at explaining the details of the state definition, especially the student model features and the combination of data and current learner model. \n3. There is only one definition of the reward - related to batch number when the accuracy first exceeds a threshold. Is accuracy stable, can it drop back down below the threshold in the next epoch? The accuracy on a held-out test set is not guaranteed to be monotonically increasing, right? Is this a problem in practice (it seems to happen on your curves)? What about other potential reward definitions? And what would they potentially lead to? \n4. Experimental results are averaged over 5 repeated runs - a bit too small in my opinion. \n5. Can the authors show convergence of the teacher parameter \\theta? I think it is important to see how fast the teacher model converges, too. \n6. In some of your experiments, every training method converges to the same accuracy after enough training (Fig.2b), while in others, not quite (Fig. 2a and 2c). Why is this the case? Does it mean that you have not run enough iterations for the baseline methods? My intuition is that if the learner algorithm is convex, then ultimately they will all get to the same accuracy level, so the task is just to get there quicker. I understand that since the learner algorithm is an NN, this is not the case - but more explanation is necessary here - does your method also reduces the empirical possibility to get stuck in local minima? \n7. More explanation is needed towards Fig.4c. 
In this case, using a teacher model trained on a harder task (CIFAR10) leads to much improved student training on a simpler task (MNIST). Why?\n8. Although in terms of \"effective training data points\" the proposed method outperforms the other methods, in terms of time (Fig.5) the difference between it and say, NoTeach, is not that significant (especially at very high desired accuracy). More explanation needed here. \n\nRead the rebuttal and revision and slightly increased my rating.", "The authors define a deep learning model composed of four components: a student model, a teacher model, a loss function, and a data set. The student model is a deep learning model (MLP, CNN, and RNN were used in the paper). The teacher model learns via reinforcement learning which items to include in each minibatch of the data set. The student model learns according to a standard stochastic gradient descent technique (Adam for MLP and CNN, Momentum-SGD for RNN), appropriate to the data set (and loss function), but only uses the data items of the minibatch chosen by teacher model. They evaluate that their method can learn to provide learning items in an efficient manner in two situations: (1) the same student model-type on a different part of the same data set, and (2) adapt the teaching model to teach a new model-type for a different data set. In both circumstances, they demonstrate the efficacy of their technique and that it performs better than other reasonable baseline techniques: self-paced learning, no teaching, and a filter created by randomly reordering the data items filtered out from a teaching model.\n\nThis is an extremely impressive manuscript and likely to be of great interest to many researchers in the ICLR community. The research itself seems fine, but there are some issues with the discussion of previous work. Most of my comments focuses on this.\n\nThe authors write that artificial intelligence has mostly overlooked the role of teaching, but this claim is incorrect. There is a long history of research on teaching in artificial intelligence. Two literatures of note are intelligent tutoring and machine teaching in the computational learnability literature. A good historical hook to intelligent tutoring is Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228. 456-462. The literature is still healthy today. One offshoot of it has its own society with conferences and a journal devoted to it (The International Artificial intelligence in Education Society: http://iaied.org/about/). \n\nFor the computational learnability literature, complexity analysis for teaching has a subliterature devoted to it (analogous to the learning literature). Here is a hook into that literature: Goldman, S., & Kerns. M. (1995). On the complexity of teaching. Journal of Computer and Systems Sciences, 50(1), 20-31.\n\nOne last related literature is pedagogical teaching from computational cognitive science. This one is a more recent development. Here are two articles, one that provides a long and thorough discussion that is a definitive start to the literature, and another that is most relevant to the current paper, on applying pedagogical teaching to inverse reinforcement learning (a talk at NIPS 2016).\n\nShafto, P., Goodman, N. D., & Griffiths, T. L. (2014). A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive Psychology, 71, 55-89.\n\nHo, M. K., Littman, M., MacGlashan, J., Cushman, F., & Austerweil, J. L. (NIPS 2016). 
\n\nI hope all of this makes it clear to the authors that it is inappropriate to claim that artificial intelligence has “largely overlooked” or “largely neglected”. \n\nOne other paper of note given that the authors train a MLP is an optimal teaching analysis of a perceptron: (Zhang, Ohannessian, Sen, Alfeld, & Zhu, 2016; NIPS).\n\n\n", "This paper suggests a \"learning to teach\" framework. Following a similar intuition as self-paced learning and curriculum learning, the authors suggest to learn a teaching strategy, corresponding to choices over the data presented to the learner (and potentially other decisions about the learner, such as the algorithm used). The problem is framed as RL problem, where the state space corresponds to learning configurations, and teacher actions change the state. Supervision is obtained by observing the learner's performance. \n\nI found it very difficult to understand the evaluation. \nFirst, there is quite a bit of recent work on learning to teach and curriculum learning. It would be helpful if there are comparisons to these models, and use similar datasets. It's not clear if an evaluation on the MNIST data set is particularly meaningful. The implementation of SPL seems to hurt performance in some cases (slower convergence on the IMDB dataset), can you explain it? In other text learning task (e.g., [1]) SPL showed improved performance. The results over the IMDB dataset in the original paper [2] are higher than the ones reported here, using a simple model (BoW). \nSecond, in non-convex problems, one can expect curriculum learning approaches to also perform better, not just converge faster. This aspect is not really discussed. Finally, I'm not sure I understand the X axis in Figure 2, the (effective) number of examples is much higher than the size of the dataset. Does it indicate the number of iterations over the same dataset? \n\nI would also like to see some analysis of what's actually being learned by the teacher. Some qualitative analysis, or even feature ablation study would be helpful.\n\n[1] Easy Questions First? A Case Study on Curriculum Learning for Question Answering. Sachan et-al.\n[2] Learning Word Vectors for Sentiment Analysis. Maas et-al.", "Thank you for your review comments. We would like to make several points for the sake of clarification: \n\n (1) As the usage of MNIST, although we have reported the convergence results on this dataset (Fig. 2(a)), what we would like to emphasize is that the learnt teaching policy can be transferred across datasets and model structures: from simple dataset (e.g., MNIST) to relatively difficult dataset (e.g., CIFAR10, in Fig. 4(b)), and from simple models (e.g., MLP) to advance models (e.g., ResNet, in Fig. 4(b) ), for the sake of **improving convergence**. We agree that in terms of **improving final accuracy**, it is meaningful to compare with some recent literature on curriculum learning such as [1]. We leave it as future work and has not reported it currently because: a) To show that L2T also works for improving accuracy, we have provided a new experiment on improving IMDB classification accuracy in subsection 5.4 where SPL has gain over baseline; b) we have not found public code implementation of [1]. Given that there are quite a few domain specific heuristics in [1] and we have not worked on QA tasks before, it takes more efforts to exactly reproduce their results.\n\n(2) For SPL, we guarantee that our implementation is correct (details reported in section 5.1.2). 
We also noticed that SPL performs not very well on the IMDB dataset, and our explanation is that SPL may not be very robust when working with LSTM (actually, by analyzing different filtered data patterns in Fig. 3(a-c), we can observe this kind of incompatibility to some extent). However, we do not think our experimental finding is inconsistent with [2]. Please note that the better number (88.89%) on IMDB in [2] was obtained with additional unlabeled data. With labeled data only, their number is 88.33%, which is even worse than our implementation (88.5% as reported in Appendix 7.1.3). Furthermore, we never aim to propose a model to achieve state-of-the-art numbers on this task (i.e., IMDB). Our goal is to demonstrate the generality of L2T in achieving better convergence for various neural networks training tasks, and thus we take IMDB as a testbed for LSTM networks. Another important note: in the beginning of section 5.2, we have pointed out that we train our teacher model on the first half of the data and apply it to training the student model with the second half. This is to guarantee that we do not mix the data in the training and testing phases of the teacher model. As a result, the curves on IMDB in Fig 2(c) were all obtained using half of the standard training data, and it is understandable that the corresponding results are a little worse (about 85% accuracy). \n\n(3) We agree that sometimes better accuracy could be the goal of L2T. In this case, we just need to change the reward function from “the batch number when accuracy first exceeds a threshold” to “the final accuracy on a held-out set”. Please check our new subsection 5.4 for more results (also mentioned in our point (1)).\n\n(4) For Fig. 2, ‘effective training instances’ in the X axis include the instances used for training the model, which may contain multiple instances of the same data sample. It is possible that this number will exceed the size of the training dataset because one usually needs to sweep the dataset for multiple epochs during the training process. \n\n(5) For your last point, i.e., ‘some qualitative analysis, or even feature ablation study would be helpful’, we actually have already done this in our paper – please refer to Section 7.3 (State Feature Analysis’) of the Appendix. In that section, we do not only list the detailed features, but also conduct comprehensive ablation study on the effects of different features. \n\n", "Thanks very much for your interests and comments to our work! \n\n(1)\tWe have not claimed, and do not think that our experiments show that ‘SPL doesn’t work at all’. First, in terms of convergence speed, SPL slightly outperform baseline (NoTeach) on CIFAR-10 (Fig.2. (b)). Furthermore, it also works for the initialization process of LSTM on IMDB (Fig. 2(c)), but after the initiation process, the static pattern (gradually include the data) of SPL makes the data usage highly inefficient; Second, in terms of final accuracy, SPL is better than NoTeach, please refer to our new subsection 5.4. \n\n(2)\tThank you for the suggestion of changing the position of Appendix 7.3.2. The figure shows that most of the work done is based on **both combined features and model features**, not only combined features. Furthermore, the key difference of L2T and ‘ heuristics considered previously’ is the weights of these features in L2T are automatically learnt and transferred.\n\n(3)\tWe appreciate the suggestions of trying other signals in curriculum learning such as those used in [1]. 
As you say, some of the signals (prediction gain) are essentially similar to the loss values used in SPL, and they can be also included into our L2T feature space.\n", "We thank you very much for letting us know so many literatures about teaching, both from the view of cognitive science and computational learnability. All these works are related with ours and we have included them to make our manuscript more comprehensive. We have also removed some improper statements. \nHere are several preliminarily discussions on the difference of the listed literature and L2T:\n\n1.\tAs we pointed in the manuscript, some of the works hold strong assumption of an oracle existence for the teacher model, such as (Zhang, Ohannessian, Sen, Alfeld, & Zhu, 2016; NIPS) and (Goldman, S., & Kerns. M. On the complexity of teaching. Journal of Computer and Systems Sciences, 1995).\n\n2.\tThe literature of pedagogical teaching, especially its application to IRL (Ho, M. K., Littman, M., MacGlashan, J., Cushman, F., & Austerweil, J. L. ,NIPS 2016) is much closer to our setting in that the teacher adjusts its behavior in order to facilitate student learning, by communicating with student (i.e., showing not doing). Apart from some differences in experimental setup and application scenarios, the key difference between the two works is in L2T we are using automatically learnt strategies for the teacher model that can be transferred between tasks. Furthermore, applications of pedagogical teaching in IRL implies that the teacher model is still much stronger than student, somehow similar to the oracle existence assumption in point 1 since there is an expert in IRL that gives the (state, action) trajectories based on the optimal policy.\n", "Thank you very much for your detailed comments and suggestions. Here are several of our responses:\n\n(1)\tThank you for pointing the writing issues in section 3, as well as the suggestions on loss function design. Yes, in current manuscript we are demonstrating the effects of L2T in the scenario of data selection. Meanwhile we are exploring the potential of L2T for designing better loss functions for neural machine translation. We will update the paper once we get meaningful results.\n\n(2)\tFor the details of state definition, please refer to section 7.3 of the Appendix. In 7.3.1, we list the feature details and in 7.3.2 we conduct the ablation study of different features.\n\n(3)\tThe current reward function is designed for guiding the reinforcement learning process to achieve better convergence. It is true that we cannot guarantee the monotonicity of reward during the training of the teacher model. However, the general increasing trend can be observed. Please refer to our new Appendix 7.4 for more details (as well as teacher model parameter convergence). Furthermore, it is for sure that we can use other rewards such as the final accuracy on a held-out dev set when our aim is to improve the accuracy. We have included in the new version (subsection 5.4) and please have a check.\n\n(4)\tFor different final accuracies (your point 6), your intuition is right and it is simply an issue of figure drawing: for example, in Fig. 2(c), we will have to use a very wide figure if we want to draw the entire curves of SPL, given that SPL converges very slowly. In terms of final accuracy, different teaching strategies are roughly the same, with SPL on IMDB a little bit higher. 
On the other hand, in L2T if we use a new reward that indicates the final accuracy, not the convergence speed, we can achieve better final accuracy. All these facts are reflected in our new subsection 5.4.\n\n(5)\tFor Fig. 4(c), our explanations are as follows: first, CIFAR10 (colored images, diverse objects) contains more information than MNIST (black/white images, only digits objects); second, the ResNet model on CIFAR10 is much more powerful than the simple MLP model on MNIST. These two factors make the teacher model trained on CIFAR10 more precise and useful. Intuitively speaking, simple tasks are easy to solve and learning a teacher model becomes not critically necessary (and as a result, the learning of the teacher model on simple tasks might not be sufficient). In contrast, harder tasks are not easy to solve without the teacher model, and therefore the teacher model could be learned more sufficiently with more useful feedback signals.\n\n(6)\tFor the wall-clock time analysis in Fig.5, our current implementation of L2T is not optimal and the comparison is a little unfair to L2T. Actually, due to the limitation of Theano, since the computational graph is pre-built, we cannot directly implement the idea of L2T and will have to use some redundant computations. Specifically, our current implementation of L2T contains two rounds of feedforward (ff) process for the selected data via our teacher model. Ideally, we only need one round of feedforward, since the loss values out of this feedforward process do not only constitute the state features but also directly derive the gradients for backpropagation. However, given the limitation of Theano, we have to go through another round of ff: after the data is selected by the teacher model, they will re-enter the pre-built computational graph, leading to another round of (wasteful) feedforward. It is a pity that we have to bear such an inefficient implementation at this moment, and we plan to try more flexible deep learning toolkits in the future. So in theory, L2T will be significantly more efficient in terms of wall-clock time, although the current experiments could not show that significance. \n", "Dear Reviewer,\n\nWe've added a new experiment towards verifying the effectiveness of L2T in terms of improving accuracy (see subsection 5.4). We hope that the new results, together with our previous rebuttal for clarifications, can remove several of your concerns. Thanks.\n\nBest,\nThe Authors", "Dear Reviewers,\n\nWe thank all your constructive review comments, which definitely help the improvement of the paper! We are providing the first-round feedback for clarifications, without revisions of the current manuscript. After that, we will provide a new round of paper in the next several weeks. \n\nBest", "Thank you for a very interesting paper! It was very clearly written and I especially enjoyed the thorough discussion of related work.\n\nI was wondering if the authors tried a simpler (and potentially better) baselines than SPL? For example, some of the baselines considered in Graves et al. [1] should be very trivial to implement: prediction gain (I think this is very similar to SPL), gradient prediction gain, etc. 
Relatedly, any thoughts on why SPL doesn't work at all?\n\nFrom Appendix 7.3.2 (I recommend upgrading this to the main part of the paper, by the way, since I found it to be one of the most illuminating sections) it seems clear that most of the work done is based on the combined features, which look very similar to heuristics considered previously. \n\n[1] https://arxiv.org/pdf/1704.03003.pdf" ]
[ -1, -1, -1, 8, 9, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ", "SkV5DeTlG", "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ", "SkV5DeTlG", "rktvBfRWz", "rJPwQZYgM", "BkMvqjYgG", "SkV5DeTlG", "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ" ]
iclr_2018_Syhr6pxCW
PixelNN: Example-based Image Synthesis
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an ``incomplete'' signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to a (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags.
accepted-poster-papers
The paper proposes a novel method for conditional image generation which is based on nearest neighbor matching for transferring high-frequency statistics. The evaluation is carried out on several image synthesis tasks, where the technique is shown to perform better than an adversarial baseline.
train
[ "SJt3bbKgz", "BJ4AfUoeG", "SJt9X9kWz", "BJxxa-XzG", "HJ9SiWmfM", "ry2edZQMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Overall I like the paper and the results look nice in a diverse set of datasets and tasks such as edge-to-image, super-resolution, etc. Unlike the generative distribution sampling of GANs, the method provides an interesting compositional scheme, where the low frequencies are regressed and the high frequencies are obtained by \"copying\" patches from the training set. In some cases the results are similar to pix-to-pix (also in the numerical evaluation) but the method allows for one-to-many image generation, which is a important contribution. Another positive aspect of the paper is that the synthesis results can be analyzed, providing insights for the generation process. \n\nWhile most of the paper is well written, some parts are difficult to parse. For example, the introduction has some parts that look more like related work (that is mostly a personal preference in writting). Also in Section 3, the paragraph for distance functions do not provide any insight about what is used, but it is included in the next paragraph (I would suggest either merging or not highlighting the paragraphs).\n\nQ: The spatial grouping that is happening in the compositional stage, is it solely due to the multi-scale hypercolumns? Would the result be more inconsistent if the hypercolumns had smaller receptive field?\n\nQ: For the multiple outputs, the k neighbor is selected at random?\n", "This paper presents a pixel-matching based approach to synthesizing RGB images from input edge or normal maps. The approach is compared to Isola et al’s conditional adversarial networks, and unlike the conditional GAN, is able to produce a diverse set of outputs.\n\nOverall, the paper describes a computer visions system based on synthesizing images, and not necessarily a new theoretical framework to compete with GANs. With the current focus of the paper being the proposed system, it is interesting to the computer vision community. However, if one views the paper in a different light, namely showing some “blind-spots” of current conditional GAN approaches like lack of diversity, then it can be of much more interest to the broader ICLR community.\n\nPros: \nOverall the paper is well-written\nMakes a strong case that random noise injection inside conditional GANs does not produce enough diversity\nShows a number of qualitative and quantitative results\n\nConcerns about the paper:\n1.) It is not clear how well the proposed approach works with CNN architectures other than PixelNet\n2.) Since the paper used “the pre-trained PixelNet to extract surface normal and edge maps” for ground-truth generation, it is not clear whether the approach will work as well when the input is a ground-truth semantic segmentation map.\n3.) Since the paper describes a computer-vision image synthesis system and not a new theoretical result, I believe reporting the actual run-time of the system will make the paper stronger. Can PixelNN run in real-time? How does the timing compare to Isola et al’s Conditional GAN?\n\nMinor comments:\n1.) The paper mentions making predictions from “incomplete” input several times, but in all experiments, the input is an edge map, normal map, or low-resolution image. When reading the manuscript the first time, I was expecting experiments on images that have regions that are visible and regions that are masked out. However, I am not sure if the confusion is solely mine, or shared with other readers.\n\n2.) 
Equation 1 contains the norm operator twice, and the first norm has no subscript, while the second one has an l_2 subscript. I would expect the notation style to be consistent within a single equation (i.e., use ||w||_2^2, ||w||^2, or ||w||_{l_2}^2)\n\n3.) Table 1 has two sub-tables: left and right. The sub-tables have the AP column in different places.\n\n4.) “Dense pixel-level correspondences” are discussed but not evaluated.\n", "This paper proposes a compositional nearest-neighbors approach to image synthesis, including results on several conditional image generation datasets. \n\nPros:\n- Simple approach based on nearest-neighbors, likely easier to train compared to GANs.\n- Scales to high-resolution images.\n\nCons:\n- Requires a potentially costly search procedure to generate images.\n- Seems to require relevant objects and textures to be present in the training set in order to succeed at any given conditional image generation task.", "We thank the reviewer for their feedback.\n\n1. \"Requires a potentially costly search procedure to generate images.\" -\n\nWe agree that this approach could be computationally expensive in its naive form. However, the use of optimized libraries such as FAISS, FLAWN etc. can be used to reduce the run-time. Similar to CNNs, the use of parallel processing modules such as GPUs could drastically reduce the time spent on search procedure.\n\n\n2. \"Seems to require relevant objects and textures to be present in the training set in order to succeed at any given conditional image generation task.\"\n\nWe agree. However, this criticism could also be applied to most learning-based models (including CNNs and GANs, as R3 points out). \n", "We thank the reviewer for their comments and suggestions, and appreciate their effort to highlight our work for a broader ICLR community. We will incorporate the suggestions provided in the reviews.\n\n1. \"It is not clear how well the proposed approach works with CNN architectures other than PixelNet\"\n\nWe will add experiments with other architectures. However, we believe that our approach is agnostic of a pixel-level CNN used for regression. We used PixelNet because it had been shown to work well for the various pixel-level tasks, particularly the inverse of our synthesis problems (i.e., predicting surface normals and edges from images). The use of a single network architecture for our various synthesis problems reduces variability due to the regressor and lets us focus on the nearest neighbor stage. \n\n2. \"Since the paper used “the pre-trained PixelNet to extract surface normal and edge maps” for ground-truth generation, it is not clear whether the approach will work as well when the input is a ground-truth semantic segmentation map.\n\nThis is an interesting question. We have initial results that synthesize faces from the Helen Face dataset (Smith et al, CVPR 2013) from ground-truth segmentation masks. We see qualitatively similar behaviour. In many cases we even see better performance because the input signal (i.e., the ground-truth segmentation labels) are of higher quality than the edges/normals we condition on. We will add such an analysis and discussion.\n\n3. \"Since the paper describes a computer-vision image synthesis system and not a new theoretical result, I believe reporting the actual run-time of the system will make the paper stronger. Can PixelNN run in real-time? 
How does the timing compare to Isola et al’s Conditional GAN?\" \n\nOur approximate neighbor neighbor search (described on Page 6) takes .2 fps. We did not optimize our approach for speed. Importantly, we make use of a single CPU to perform our nearest neighbor search, while Isola et al makes use of a GPU. We posit that GPU-based nearest-neighbor libraries (e.g., FAISS) will allow for real-time performance comparable to Isola’s. We will add a discussion.\n", "We thank the reviewer for the suggestion to improve the writing, and will incorporate these suggestions in our final version.\n\n1. \"The spatial grouping that is happening in the compositional stage, is it solely due to the multi-scale hypercolumns? Would the result be more inconsistent if the hypercolumns had smaller receptive field?\" \n\nYes, we think so. We believe that much of the spatial grouping is due to the multi-scale hypercolumns. The results degrade with smaller receptive fields.\n\n2. \"For the multiple outputs, the k neighbor is selected at random?\" \n\nYes, the k-neighbors are selected at random as described in \"Efficient Search\" on page-6. We will clarify this.\n" ]
[ 8, 6, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_Syhr6pxCW", "iclr_2018_Syhr6pxCW", "iclr_2018_Syhr6pxCW", "SJt9X9kWz", "BJ4AfUoeG", "SJt3bbKgz" ]
iclr_2018_B1l8BtlCb
Non-Autoregressive Neural Machine Translation
Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian.
accepted-poster-papers
The paper proposes a novel method for training a non-autoregressive machine translation model based on a pre-trained auto-regressive model. The method is interesting and the evaluation is carried out well. It should be noted, however, that the relative complexity of the training procedure (which involves multiple stages and external supervision) might limit the practical applicability and impact of the technique.
train
[ "HJ-eMYYxz", "B1Zh3McgM", "rJKwhzhxM", "S1VYsm9mM", "B1oOt7qQf", "S1U8U7qXM", "Sk-GdGflM", "SJ7QvnZlM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "This work proposes non-autoregressive decoder for the encoder-decoder framework in which the decision of generating a word does not depends on the prior decision of generated words. The key idea is to model the fertility of each word so that copies for source words are fed as input to the encoder part, not the generated target words as inputs. To achieve the goal, authors investigated various techniques: For inference, sample fertility space for generating multiple possible translations. For training, apply knowledge distilation for better training followed by fine tuning by reinforce. Experiments for English/German and English/Romanian show comparable translation qualities with speedup by non-autoregressive decoding.\n\nThe motivation is clear and proposed methods are very sound. Experiments are carried out very carefully.\n\nI have only minor concerns to this paper:\n\n- The experiments are designed to achieve comparable BLEU with improved latency. I'd like to know whether any BLUE improvement might be possible under similar latency, for instance, by increasing the model size given that inference is already fast enough.\n\n- It'd also like to see other language pairs with distorted word alignment, e.g., Chinese/English, to further strengthen this work, though it might have little impact given that attention already capture sort of alignment.\n\n- What is the impact of the external word aligner quality? For instance, it would be possible to introduce a noise in the word alignment results or use smaller data to train a model for word aligner. \n\n- The positional attention is rather unclear and it would be better to revise it. Note that equation 4 is simply mentioning attention computation, not the proposed positional attention.", "This paper describes an approach to decode non-autoregressively for neural machine translation (and other tasks that can be solved via seq2seq models). The advantage is the possibility of more parallel decoding which can result in a significant speed-up (up to a factor of 16 in the experiments described). The disadvantage is that it is more complicated than a standard beam search as auto-regressive teacher models are needed for training and the results do not reach (yet) the same BLEU scores as standard beam search. \n\nOverall, this is an interesting paper. It would have been good to see a speed-accuracy curve which plots decoding speed for different sized models versus the achieved BLUE score on one of the standard benchmarks (like WMT14 en-fr or en-de) to understand better the pros and cons of the proposed approach and to be able to compare models at the same speed or the same BLEU scores. Table 1 gives a hint of that but it is not clear whether much smaller models with standard beam search are possibly as good and fast as NAT -- losing 2-5 BLEU points on WMT14 is significant. While the Ro->En results are good, this particular language pair has not been used much by others; it would have been more interesting to stay with a single well-used language pair and benchmark and analyze why WMT14 en->de and de->en are not improving more. Finally it would have been good to address total computation in the comparison as well -- it seems while total decoding time is smaller total computation for NAT + NPD is actually higher depending on the choice of s.\n ", "This paper can be seen as an extension of the paper \"attention is all you need\" that will be published at nips in a few weeks (at the time I write this review). 
\n\nThe goal here is to make the target sentence generation non-autoregressive. The authors propose to introduce a set of latent variables to represent the fertility of each source word. The number of target words can then be derived and they're all predicted in parallel.\n\nThe idea is interesting and trendy. However, the paper is not really stand-alone. A lot of tricks are stacked to reduce the performance degradation. However, they're sometimes too briefly described to be understood by most readers. \n\nThe training process looks highly elaborate with a lot of hyperparameters. Maybe you could comment on this. \n\nFor instance, the use of fertility supervision during training could be better motivated and explained. Your choice of IBM 2 is weird since it doesn't include fertility. Why not IBM 4, for instance? How do you use the IBM model for supervision? This is a simple example, but a lot of things in this paper are too briefly described and their impact not really evaluated. ", "As explained in the response to Reviewer 2, we decided to standardize on a single model size for the WMT experiments, but acknowledge that an evaluation of comparative performance at different sizes would be a worthwhile follow-up. However, direct comparison of the NAT at one model size to an autoregressive Transformer at a different model size may not be especially informative, because our NAT relies on an autoregressive teacher with the same model size in order to initialize the encoder.\n\nThe presence of the model distillation step also suggests that a meaningful BLEU improvement from larger NAT model size is unlikely, since only the NPD step allows the NAT to outperform its autoregressive teacher, even in the best case.\n\nWe believe that the NAT's gap in performance between the English/German language pair and the English/Romanian language pair suggests that it is sensitive to the degree of reordering; we agree that it would be worthwhile to follow up on this hypothesis with pairs like Japanese/English or Turkish/English that exhibit even more reordering than German/English does.\n\nThe external aligner we used produces fairly noisy results; our experiments with using the attention weights from an autoregressive Transformer as a (potentially more powerful) alignment model resulted in somewhat worse performance, suggesting that the dependence on alignment quality may not be straightforward.\n\nWe can revise our description of the positional attention layer.", "The reviewer has brought up an interesting point about comparison of models at different sizes.\n\nWe agree that the gap on English/German WMT14 is large enough that a relatively smaller autoregressive Transformer, especially on short sentences and with highly optimized inference kernels, might achieve similar latency and accuracy to the NAT. But no amount of kernel optimization or model size reduction can change the sequential nature of autoregressive translation; the autoregressive latency will always be proportional to the sentence length and the NAT will be faster when sequences are sufficiently long. 
The non-autoregressive Transformer can also benefit from low-level optimizations like quantization; we believe we compared the two on an even footing by using similar implementations for both.\n\nAlso, while the original Transformer paper provided a strong set of baseline hyperparameters for the autoregressive architecture given a particular model size, we would need to conduct a significant amount of additional search to identify the right parameter settings for other model sizes. Instead we chose to focus our computational resources on the ablation study and more language pairs.\n\nWe think the difference between the EN<–>DE and EN<–>RO results may be the result of a greater need for long-distance (clause-level) reordering between English and German (which are closely related languages with significant differences in sentence structure) than between English and Romanian (which, while less closely related, have more similarities in word order); this is an interesting direction for future research.\n\nAs for computation time, we are making the assumption that a significant amount of parallelism is available and the primary metric is the latency on the critical path. This is not necessarily the case for every deployment of machine translation in practice, but it is a metric on which existing neural MT systems perform particularly poorly. Given that assumption, the additional computation needed for NPD, while potentially significant in terms of throughput, would not result in more than a doubling of latency.", "We agree that the methodology presented in this paper contains several moving parts and a few techniques, such as external alignment supervision, sequence-level knowledge distillation, and fine-tuning using reinforcement learning, that could be considered “tricks.” All of these techniques are targeted at solving the multi-modality problem introduced when performing non-autoregressive translation, but if there’s a particular part of the pipeline that you feel is unclear, we would be happy to improve the description and explanation.\n\nAlso, we would argue that our approach introduces relatively few additional hyperparameters over the original Transformer, primarily the inclusion of the various fine-tuning losses and the number of fertility samples used at inference time. While we did not conduct an exhaustive search over e.g. a range of possible values for the weights on each fine-tuning loss, we tried to present a reasonably comprehensive set of ablations to identify the effect of each part of our methodology. \n\nWe also agree that IBM 2 might not be the best possible choice of fertility inference model, since the model itself only considers fertility implicitly (as part of the alignment process) and not explicitly like IBM 3+. Our decision to use IBM 2 was based on the availability, performance, and ease of integration of a popular existing implementation (fast_align) of that particular alignment model. Meanwhile, the use of fertility supervision in the first place can be justified from two perspectives: from the variational inference perspective, an external alignment model provides a very simple and tractable proposal distribution; at a higher level, fertility supervision simply turns a difficult, unsupervised learning problem into an easier, supervised one.", "You're right about the first error; that's a typo. 
For the second point, the fertility sequence is [2, 0, 1] because our analysis (and the network) counts the period/full stop as a third source/target token.", "Page 3, Paragraph 1: \"The factorization by length introduced .... first and third property but not the **first.**\"\nPage 6, 4.1.: Why is the fertility sequence [2,0,1] ? If I understood fertility correctly, I think it should have been [2,0] since number of tokens in the source sentence is 2." ]
[ 7, 7, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1l8BtlCb", "iclr_2018_B1l8BtlCb", "iclr_2018_B1l8BtlCb", "HJ-eMYYxz", "B1Zh3McgM", "rJKwhzhxM", "SJ7QvnZlM", "iclr_2018_B1l8BtlCb" ]
iclr_2018_HJtEm4p6Z
Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on a single GPU server.
accepted-poster-papers
The paper describes a production-ready neural text-to-speech system. The algorithmic novelty is somewhat limited, as the fully-convolutional sequence model with attention is based on the previous work. The main contribution of the paper is the description of the complete system in full detail. I would encourage the authors to expand on the evaluation part of the paper, and add more ablation studies.
train
[ "S1c4VEXWz", "r1Ps9aPez", "rJo8vWqgM", "SJVdGOpQG", "HJ-WJuTXf", "SJJktDpQz", "r1ciaLa7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper provides an overview of the Deep Voice 3 text-to-speech system. It describes the system in a fair amount of detail and discusses some trade-offs w.r.t. audio quality and computational constraints. Some experimental validation of certain architectural choices is also provided.\n\nMy main concern with this work is that it reads more like a tech report: it describes the workings and design choices behind one particular system in great detail, but often these choices are simply stated as fact and not really motivated, or compared to alternatives. This makes it difficult to tell which of these aspects are crucial to get good performance, and which are just arbitrary choices that happen to work okay.\n\nAs this system was clearly developed with actual deployment in mind (and not purely as an academic pursuit), all of these choices must have been well-deliberated. It is unfortunate that the paper doesn't demonstrate this. I think this makes the work less interesting overall to an ICLR audience. That said, it is perhaps useful to get some insight into what types of models are actually used in practice.\n\nAn exception to this is the comparison of \"converters\", model components that convert the model's internal representation of speech into waveforms. This comparison is particularly interesting because some of the results are remarkable, i.e. Griffin-Lim spectrogram inversion and the WORLD vocoder achieving very similar MOS scores in some cases (Table 2). I wish there would be more of that kind of thing in the paper. The comparison of attention mechanisms is also useful.\n\nI'm on the fence as I think it is nice to get some insight into a practical pipeline which benefits from many current trends in deep learning research (autoregressive models, monotonic attention, ...), but I also feel that the paper is a bit meager when it comes to motivating all the architectural aspects. I think the paper is well written so I've tentatively recommended acceptance.\n\n\nOther comments:\n\n- The separation of the \"decoder\" and \"converter\" stage is not entirely clear to me. It seems that the decoder is trained to predict spectrograms autoregressively, but its final layer is then discarded and its hidden representation is then used as input to the converter stage instead? The motivation for doing this is unclear to me, surely it would be better to train everything end-to-end, including the converter? This seems like an unnecessary detour, what's the reasoning behind this?\n\n- At the bottom of page 2 it is said that \"the whole model is trained end-to-end, excluding the vocoder\", which I think is an unfortunate turn of phrase. It's either end-to-end, or it isn't.\n\n- In Section 3.3, the point of mixing of h_k and h_e is unclear to me. Why is this done?\n\n- The gated linear unit in Figure 2a shows that speaker embedding information is only injected in the linear part. Has this been experimentally validated to work better than simpler mechanisms such as adding conditioning-dependent biases/gains?\n\n- When the decoder is trained to do autoregressive prediction of spectrograms, is it autoregressive only in time, or also in frequency? I'm guessing it's the former, but this means there is an implicit independence assumption (the intensities in different frequency bins are conditionally independent, given all past timesteps). Has this been taken into consideration? 
Maybe it doesn't matter because the decoder is never used directly anyway, and this is only a \"feature learning\" stage of sorts?\n\n- Why use the L1 loss on spectrograms?\n\n- The recent work on Parallel WaveNet may allow for speeding up WaveNet when used as a vocoder; this could be worth looking into seeing as inference speed is used as an argument to choose different vocoder strategies (with poorer audio quality as a result).\n\n- The title heavily emphasizes that this model can do multi-speaker TTS with many (2000) speakers, but that seems to be only a minor aspect that is only discussed briefly in the paper. And it is also something that preceding systems were already capable of (although maybe it hasn't been tested with a dataset of this size before). It might make sense to rethink the title to emphasize some of the more relevant and novel aspects of this work.\n\n\n----\n\nRevision: the authors have adequately addressed quite a few instances where I feel motivations / explanations were lacking, so I'm happy to increase my rating from 6 to 7. I think the proposed title change would also be a good idea.", "This paper discusses a text-to-speech system which is based on a convolutional attentive seq2seq architecture. It covers experiments on a few datasets, testing the model's ability to handle increasing numbers of speakers.\n\nBy and large, this is a \"system\" paper - it mostly describes the successful application of many different existing ideas to an important problem (with some exceptions, e.g. the novel method of enforcing monotonic alignments during inference). In this type of paper, I typically am most interested in hearing about *why* a particular design choice was made, what alternatives were tried, and how different ideas worked. This paper is lacking in this regard - I frequently was left looking for more insight into the particular system that was designed. Beyond that, I think more detailed description of the system would be necessary in order to reimplement it suitably (another important potential takeaway for a \"system\" paper). Separately, the thousands-of-speakers results are just not that impressive - a MOS of 2 is not really usable in the real world. For that reason, I think it's a bit disingenuous to sell this system as \"2000-Speaker Neural Text-to-Speech\".\n\nFor the above reasons, I'm giving the paper a \"marginally above\" rating. If the authors provide improved insight, discussion of system specifics, and experiments, I'd be open to raising my review. Below, I give some specific questions and suggestions that could be addressed in future drafts.\n\n- It might be worth giving a sentence or two defining the TTS problem - the paper is written assuming background knowledge about the problem setting, including different possible input sources, what a vocoder is, etc. The ICLR community at large may not have this domain-specific knowledge.\n- Why \"softsign\" and not tanh? Seems like an unusual choice.\n- What do the \"c\" and \"2c\" in Figure 2a denote?\n- Why scale (h_k + h_e) by \\sqrt{0.5} when computing the attention value vectors?\n- \"An L1 loss is computed using the output spectrograms\" I assume you mean the predicted and target spectrograms are compared via an L1 loss. Why L1?\n- In Vaswani et al., it was shown that a learned positional encoding worked about as well as the sinusoidal position encodings despite being potentially more flexible/less \"hand-designed\" for machine translation. Did you also try this for TTS? 
Any insight?\n- Some questions about monotonic attention: Did you use the training-time \"soft\" monotonic attention algorithm from Raffel et al. during training and inference, or did you use the \"hard\" monotonic attention at inference time? IIUC the \"soft\" algorithm doesn't actually force strict monotonicity. You wrote \"monotonic attention results in the model frequently mumbling words\", can you provide evidence/examples of this? Why do you think this happens? The monotonic attention approach seems more principled than post-hoc limiting softmax attention to be monotonic, why do you think it didn't work as well?\n- I can't find an actual reference to what you mean by a \"wavenet vocoder\". The original wavenet paper describes an autoregressive model for waveform generation. In order to use it as a vocoder, you'd have to do conditioning in some way. How? What was the structure of the wavenet you used? Why? These details appear to be missing. All you write is the sentence (which seems to end without a period) \"In the WaveNet vocoder, we use mel-scale spectrograms from the decoder to condition a Wavenet, which was trained separated\".\n- Can you provide examples of the mispronunciations etc. which were measured for Table 1? Was the evaluation of each attention mechanism done blindly?\n- The 2.07 MOS figure produced for tacotron seems extremely low, and seems to indicate that something went wrong or that insufficient care was taken to report this baseline. How did you adapt tacotron (which as I understand is a single-speaker model) to the multi-speaker setting?\n- Table 3 begs the question of whether Deep Voice 3 can outperform Deep Voice 2 when using a wavenet vocoder on VCTK (or improve upon the poor 2.09 MOS score reported). Why wasn't this experiment run?\n- The paragraph and appendix about deploying at scale is interesting and impressive, but seems a bit out of place - it probably makes more sense to include this information in a separate \"systems\" paper.", "The paper presents a speech synthesis system based on convolution neural networks. The proposed approach is an end-to-end characters to spectrogram system, trained on a very large dataset. The paper also introduces a attention model and can be used with various waveform synthesis methods. The proposed model is shown to match the state -of-the-art approaches performance in speech naturalness. \n\nThe paper is clearly written and easy to follow. The relation to previous works is detailed and clear.\n\nThe contributions of the paper are significants and an important step towards practical and efficient neural TTS system. The ability to train on a large corpus of speaker 10 times faster than current models is impressive and important for deployment, as is the cost-effective inference and the monotonic attention model. \nThe experiments on naturalness (Table 2) are convincing and show the viability of the approach. However, the experiments on multi-speaker synthesis (Table 3) are not very strong. The proposed model seems to need to use Wavenet as a vocoder to possibly outperform Deep Voice 2, which will slow down the inference time, one of the strong aspect of the proposed model.\n\nOther comments:\n\n* In Section 2, it is mentioned that RNN-based approaches can leads to attention errors, can the authors elaborate more on that aspect ? 
It seems important as the proposed approach alleviates these issues, but it is not clear from the paper what these errors are and why they happen.\n\n* In Table 3 there seems to be missing models compared to Table 2, like Tacotron with Wavenet, the authors should explain why in the text. \n\n* The footnote 2 on page 3 looks important enough to be part of the main text.\n", "\n- \"Some questions about monotonic attention: ... \"\n* We have modified the paragraph to address these questions:\n\"Production-quality TTS systems have very low tolerance for attention errors. Hence, besides positional encodings, we consider additional strategies to minimize the cases of repeating or skipping words. One approach is to substitute the canonical attention mechanism with the monotonic attention mechanism introduced in Raffel et al. (2017), which approximates hard-monotonic stochastic decoding with soft-monotonic attention by training in expectation.\\footnote{The paper Raffel et al. (2017) also proposes hard monotonic attention process by sampling. It aims to improve the inference speed by only attending over states that are selected via sampling, and thus avoiding compute over future states. In our work, we did not benefit from such speedup, and we observed poor attention behavior in some cases, e.g. being stuck on the first or last character.} Despite the improved monotonicity, this strategy may yield a more diffused attention distribution. In some cases, several characters are attended at the same time and high quality speech couldn't be obtained. We attribute this to the unnormalized attention coefficients of the soft alignment, potentially resulting in weak signal from the encoder. Thus, we propose an alternative strategy of constraining attention weights only at inference to be monotonic, preserving the training procedure without any constraints. Instead of computing the softmax over the entire input, we instead compute the softmax only over a fixed window starting at the last attended-to position and going forward several timesteps~\\footnote{We use a window size of 3 in our experiments.}. The initial position is set to zero and is later computed as the index of the highest attention weight within the current window. This strategy also enforces monotonic attention distribution at inference, as shown in Fig. 4 and yields superior speech quality. \"\n\n- \"I can't find an actual reference to what you mean by a \"wavenet vocoder\" ... \"\n* We have added the following paragraph and footnotes for clarification:  \n\"We separately train a WaveNet to be used as a vocoder treating mel-scale log-magnitude spectrograms as vocoder parameters. These vocoder parameters are input as external conditioners to the network. The training procedure and the architecture besides the conditioner are similar to the WaveNet described in Deep Voice 2. While the WaveNet in Deep Voice 2 is conditioned with linear-scale log-magnitude spectrograms, we observed better performance with mel-scale spectrograms, which corresponds to a more compact representation of audio. To predict mel-scale spectrograms, L1 loss on linear-scale spectrograms are also used besides L1 loss on mel-scale spectrograms at decoder. \\footnote{A loss on the converter output can be considered in the context of multi-task learning in conjunction with decoder output, since the goal is to improve the estimation accuracy of the mel-scale spectrograms at the converter input.}\"\n\n- \"Can you provide examples of the mispronunciations etc ... 
\" \n* We add a footnote 11 to exemplify attention errors. Our error evaluation is done blindly.\n\n - \"The 2.07 MOS figure produced for tacotron seems extremely low ... How did you adapt tacotron to multi-speaker setting?\"\n* We use the multi-speaker Tacotron described in the Deep Voice 2 paper, which is only a proof-of-concept implementation. We include this result for completeness. We will clarify it in main text.\n\n- \"Table 3 begs the question of whether Deep Voice 3 can outperform Deep Voice 2 when using a wavenet vocoder ... \"\n* We didn't have sufficient time and resources to perform hyperparameter search for WaveNet on VCTK and LibriSpeech datasets. We leave it for future work.  \n\n - \"The paragraph and appendix about deploying at scale is interesting and impressive ... a separate \"systems\" paper.\"\n* Thanks for your suggestion! We don't think the contributions for deployment at scale warrants a separate paper, so we prefer to keep the majority of content in the appendix in case other engineers and researchers may benefit from it.", "As another reviewer also raised a similar concern, we have paid significant attention to improve the explanations to motivate the model architecture choices in this paper.  For example: \n1) We have added a new section (\"Convolution Blocks for Sequential Processing\") to motivate the architecture design choices for the convolution blocks used in our model. \n2) In the \"Encoder\" section, we have added the motivation behind mixing key vector h_k and embedding h_e. (This is in regards to your question about mixing h_k and h_e.)\n3) In the \"Decoder\" section, we have expanded the explanation of query generation for attention and explain the motivation to use L1 loss.\n4) In the \"Attention Block\" section, we have added more explanations for our attention mechanism choices, attention's role in the overall architecture, the choice of positional encodings, and techniques to minimize attention errors.\n5) In the \"Converters\" section, we have added clarification and justification for the relationship between the decoder hidden state and the converter/vocoder.\n\n We note that due to the required additions by the Reviewers, our page limit has exceeded the suggested.\n\n - \"Separately, the thousands-of-speakers results are just not that impressive - a MOS of 2 is not really useable in the real-world ... \"\n* Since this submission, we have worked on optimizing the hyperparameters further. We were able to improve MOS from 2.09 to 2.37 by increasing the dimensionality of the speaker embedding and to 2.89 with the WORLD vocoder. We have updated our draft to reflect the improvement. In addition, it should be noted that LibriSpeech is a dataset for automatic speech recognition (ASR), which is recorded in various environments and often contains noticeable background noise. This characteristic is helpful for the robustness of an ASR system, but is harmful for a TTS system. In the literature, Yamagishi et al. (2010) built TTS systems using several ASR corporas with much fewer speakers, and the highest MOS 2.8 is on WSJ dataset which is \"cleaner\" than Librispeech. We expect a higher MOS score with a \"TTS-quality\" dataset that is at the scale of LibriSpeech, but it is very expensive to collect. Also, we are considering to change the title to \"Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning\" in the final version.\n\n- “It might be worth giving a sentence or two defining the TTS problem ... 
”\n* We have modified the first paragraph to introduce definitions.\n\n - \"Why \"softsign\" and not tanh? \"\n* Softsign is preferred over tanh, because it does not saturate as easily as nonlinearities based on the exponential function and still yields sufficiently large gradients for large inputs. We have added a description in Section 3.3.\n\n- \"What do the \"c\" and \"2c\" in Figure 2a denote?\"\n* \"c\" denotes the dimensionality of the input. We have added this clarification to the caption of Fig. 2a.\n\n- \"Why scale (h_k + h_e) by \\sqrt{0.5} when computing the attention value vectors?\"\n* The scaling factor \\sqrt{0.5} ensures that we preserve the unit variance early in training. It is explained in footnote 4. \n\n- \"An L1 loss is computed using the output spectrograms ... Why L1?\"\n* Prediction of spectrograms is treated as a regression problem. We chose the L1 loss since it yielded the best results empirically. Other common regression loss functions such as L2 loss may suffer from outlier spectral features (which may correspond to non-speech noise). We have clarified this point in Section 3.5. \n\n- \"In Vaswani et al., it was shown that a learned positional encoding ... \"\n* We didn't try the learned positional encoding in our system. The benefit of adding the positional encoding is significant only at the beginning of training and we do not expect superior audio quality by simply using different positional encodings. We have added these additional details in the paper.", "To incorporate your and other reviewers' suggestions, we have expanded discussions on the motivations behind the design choices in this paper. For example: \n1) We have added a new section (\"Convolution Blocks for Sequential Processing\") to motivate the architecture design choices for the convolution blocks used in our model. \n2) In the \"Encoder\" section, we have added the motivation behind mixing key vector h_k and embedding h_e. \n3) In the \"Decoder\" section, we have expanded the explanation of query generation for attention and explained the motivation to use the L1 loss.\n4) In the \"Attention Block\" section, we have added more explanations for our attention mechanism choices, attention's role in the overall architecture, the choice of positional encodings, and techniques to minimize attention errors.\n5) In the \"Converters\" section, we have added clarification and justification for the relationship between the decoder hidden state and the converter/vocoder.\n\nWe note that due to the required additions, our page limit has exceeded the suggested length.\n\nOther comments:\n\n- \"The separation of the \"decoder\" and \"converter\" stage is not entirely clear to me ...\"\n* For a complex deep learning model like a TTS system, it can be challenging to train end-to-end in practice - instead, auxiliary/intermediate losses and multi-task learning may be preferred to guide the training of the whole system. In the decoder architecture, the loss for mel-scale spectrogram generation guides the training of the attention mechanism, because the parameters are trained with the gradients from mel-scale spectrogram generation besides vocoder parameter generation. Our experiments suggest that a mel-scale spectrogram is a compact audio representation with sufficient information content to train a robust attention mechanism. Using mel-scale spectrograms yields fewer attention mistakes, compared to other high-dimensional audio representations (e.g., linear spectrogram, or other vocoder parameters). 
We observe that inputting the last hidden states of the decoder rather than mel-scale spectrograms to the converter network yields slightly higher audio quality. We attribute this to the richer information content of the hidden states, as a mel-scale spectrogram is a fixed representation. Since WaveNet is conducive to producing high quality audio directly from mel-scale spectrograms, for WaveNet vocoder, we use mel-scale spectrograms as the external conditioners to the WaveNet architecture.\n\n- \"At the bottom of page 2 it is said that \"the whole model is trained end-to-end, excluding the vocoder\" ... \"\n* We have removed this phrase.\n\n- \"In Section 3.3, the point of mixing of h_k and h_e is unclear to me. Why is this done?\"\n* We have added an explanation for this design choice: \"The attention value vectors are computed from attention key vectors and text embeddings, $h_v = \\sqrt{0.5} (h_k + h_e)$, as in  (Gehring et al., 2017), to jointly consider the local information in $h_e$ and the learned long-term context information in $h_k$.\"\n\n - \"The gated linear unit in Figure 2a shows that speaker embedding information is only injected in the linear part. Has this been validated to work experimentally better ... \"\n* We have compared various alternatives to make convolution blocks speaker-dependent, including adding speaker-dependent biases and/or gains. The particular choice in the paper has yielded the best results empirically.  \n\n- \"When the decoder is trained to do autoregressive prediction of spectrograms, is it autoregressive only in time, or also in frequency? ... \"\n* The prediction of spectrogram is autoregressive only in time, so there is an implicit conditional independence assumption across frequency bins given all past timesteps. This design choice is important to achieve faster inference, and it yields good enough result as we demonstrated.  As you mentioned, converter network plays a more important role in determining the audio quality, and it is non-causal (and hence is not autoregressive).\n\n- “Why use the L1 loss on spectrograms?”\n* Prediction of spectrograms is treated as a regression problem. We choose L1 loss since it yields the best result empirically. Other common regression loss functions such as L2 loss may suffer from outlier spectral features (which may correspond to non-speech noise). We have clarified this point in Section 3.5. \n\n- \"The recent work on Parallel WaveNet .... \"\n* Thanks for pointing it out. Parallel WaveNet can be integrated as a vocoder, which may yield better audio quality while still achieving fast inference. We think it is an important future direction and leave it to future work. \n\n- \"... It might make sense to rethink the title to emphasize some of the more relevant and novel aspects of this work.\"\n* Thanks for your suggestion. We are considering to change the title to \"Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning\".", "- \"The experiments on naturalness (Table 2) are convincing and show the viability of the approach. However, the experiments on multi-speaker synthesis (Table 3) are not very strong ...\"\n* Since our submission, we have worked on optimizing the hyperparameters of multi-speaker synthesis further. For Librispeech dataset, we were able to improve MOS to 2.37 by increasing the dimensionality of the speaker embedding and to 2.89 with the WORLD vocoder. We have updated our draft to reflect the improvement. 
\n\nOther comments:\n\n- \"In Section 2, it is mentioned that RNN-based approaches can leads to attention errors, can the authors elaborate more on that aspect? It seems important as the proposed approach alleviates these issues, but it is not clear from the paper what these errors are and why they happen.\"\n* The attention errors are attributed to canonical attention mechanism rather than the recurrent layers themselves. We have removed that phrase in Section 2 to avoid confusion. We have also added a footnote 11 to exemplify common attention errors.\n\n - \"In Table 3 there seems to be missing models compared to Table 2, like Tacotron with Wavenet, the authors should explain why in the text. \"\n * We have added the reason to the caption of Table 3: \"Deep Voice 2 and Tacotron systems were not trained for the LibriSpeech dataset due to the prohibitively long time required to optimize hyperparameters.\"\n\n - \"The footnote 2 on page 3 looks important enough to be part of the main text.\"\n* Thanks for your suggestion. We have integrated the footnote into the main text." ]
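For concreteness, the inference-time attention constraint described in the responses above (computing the softmax only over a small window that starts at the last attended-to position, with a window size of 3, and moving the position to the index of the largest weight inside that window) can be sketched as follows. This is a minimal illustration under those stated assumptions, not the authors' implementation; the function and variable names are invented for the example.

```python
import numpy as np

def windowed_monotonic_attention(scores, window=3):
    """Sketch of the inference-time monotonic constraint: at each decoder step the
    softmax is taken only over a fixed window starting at the last attended-to
    position, and the position is then moved to the argmax inside that window.

    scores: array of shape (num_decoder_steps, num_encoder_positions) holding the
            raw attention logits (assumed to come from the attention block).
    Returns attention weights of the same shape, monotonic by construction.
    """
    num_steps, num_positions = scores.shape
    weights = np.zeros_like(scores, dtype=np.float64)
    pos = 0  # the initial attended position is set to zero
    for t in range(num_steps):
        lo, hi = pos, min(pos + window, num_positions)
        local = np.exp(scores[t, lo:hi] - scores[t, lo:hi].max())  # numerically stable softmax
        weights[t, lo:hi] = local / local.sum()                    # zero weight outside the window
        pos = lo + int(np.argmax(weights[t, lo:hi]))               # next window starts at the argmax
    return weights
```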
[ 7, 6, 6, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HJtEm4p6Z", "iclr_2018_HJtEm4p6Z", "iclr_2018_HJtEm4p6Z", "HJ-WJuTXf", "r1Ps9aPez", "S1c4VEXWz", "rJo8vWqgM" ]
iclr_2018_r1Ddp1-Rb
mixup: Beyond Empirical Risk Minimization
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
accepted-poster-papers
The paper presents a simple but surprisingly effective data augmentation technique which is thoroughly evaluated on a variety of classification tasks, leading to improvement over state-of-the-art baselines. The paper is somewhat lacking a theoretical justification beyond intuitions, but extensive evaluation makes up for that.
val
[ "SJsGHZyrz", "BJrLsP6Ez", "HJrOW5zeG", "H1ZcJ0rgz", "S1EEd8e-G", "r1LTMFT7z", "S13cNOaXz", "SyaZqVn7M", "HJjVr_hXz", "H1I9EpTAW" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Thanks for your interests in our work!\n\nWe could use the floating point value directly with the cross-entropy loss. Note that the cross-entropy loss function (https://en.wikipedia.org/wiki/Cross_entropy) can be written as:\nl(p, q) = \\sum_i p_i * log(q_i) = p^T log(q) (1)\nwhere i is the category index (e.g. class label index) and p, q are two discrete distributions.\n\n- In typical ERM training, p is the one-hot encoding of the target, and q is the predicted softmax probabilities.\n- If we convexly combine two one-hot target vectors y1 and y2, i.e. p = lam * y1 + (1 - lam) * y2, then since the resulting p represents a discrete distribution, (1) still works without modification. (For example, in Tensorflow you can feed p into `tf.nn.softmax_cross_entropy_with_logits`, as suggested in https://github.com/tensorflow/tensorflow/blob/abf3c6d745c34d303985f210bf9e92cac99ba744/tensorflow/python/ops/nn_ops.py#L1713 ; in PyTorch you may need to implement (1) by writing a customized loss function.)\n- Alternatively, note that l(p, q) is linear in p, which means in the above case, l(p, q) = l(lam * y1 + (1 - lam) * y2, q) = lam * l(y1, q) + (1 - lam) * l(y2, q). Hence you can simply use the cross-entropy loss function in your favorite ML framework to compute l(y1, q) and l(y2, q), as you do for ERM, and then use lam to convexly combine them to get l(p, q). This is what we use in our implementation.\n\nFinally, note that label smoothing (https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf) can also be implemented using (1) by writing a customized loss function.\n\nHope the above explanation is clear. Thanks!", "Dear authors,\n\nI came across your paper when I was doing literature review for my own project. I really enjoyed it as it's well written, easy to follow and showed great empirical performance. I would love to try this in my own project, however, I got confused by the label mixing part for classification task. \n\nFollowing up on your explanation 5), you obtained a smoothed label as (0.7, 0.3, 0). I also found in your paper you claimed that \"To compare mixing inputs and labels with mixing inputs only, we either use a convex combination of the two one-hot encodings as the target, or select the one-hot encoding of the closer training sample as the target.\" The below points are based on my speculation: \n\nIf you used this float value directly, can you elaborate more on what type of loss function did you apply to train your network? \n\nIf you used one-hot encoding that is closer to the training target, if I am understanding correctly, (0.7, 0.3, 0) is selected as (1, 0, 0). Then all the mix-up labels would only depend on the values of alpha: y = alpha y_1 + (1-alpha) y_2, if alpha > 0.5, then y = y1, and if alpha < 0.5, y = y2. The mix-up procedure would produce a training sample like ( alpha x_1 + (1-alpha) x_2, y_1), which feels like adding perturbations to the input.\n\nI hope I am understanding your paper correctly and would love to hear your explanation. Thanks so much!", "This paper studies an approach of data augmentation where a convex combination of multiple samples is used as a new sample. While the use of such convex combination (mixing features) is not new, this paper proposes to use a convex combination of corresponding labels as the label of the new sample (mixing labels). 
The authors motivate the proposed approach in the context of vicinal risk minimization, but the proposed approach is not well supported by theory. Experimental results suggest that the proposed approach significantly outperforms the baseline of using only the standard data augmentation studied in Goyal et al. (2017).\n\nWhile the idea of mixing not only features but also labels is new and interesting, its advantage over the existing approach of mixing only features is not shown. The authors mention \"interpolating only between inputs with equal label did not lead to the performance gains of mixup,\" but this is not shown in the experiments. The authors cite recent work by DeVries & Taylor (2017) and Pereyra et al. (2017), but the technique of combining multiple samples for data augmentation have been a popular approach. See for example a well cited paper by Chawla et al. (2002). The baseline should thus be mixing only features, and this should be compared against the proposed approach of mixing both features and labels.\n\nN. V. Chawla et al., SMOTE: Synthetic Minority Over-sampling Technique, JAIR 16: 321-357 (2002).\n\nMinor comments:\n\nFigure 1(b): How should I read this figure? For example, what does the color represent?\n\nTable 1: What is an epoch for mixup? How does the per epoch complexity of mixup copare against that of ERM?\n\nTable 2: The test error seems to be quite sensitive to the number of epochs. Why not use validation to determine when to stop training?\n\nTable 2: What is the performance of mixup + dropout?\n\n===\n\nI appreciate the thorough revision. The empirical advantages over baselines including SMOTE and others are now well demonstrated in the experimental results. It is also good to see that mixup is complementary to dropout, and the combined method works even better than either.\n\nI understand and appreciate the authors' argument as to why mixup should work, but it is not sufficiently convincing to me why a convex combination in Euclidean space should produce good data distribution. Convex combination certainly changes the manifold. However, the lack of sufficient theoretical justification is now well complemented by extensive experiments, and it will motivate more theoretical work.\n", "I enjoyed reading this well-written and easy-to-follow paper. The paper builds on the rather old idea of minimizing the empirical vicinal risk (Chapelle et al., 2000) instead of the empirical risk. The authors' contribution is to provide a particular instance of vicinity distribution, which amounts to linear interpolation between samples. This idea of linear interpolation on the training sample to generate additional (adversarial, in the words of the authors) data is definitely appealing to prevent overfitting and improve generalization performance at a mild computational cost (note that this comment does not just apply to deep learning). This notion is definitely of interest to machine learning, and to the ICLR community in particular. I have several comments and remarks on the concept of mixup, listed below in no particular order. My overall opinion on the paper is positive and I stand for acceptance, provided the authors answer the points below. I would especially be interested in discussing those with the authors.\n\n1 - While data augmentation literature is well acknowledged in the paper, I would also like to see a comment on domain adaptation, which is a very closely related topic and of particular interest to the ICLR community.\n\n2 - Paragraph after Eq. 
(1), starting with \"Learning\" and ending with \"(Szegedy et al., 2014)\": I am not so familiar with the term memorization, is this just a fancy way of talking about overfitting? If so, you might want to rephrase this paragraph with terms more used in the machine learning community. When you write \"one trivial way to minimize [the empirical risk] is to memorize the training data\", do you mean output a predictor which only delivers predictions on $X_i$, equal to $Y_i$? If so, this is again not specific to deep learning and I feel this should be a bit more discussed.\n\n3 - I have not found in the paper a clear heuristics about how pairs of training samples should be picked to create interpolations. Picking at random is the simplest however I feel that a proximity measure on the space $\\mathcal{X}$ on which samples live would come in handy. For example, sampling with a probability decreasing as the Euclidean distance seems a natural idea. In any case, I strongly feel this discussion is missing in the paper.\n\n4 - On a related note, I would like to see a discussion on how many \"adversarial\" examples should be used. Since the computational overhead cost of computing one new sample is reasonable (sampling from a Beta distribution + one addition), I wonder why $m$ is not taken very large, yielding more accurate estimates of the empirical risk. A related question: under what conditions does the vicinal risk converge (in expectation for example) to the empirical risk? I think some comments would be nice.\n\n5 - I am intrigued by the last paragraph of Section 5. What do the authors exactly have in mind when they suggest that mixup could be generalized to regression problems? As far as I understood the paper, since $\\tilde{y}$ is defined as a linear interpolation between $y_i$ and $y_j$, this formulation only works for continuous $y$s, like in regression. This formulation is not straightforwardly transposable to classification for example. I therefore am quite confused about the fact that the authors present experiments on classification tasks, with a method that writes for regression.\n\n6 - Writing linear interpolations to generate new data points implicitly makes the assumption that the input and output spaces ($\\mathcal{X}$ and $\\mathcal{Y}$) are convex. I have no clear intuition wether this is a limitation of the authors' proposed method but I strongly feel this should be carefully addressed by a comment in Section 2.", "Theoretical contributions: None. Moreover, there is no clear theoretical explanation for why this approach ought to work. The authors cite (Chapelle et al., 2000) and actually most of the equations are taken from there, but the authors do not justify why the proposed distribution is a good approximation for the true p(x, y). \n\nPractical contributions: The paper introduces a new technique for training DNNs by forming a convex combination between two training data instances, as well as changing the associated label to the corresponding convex combination of the original 2 labels. \n\nExperimental results. The authors show mixup provides improvement over baselines in the following settings:\n * Image Classification on Imagenet. 
CIFAR-10 and CIFAR-100, across architectures.\n * Speech data \n * Memorization of corrupted labels\n * Adversarial robustness (white box and black box attacks)\n * GANs (though quite a limited example, it is hard to generalize from this setting to the standard problems that GANs are used for).\n * Tabular data.\n\nReproducibility: The provided website to access the source code is currently not loading. However, experiment hyperparameters are meticulously recorded in the paper. \n\nKey selling points:\n * Good results across the board.\n * Easy to implement.\n * Not computationally expensive. \n\nWhat is missing:\n * Convincing theoretical arguments for why combining data and labels this way is a good approach. Convex combinations of natural images does not result in natural images. \n * Baseline in which the labels are not mixed, in order to ensure that the gains are not coming from the data augmentation only. Combining the proposed data augmentation with label smoothing should be another baseline.\n * A thorough discussion on mixing in feature space, as well as a baseline which mizes in feature space. \n * A concrete strategy for obtaining good results using the proposed method. For example, for speech data the authors say that “For mixup, we use a warm-up period of five epochs where we train the network on original training examples, since we find it speeds up initial convergence.“ Would be good to see how this affects results and convergence speed. Apart from having to tune the lambda hyperparameter, one might also have to tune when to start mixup. \n * Figure 2 seems like a test made to work for this method and does not add much to the paper. Yes, if one trains on convex combination between data, one expects the model to do better in that regime. \n * Label smoothing baseline to put numbers into perspective, for example in Figure 4. \n\n\n\n", "As per the request of reviewers, we now include a new \"ablation studies\" section in the (Jan 5, 2018) revision (available for download). The new experiments compare mixup with mixing only the inputs (of either random pairs or nearest neighbors, with the pair of samples coming from either the same class or any classes. This includes a previous method called \"SMOTE\"), mixing in feature space, label smoothing (with and without mixing the inputs) and adding Gaussian noise to inputs. The results confirm the advantage of mixup over other design choices.\n\nFor theoretical justifications of the design choices of mixup, please refer to our replies to the reviewers, in particular Reviewer 2 and 3. Thanks for your interests!", "We thank the Anonymous Reviewer 1 for interesting comments and feedback.\n\n1. Domain adaptation is indeed a related problem as one can consider a model trained with Vicinal Risk Minimization will be more robust to small drift in the input distribution. The experience on adversarial examples tends to validate this hypothesis. Indeed, the distance between the original distribution and that of the adversarial examples is typically small. A full study of the suitability of mixup to the broader domain adaptation problems is however beyond the scope of this paper.\n\nWe now include a brief discussion about domain adaptation. Inspired by this question, we brainstormed about the possibility of using mixup for domain adaptation in two ways. \n\n(1) Assume a large source-domain dataset D_s = { (xs_1, ys_1), ..., (xs_N, ys_N) }, and a small target-domain dataset D_t = { (xt_1, yt_1), ..., (xt_n, yt_n) }. 
We are interested in learning D_t with auxiliary knowledge from D_s. Using mixup we could do so by training our classifier on the synthetic pairs:\n\nx = a * xt_i + (1 - a) * xs_j,\ny = a * yt_i + (1 - a) * ys_j,\n\nfor random pairs of indices i \in { 1, …, n }, j \in { 1, …, N }, and a particular distribution for the mixing coefficient a. If a is concentrated around one, this process recovers ERM training on the target domain. Otherwise, mixup produces synthetic examples just outside the target domain, by interpolating into the source domain.\n\n(2) Alternatively, mixup can simply be used as a within-domain data augmentation method in existing domain adaptation algorithms. For example, some recent work (e.g. https://arxiv.org/abs/1702.05464) trains a network to have similar embedding distributions for both the source and the target domains, using shared weights, moment matching or a discriminator loss. With mixup, we can use the same mixing weights lambda (or hyperparameter alpha) for both the source and the target domain samples, and require the learned embeddings to be similar. This forces the model to match the embedding distributions of the two domains in a broader region of the embedding space, potentially improving the transfer performance.\n\n2. Correct, by memorization we mean perfect overfitting, as discussed in (Zhang et al., 2017). We will clarify this issue, as well as mentioning that this is a pathology more general than deep learning.\n\n3. At the core of mixup lies its simplicity: pairs of training examples are chosen at random. We have considered other possibilities based on nearest neighbours, but random pairing was much simpler (it does not require specifying a norm in X), and produced better results. We now dedicate a paragraph in the manuscript to describe our choice, as well as proposing the ones suggested by the reviewer for future work.\n\n4. Mixup does not involve computing adversarial examples. Instead, mixup constructs synthetic examples by interpolating random pairs of points from the training set. New synthetic examples are constructed on-the-fly at a negligible cost for each training iteration. Note that *adversarial* examples are only constructed in our experiment to verify the robustness of mixup networks to adversarial attacks. Adversarial examples are never constructed during training. They are only constructed in that particular experiment at test time, to verify the robustness of each network.\n\nFor some examples of VRM converging to ERM, we suggest the original paper of Chapelle et al. An example discussed in that paper is the equivalence of adding Gaussian perturbation to inputs and ERM with L2 regularization.\n\n5. Mathematically, mixup works for both classification and regression problems: for classification, all our experiments mix labels when parameterized as one-hot continuous vectors. For example: 0.3 * (0, 1, 0) + 0.7 * (1, 0, 0) = (0.7, 0.3, 0). However, we haven't done any experiments on regression problems, and therefore we are interested to see i) if regression performance improves with mixup, and ii) how the regression curves are regularized by mixup. \n\n6. We do not make a convexity assumption, in the sense that linear combinations of samples fall outside the set of natural images, and therefore we are forcing the classifier to give reasonable (mixup) labels \"outside the convex set\". 
Overall, linear interpolation is not a limitation, but a simple and powerful way to inform the classifier about how the label changes in the neighbourhood of an image (“this is one direction along which a ‘cat’ becomes closer to a ‘dog’”, etc). In the case where the input space is more structural (e.g. the space of graphs), a convex combination of the raw inputs may not be valid. However, we can always make the convex combination in the embedding space, which is supposed to be a vector space.", "We thank the Anonymous Reviewer 2 for comments and feedback.\n\nA major concern of Reviewer 2 is that the paper should include \"mixing features\" as a baseline.\n\nFirst of all, we thank Reviewer 2 for raising this point and referring to the SMOTE paper. In the latest revised version (now available for download), we have included a new ablation study section which thoroughly compares mixup against related data augmentation ideas and gives further support to mixup's advantage.\n\nMoreover, we would like to clarify that mixup and previous work have other important differences. Specifically, the SMOTE algorithm only makes convex combinations of the raw inputs between *nearest neighbors of the same class*. The recent work by DeVries & Taylor (2017) also follows the same-class nearest neighbor idea of SMOTE, albeit in the feature space (e.g. the embedding space of an autoencoder). This is in sharp contrast to our proposed mixing strategy, which makes convex combination of randomly drawn raw inputs pairs from the training set. In the revised submission, we highlight these differences and demonstrate that each of them is essential for achieving better performance.\n\nConceptually, a desideratum of data augmentation is that the augmented data should be as diverse as possible while covering the space of data distribution. In the case where the data distribution is a low dimensional manifold (such as the Swiss Roll Dataset), typically the number of data n and the dimensionality of the input space d satisfy d << log(n), it suffices to augment the training set by interpolating the nearest neighbors. However, if on the other hand d >> log(n), as is the case in typical vision and speech classification tasks, nearest neighbors provide insufficient information to recover the geometry of the data distribution, and training samples other than nearest neighbors can provide a lot of additional geometric information of the data distribution. Therefore, compared with interpolating nearest neighbors (of either the same class, or the entire training set), interpolating random pairs of training data provides a better coverage of the data distribution. Empirically, in our new ablation studies, we find that mixing nearest neighbors provides little (if any) improvement over ERM.\n\nAs we have justified the interpolation of random training data pairs from potentially different classes, it remains to decide which loss function to use if the synthetic data is a convex combination of samples from two classes. One choice is to assign a single label to the synthetic sample, presumably using the label of the closer sample from the two inputs used to generate the synthetic sample. However, this choice creates an abrupt change of target around a 50-50 mix, while not being able to distinguish a 55-45 mix from a 95-5 mix. The natural solution to these two problems is to also mix the labels using the same weights as the input mix, which makes sure a 55-45 mix of inputs is more similar to a 45-55 mix, rather than a 95-5 one. 
Empirically, in our new ablation studies, we find that \"only mixing the inputs\" provides some regularization effects (so that the performance of using smaller weight decay improves), but very limited performance gain over ERM.\n\nPlease also refer to the reply to Reviewer 3 for more theoretical justifications.\n\nRegarding the \"minor comments\":\n\nFigure 1(b): - Green: Class 0, Orange: Class 1, Blue shading indicates p(y=1). It is now clarified in the revised version.\n\nTable 1: One epoch of mixup training is the same as one epoch of normal training, with the same number of minibatches and the same minibatch size. The only change in the training loop is the input mixing step and the label mixing step. Therefore, the computational complexity and actually training time remain (almost) the same.\n\nTable 2: For these experiments, the dataset contains a large portion of corrupt labels. In this case, *without proper regularization*, the test error is indeed quite sensitive to the number of epochs, and one can use cross-validation to determine when to stop. However, even if we only consider the best test errors achieved during the training process, mixup still has a significant advantage over dropout, and dropout has a significant advantage over ERM.\n\nTable 2: What is the performance of mixup + dropout?\nThis is a very good question. We conduct additional experiments combining mixup and dropout, both with medium regularization strength. We observe that the best parameter setting of the combined method is comparable with mixup in terms of the best test error during the training process, but outperforms mixup in terms of the test error at the last epoch. This suggests that mixup combined with dropout is even more resistant to corrupt labels. The updated results are available now. We thank Reviewer 2 for raising this interesting question, and will include proper acknowledgement in the final version.", "We thank the Anonymous Reviewer 3 for comments and feedback.\n\nThe following exposition concerns theoretical arguments for the proposed approach, as well as empirical comparisons with mixing only the inputs, mixing in feature space, label smoothing (with and without mixing the inputs). We include all these baselines in an ablation study section in the latest revision (now available for download), giving further support to mixup's advantage. We now explain the theoretical motivations:\n\nOne desideratum of data augmentation is that the augmented data should match the statistics of the training set. This is because augmented data that mismatch the statistics of the training set are not likely to match the statistics of the test set, leading to less effective augmentation. From the perspective of training, using augmented data that deviate too much from the training set statistics will also cause *high training error on the original training set*, and in turn hurt generalization. In the following, we argue that input space interpolation better matches the statistics of the training set.\n\nClassifiers are nothing but functions of the input space. Therefore, similarity between the statistics of the augmented data and those of the data distribution in input space is more important than perceptual quality or semantic interpretability. Perceptual similarity does not correlate well with metric distance in the input space. For example, a blurred image usually looks similar to its original sharp version, while in input space they may have a large (L2) distance. 
On the other hand, by directly interpolating in the input space, our method is bound to not lose or significantly alter the statistical information in the data distribution. Therefore, despite not looking real, the mixup images can be better synthetic data for training classifiers than images from latent space interpolations. Empirically, in our ablation studies, we see a gradual degradation of accuracy when interpolating in higher layers of representation.\n\nAnother desideratum is that the augmented data should be as diverse as possible to cover the space spanned by the training set. In the case where the data distribution is a low dimensional manifold (such as the Swiss Roll Dataset), typically the number of data n and the dimensionality of the input space d satisfy d << log(n), it suffices to augment the training set by interpolating the nearest neighbors. However, if on the other hand d >> log(n), as is the case in typical vision and speech classification tasks, nearest neighbors provide insufficient information for recovering the geometry of the data distribution, and training samples other than nearest neighbors can provide a lot of additional geometric information of the data distribution. Therefore, compared with interpolating nearest neighbors, interpolating random pairs of training data provides a better coverage of the data distribution. Empirically, in our ablation studies, we find that mixing nearest neighbors provides little (if any) improvement over ERM.\n\nAs we have justified the interpolation of training data from potentially different classes, it remains to decide which synthetic label to use if the synthetic input is a convex combination of samples from two classes. One choice is to assign a single label to the synthetic sample, presumably using the label of the closer sample from the two inputs used to generate the synthetic sample. However, this choice creates an abrupt change of target around a 50-50 mix, while not being able to distinguish a 55-45 mix from a 95-5 mix. The natural solution to these two problems is to also mix the labels using the same weights, which makes sure a 55-45 mix of inputs is more similar to a 45-55 mix, rather than a 95-5 one. Empirically, in our ablation studies, we find that only mixing the inputs provides some regularization effects, but very limited performance gain over ERM.\n\nOne might also consider replacing the label interpolation in mixup with label smoothing or similar target space regularizers (e.g. Pereyra et al., 2017). However, label smoothing and similar work are not designed for between-class input interpolation in that it assigns a small but fixed amount of probability to every class that is not the correct class. Therefore, this loss also has the two problems mentioned above. Empirically, in our ablation studies, we find that adding label smoothing to ERM or replacing label interpolation with label smoothing in mixup provide only limited performance gain over ERM.\n\nOther comments:\n\n\"warm-up\": we believe using SGD with momentum and the standard learning rate schedule (i.e. reducing the learning rate when training error plateaus) is sufficient for good performance.\n\n\"Figure 2\": it shows that the ERM model has improper behaviors not only in adversarial directions but also between training points, which to the best of our knowledge is not commonly known. It also provides a direct motivation for mixup.", "I am an author of Paper275.\nI found my submission seems tightly related to this work. 
So I put a link to my submission for your information, \nData Augmentation by Pairing Samples for Images Classification\nhttps://openreview.net/forum?id=SJn0sLgRb&noteId=SJn0sLgRb\n" ]
[ -1, -1, 6, 7, 6, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "BJrLsP6Ez", "S13cNOaXz", "iclr_2018_r1Ddp1-Rb", "iclr_2018_r1Ddp1-Rb", "iclr_2018_r1Ddp1-Rb", "iclr_2018_r1Ddp1-Rb", "H1ZcJ0rgz", "HJrOW5zeG", "S1EEd8e-G", "iclr_2018_r1Ddp1-Rb" ]
iclr_2018_HyiAuyb0b
TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning
Our understanding of reinforcement learning (RL) has been shaped by theoretical and empirical results that were obtained decades ago using tabular representations and linear function approximators. These results suggest that RL methods that use temporal differencing (TD) are superior to direct Monte Carlo estimation (MC). How do these results hold up in deep RL, which deals with perceptually complex environments and deep nonlinear models? In this paper, we re-examine the role of TD in modern deep RL, using specially designed environments that control for specific factors that affect performance, such as reward sparsity, reward delay, and the perceptual complexity of the task. When comparing TD with infinite-horizon MC, we are able to reproduce classic results in modern settings. Yet we also find that finite-horizon MC is not inferior to TD, even when rewards are sparse or delayed. This makes MC a viable alternative to TD in deep RL.
accepted-poster-papers
This is an interesting piece of work that provides solid evidence on the topic of bootstrapping in deep reinforcement learning.
val
[ "B1ibc3dlM", "HJwSvpOxf", "Sku18wYgz", "rkr8pJ_fG", "H1-LCkuff", "Bylqo1OGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper includes several controlled empirical studies comparing MC and TD methods in predicting of value function with complex DNN function approximators. Such comparison has been carried out both in theory and practice for simple low dimensional environments with linear (and RKHS) value function approximation showing how TD methods can have much better sample complexity and overall performance compared to pure MC methods. This paper shows some results to the contrary when applying RL to complex perceptual observation space.\n\nThe main results include:\n(1) In a rollout update a mix of MC and TD update (i.e. a rollout of > 1 and < horizon) outperforms either extreme. This is inline with TD-lambda analysis in previous work.\n(2) Pure MC methods can outperform TD methods when the rewards becomes noisy.\n(3) TD methods can outperform pure MC methods when the return is mostly dominated by the reward in the terminal state.\n(4) MC methods tend to degrade less when the reward signal is delayed.\n(5) Somewhat surprising: MC methods seems to be on-par with TD methods when the reward is sparse and even longer than the rollout horizon.\n(6) MC methods can outperform TD methods with more complex and high dimensional perceptual inputs.\n\nThe authors conjecture that several of the above observations can be explained by the fact that the training target in MC methods is \"ground truth\" and do not rely on bootstrapping from the current estimates as is done in a TD rollout. They suggest that training on such signal can be beneficial when training deep models on complex perceptual input spaces.\n\nThe contributions of the paper are in parts surprising and overall interesting. I believe there are far more caveats in this analysis than what is suggested in the paper and the authors should avoid over-generalizing the results based on a few domains and the analysis of a small set of algorithms. Nonetheless I find the results interesting to the RL community and a starting point to further analysis of the MC methods (or adaptations of TD methods) that work better with image observation spaces. Publishing the code, as the authors mentioned, would certainly help with that. \n\nNotes:\n- I find the description of the Q_MC method presented in the paper very confusing and had to consult the reference to understand the details. Adding a couple of equations on this would improve the readability of the paper.\n\n- The first mention of partial observability can be moved to the introduction.\n\n- Adding results for m=3 to table 2 would bring further insight to the comparison.\n\n- The results for the perceptual complexity experiment seem contradictory and inconclusive. One would expect Q_MC to work well in Grid Map domain if the conjecture put forth by the authors was to hold universally.\n\n- In the study on reward sparsity, although a prediction horizon of 32 is less than the average steps needed to get to a rewarding state, a blind random walk might be enough to take the RL agent to a close-enough neighbourhood from which a greedy MC-based policy has a direct path to the goal. What is missing from this picture is when a blind walk cannot reach such a state, e.g. when a narrow corridor is present in the environment. Such a case cannot be resolved by a short horizon MC method. 
In other words, a sparse reward setting is only \"difficult\" if getting into a good neighbourhood requires long term planning and cannot be resolved by a (pseudo) blind random walk.\n\n- The extrapolation of the value function approximator can also contribute to why the limited horizon MC method can see beyond its horizon in a sparse reward setting. That is, even if there is no way to reach a reward state in 32 steps, an MC value function approximation with horizon 32 can extrapolate from similar looking observed states that have a short path to a rewarding state, enough to be better than a blind random walk. It would have been nice to experiment with increasing model complexity to study such effect. ", "This paper revisits a subject that I have not seen revisited empirically since the 90s: the relative performance of TD and Monte-Carlo style methods under different values for the rollout length. Furthermore, the paper performs controlled experiments using the VizDoom environment to investigate the effect of a number of other environment characteristics, such as reward sparsity or perceptual complexity. The most interesting and surprising result is that finite-horizon Monte Carlo performs competitively in most tasks (with the exception of problems where terminal states play a big role (it does not do well at all on Pong!), and simple gridworld-type representations), and outperforms TD approaches in many of the more interesting settings. There is a really interesting experiment performed that suggests that this is the case due to finite-horizon MC having an easier time with learning perceptual representations. They also show, as a side result, that the reward decomposition in Dosvitskiy & Koltun (oral presentation at ICLR 2017) is not necessary for learning a good policy in VizDoom.\n\nOverall, I find the paper important for furthering the understanding of fundamental RL algorithms. However, my main concern is regarding a confounding factor that may have influenced the results: Q_MC uses a multi-headed model, trained on different horizon lengths, whereas the other models seem to have a single prediction head. May this helped Q_MC have better perceptual capabilities?\n\nA couple of other questions:\n- I couldn't find any mention of eligibility traces - why?\n- Why was the async RL framework used? It would be nice to have a discussion on whether this choice may have affected the results.", "The authors present a testing framework for deep RL methods in which difficulty can be controlled along a number of dimensions, including: reward delay, reward sparsity, episode length with terminating rewards, binary vs real rewards and perceptual complexity. The authors then experiment with a variety of TD and MC based deep learners to explore which methods are most robust to increases in difficulty along these dimensions. The key finding is that MC appears to be more robust than TD in a number of ways, and in particular the authors link this to domains with greater perceptual challenges. \n\nThis is a well motivated and explained paper, in which a research agenda is clearly defined and evaluated carefully with the results reflected on thoughtfully and with intuition. The authors discover some interesting characteristics of MC based Deep-RL which may influence future work in this area, and dig down a little to uncover the principles a little. The testing framework will be made public too, which adds to the value of this paper. 
I recommend the paper for acceptance and expect it will garner interest from the community.\n\nDetailed comments\n • [p4, basic health gathering task] \"The goal is to survive and maintain as much health\nas possible by collecting health kits...The reward is +1 when the agent collects a health kit and 0 otherwise.\" The reward suggests that the goal is to collect as many health kits as possible, for which surviving and maintaining health are secondary.\n • [p4, Delayed rewards] It might be interesting to have a delay sampled from a distribution with some known mean. Otherwise, the structure of the environment might support learning even when the reward delay would otherwise not.\n • [p4, Sparse rewards] I am not sure it is fair to say that the general difficulty is kept fixed. Rather, the average achievable reward for an oracle (that knows whether health packs are) is fixed.\n • [p6] \"Dosovitskiy & Koltun (2017) have not tested DFP on Atari games.\" Probably fairer/safer to say: did not report results on Atari games.\n", "We thank the reviewer for the careful review and useful suggestions.\n\n> Q_MC uses a multi-headed model, trained on different horizon lengths, whereas the other models seem to have a single prediction head. May this helped Q_MC have better perceptual capabilities?\n\nAs we discuss in the supplement section \"Difference between asynchronous n-step Q and Q_MC\", n-step Q learning relies on the use of multiple rollouts within one batch. The only way to use multiple rollouts in a finite horizon MC setting is by having multiple heads. Therefore we used the better variants of both algorithms throughout the paper. To make sure that the multiple heads are not the reason for the better perceptual capabilities, we added perception learned by the 1-head Q_MC algorithm to the Table 3, receiving similar results to the multi head perception.\n\n> I couldn't find any mention of eligibility traces - why?\n\nEligibility traces are used in order to implement the TD(lambda) algorithm in the backward view. In our paper we interpolate between MC and TD using n-step Q learning with different n values instead of TD(lambda). There are multiple reasons for this decision. First, most of the recent RL algorithms use a forward view implementation. Second, recent methods achieve best performance using RMS or Adam optimizers, but algorithms with eligibility traces are based on stochastic gradient descent and do not trivially carry over to other optimizers. Third, van Seijen has shown that standard TD(lambda) with eligibility traces does not perform well when combined with non-linear function approximation (van Seijen, Effective Multi-step Temporal-Difference Learning for Non-Linear Function Approximation 2016). Therefore, we decided to focus on algorithms that do not use eligibility traces.\n\n> Why was the async RL framework used?\n\nQ_MC and n-step Q (n>1) are both on-policy algorithms. We follow the asynchronous training framework for on-policy RL algorithms from Mnih at al. 2016. Asynchronous training allows us to run experiments efficiently on CPUs, which are more easily available than GPUs. Moreover, we have found that asynchronous Q_MC achieves similar performance as synchronous DFP, therefore we expect no major impact of this implementation detail on the results.\n", "We thank the reviewer for the valuable comments and detailed suggestions. \n\n> I find the description of the Q_MC method presented in the paper very confusing and had to consult the reference to understand the details. 
Adding a couple of equations on this would improve the readability of the paper.\n\nThe equations describing the Q_MC method were in the \"Network details\" section of the supplement material. We have renamed the section into \"Q_MC and n-step Q details\" and refer to the section in the main paper to make it easier to find this information.\n\n> The first mention of partial observability can be moved to the introduction.\n\nWe changed the introduction accordingly. \n\n> Adding results for m=3 to table 2 would bring further insight to the comparison.\n\nm=3 results were added to the supplement.\n\n> The results for the perceptual complexity experiment seem contradictory and inconclusive. One would expect Q_MC to work well in Grid Map domain if the conjecture put forth by the authors was to hold universally.\n\nWe think that the grid worlds were presented in the paper in a misleading manner. The two grid worlds are not clearly different in their perceptual difficulty. In one of them the agent receives its location and locations of health kits as 2D vectors of coordinates (sorted by distance to the agent) and in the other one as k-hot vectors. In both cases, the relevant information is readily available. It is not obvious which of these representations is easier for a deep network to process. We have changed the grid world names to \"Coord. Grid\" and \"k-hot Grid\", modified the Figure 4 caption, and adjusted the paper text to clarify the grid tasks results.\n\n> What is missing from this picture is when a blind walk cannot reach such a state, e.g. when a narrow corridor is present in the environment.\n\nWe agree that this is an interesting problem. However, if a random agent is never reaching a reward, both TD and MC cannot improve the policy. Both rely on receiving a reward for the Q_target to start improving. We believe this problem is more related to improving on the epsilon-greedy exploration or introducing auxiliary rewards encouraging exploration, and less related to the comparison between TD and MC algorithms. One example of such a problem is the Pitfall Atari game where a random agent is unable to reach any positive rewards. To the best of our knowledge, so far no epsilon-greedy-based algorithm was able to reach a positive average reward, as for example seen for multiple algorithms in Figure 14 in Bellemare et al. \"A Distributional Perspective on Reinforcement Learning\", 2017.\n", "We thank the reviewer for the detailed review and positive feedback. \nWe have adjusted the paper according to the comments.\n\n> It might be interesting to have a delay sampled from a distribution with some known mean.\n\nWe performed additional experiments with a uniformly sampled delay in the intervals of [2,6] and [6,10]. The results were nearly identical to the according experiments of 4 and 8 step delay." ]
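To make the contrast between bootstrapped and Monte Carlo targets discussed above concrete, a generic sketch of the two kinds of value target is given below: the n-step target sums the first n rewards and then bootstraps from the current Q estimate, whereas the finite-horizon Monte Carlo target simply sums rewards over a fixed horizon with no bootstrapping. The discounting shown here is a generic assumption for illustration, not a claim about the exact setup used in the paper.

```python
def n_step_target(rewards, q_bootstrap, n, gamma=0.99):
    """n-step (TD-style) target: discounted sum of the first n rewards plus a
    bootstrapped value estimate q_bootstrap at step n (skipped if the episode ends first)."""
    target = sum(gamma ** k * r for k, r in enumerate(rewards[:n]))
    if n < len(rewards):
        target += gamma ** n * q_bootstrap
    return target

def finite_horizon_mc_target(rewards, horizon, gamma=0.99):
    """Finite-horizon Monte Carlo target: discounted sum of rewards over a fixed
    horizon, with no bootstrapping from a learned value estimate."""
    return sum(gamma ** k * r for k, r in enumerate(rewards[:horizon]))
```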
[ 7, 7, 7, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HyiAuyb0b", "iclr_2018_HyiAuyb0b", "iclr_2018_HyiAuyb0b", "HJwSvpOxf", "B1ibc3dlM", "Sku18wYgz" ]
iclr_2018_ry1arUgCW
DORA The Explorer: Directed Outreaching Reinforcement Action-Selection
Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose E-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using E-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state of the art performance in the Freeway Atari 2600 game.
accepted-poster-papers
This is a very interesting paper that also seems a little underdeveloped. As noted by the reviewers, it would have been nice to see the idea applied to domains requiring function approximation to confirm that it can scale -- the late addition of Freeway results is nice, but Freeway is also by far the simplest exploration problem in the Atari suite. There also seems to be a confusion between methods such as UCB, which explore/exploit, and purely exploitative methods. The case gamma_E > 0 is also less than obvious. Given the theoretical leanings of the paper, I would strongly encourage the authors to focus on deriving an RMax-style bound for their approach.
train
[ "Hy0gemvxf", "BJeqeaOgM", "r1EghdKeM", "HkdCfLsXz", "Sy5-emIQf", "SJ9GoksGG", "SJ4ghLMzz", "r1rjjLMff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author" ]
[ "\n\nThe paper proposes a novel way for trading of exploration and exploitation in model-free reinforcement learning. The idea is to learn a second (kind of) Q function, which could be called E-function, which captures the value of exploration (E-value). In contrast to the Q-function of the problem at hand, the E-function assumes no preferences among actions. \nThis is makes sense in my opinion as exploration is exactly “no preferences among actions”. \n\nActually, to be more precise, the paper shows that the logarithm of E-Values can be thought of as a generalization of visit counters, with propagation\nof the values along state-action pairs. This is important, as E-values should actually decrease with repetition. Moreover, the paper shows that by using counters for stochastic action-selection rules commonly employed within the RL community, for every stochastic rule there exist equivalent deterministic rules. Once turned to deterministic counter-based rules, it is again possible improve them using E-values. This provides a nice story for a simple (in a positive sense) approach to tackle the exploration-exploitation tradeoff. The experimental results demonstrate this is a sufficient number of domains. To summarize, for an informed outsider such as the reviewer, the paper makes a simple but strong contribution to an important problem. Overall the paper is well writing and structured. ", "This paper presents an exploration method for model-free RL that generalizes the counter-based exploration bonus methods and takes into account long term exploratory value of actions rather than a single step look-ahead. This generalization is achieved by relying on the convergence rate of SARSA updates on an auxiliary MDP.\n\nThe method presented in the paper trains a parallel \"E-value\" MDP, with initial value of 1 for all state-action pairs. It applies SARSA (on-policy) update rule to the E-value MDP, where the acting policy is selected on the original MDP. While the E-value MDP is training, the proposed method uses a 1/log transformation applied to E-values to get the corresponding exploration bonus term for the original MDP. This bonus term is shown to be equivalent counter-based methods for finite MDPs when the discount factor of the E-MDP is set to 0. The paper has minimal theoretical analysis of the proposed algorithm, essentially only showing convergence with infinite visiting. In that regard, the presented method seems like a useful heuristic with anecdotal empirical benefits.\n\nWhat is crucially lacking from the paper is any reference to model-free Bayesian methods that have very similar intuition behind them: taking into account the long term exploratory benefits of actions (passed on through the Bayesian inference). A comparison would have been trivial to do (with a generic non-informative prior) for the finite MPD setting (section 3.4). Even for the function approximation case one could use Gaussian process methods as the Bayesian baseline. There are also several computationally tractable approximations of Bayesian RL that can be used as baseline for empirical analysis.\n\nIt would have also been nice to do some analysis on how the update rule in a function approximation case is affecting the bonus terms. Unlike the finite case, updates to the value of one E-value can change the value for another state-action pair and the convergence could be faster than (1-alpha)^n. Given the lack of any theory on this, an empirical analysis is certainly valuable. 
(Update: experiment added in the a later revision to study this effect)\n\nNotes:\n- The plots are horrible in a print. I had to zoom 400% into the PDF file to be able to read the plots. Please scale them at least by 200% and use a larger font for the legends.\n\n- Add a minimal description of the initial setup for E-value neural network to section 4.1 (i.e. how the initializing is achieved to have a constant value for all state-action pairs as described in the appendix).\n\n* Note: This review and rating has been partially revised with the updates to the paper after the initial comments. ", "The paper proposes an approach to exploration based on initializing a value function to 1 everywhere, then letting the value decay back toward zero as the state space is explored. I like the idea a lot. I don't really like the paper, though. I'd really like to see a strong theoretical and/or empirical justification for it, and both are lacking. On the theoretical side, can a bound be proven for this approach, even in the tabular case? On the empirical side, there are more (and more recent!) testbeds that have come to define the field---the mountain car problem is just not sufficient to convincingly argue that the method scales and generalizes. My intuition is that such an approach ought to be effective, but I really want to see additional evidence. Given the availability of so many RL testbeds, I worry that it had been tried but failed.\n\nDetailed comments:\n\n\"Where γ is\" -> \", <newline> where γ is\".\n\n\"The other alternative\" -> \"The alternative\"?\n\n\"without learning a model (Mongillo et al., 2014).\": Seems like an odd choice for a citation for model-free RL. Perhaps select \nthe paper that first used the term? Or an RL survey?\n\nRight before Section 1.1, put a period after the Q-learning update equation.\n\n\"new states may\" -> \"new states, may\".\n\n\"such approaches leads\" -> \"such approaches lead\".\n\n\"they still fails\" -> \"they still fail\".\n\n\"evaluated with respect only to its immediate outcome\": Not so. Several of the cited papers use counters to determine which \nstates are \"known\" and then solve an MDP to direct exploration past immediate outcomes.\n\n\" exploration bonus(Little & Sommer, 2014)\" -> \" exploration bonus (Little & Sommer, 2014)\".\n\n\"n a model-free settings.\" -> \"n model-free settings.\".\n\n\" Therefore, a satisfying approach for propagating directed exploration in model-free reinforcement learning is still missing. \": I think you should cite http://research.cs.rutgers.edu/~nouri/papers/nips08mre.pdf , which also combines a kind of counter \nidea with function approximation to improve exploration.\n\n\"initializing E-values to 1\": I like this idea. I wonder if one could prove bounds similar to the delayed Q-learning algorithm with \nthis approach. It is reminiscent of https://arxiv.org/pdf/1205.2606.pdf , which also drives exploration by beginning with an \noverly optimistic estimate and letting the data (in a function approximation setting) decay the influence of this initialization.\n\n\"So after visited n times\" -> \"So after being visited n times\".\n\n\"figure 1a\" -> \"Figure 1a\". 
(And, in other places.)\n\n\"An important property of E-values is that it decreases over repetition\" -> \"An important property of E-values is that they decrease over repetition\".\n\n\"t utilize counters, can\" -> \"t utilize counters can\".\n\n\" hence we were interested a convergence measure\": Multiple problems in this sentence, please fix.\n\nFigure 2: How many states are in this environment? Some description is needed.\n\nFigure 3: The labels in this figure (and all the figures) are absurdly small and, hence, unreadable.\n\n\"now turn to show that by using counters,\" -> \"now turn to showing that, by using counters,\".\n\nTheorem 3.1: I'm not quite getting why we want to take a stochastic rule and make it deterministic. Note that standard PAC-MDP algorithms choose deterministically. It's not clear why we'd want to start from a stochastic rule.\n\n\" models(Bellemare\" -> \" models (Bellemare\".\n\n\"Efficient memory-based learning for robot control\": This reference is incomplete. (I'm skeptical that it represents the first use of this problem, but I can't check it.)\n\n\"Softmax exploration fail\" -> \"Softmax exploration fails\".\n\n\"whom also analyzed\" -> \"who also analyzed\".\n\n\"non-Markovity\" -> \"non-Markovianness\"?\n", "We thank the reviewers again for their hard work and useful comments, which have significantly improved the manuscript. In the revised manuscript we have addressed all remaining suggestions. Specifically, we scaled the figures, fixed typos and added additional references, as suggested by Reviewer2. ", "We were pleased to see that DORA algorithm was found of sufficient interest for the Reproducibility Challenge, and we would like to thank Drew Davis, Jiaxuan Wang, and Tianyang Pan (DWP) for their effort and comments.\n\nDWP report that while they were able to reproduce our results in the tabular environments, they failed to reproduce the MountainCar function approximation results. In fact, their figures imply that within 3000 episodes, neither LLL nor standard DQN reach satisfying levels of performance.\n\nGoing over their code, we find that their failure to reproduce our results (and hence their claim for “misreported hyperparameter”) stems from the fact that DWP attempted to solve a MountainCar problem, in which episode length is 200 steps. By contrast, we followed Sutton and Barto's definition of the MountainCar problem (see fig. 10.2 in the 2nd edition of 'Reinforcement Learning') and used episodes of length 1000 steps. This point will be clarified in the revised manuscript. In addition to this important difference, the temperature parameter used by DWP was T=1 (More precisely, DWP implementation does not explicitly include a temperature parameter). By contrast, as explained in the text, we fitted the temperature parameter to optimize learning (and used T=0.5). This, however, is not an essential point. We tested DWP code on the 1,000-steps episode MountainCar problem and found that the LLL versions of both e-greedy and softmax (for various T values of T=0.1,0.5,1) learn faster than their DQN equivalents.\n\nAnother issue raised by DWP is that “DORA’s experiments using function approximation put DQN into a disadvantageous position (not a fair comparison). We are able to adjust the setting to get much better result using DQN.” Simulating the code of DWP, without changing any parameters, we find that the example presented in the modified setting of DWP is not representative. On average, LLL does better than its stochastic counterpart. 
This is particularly pronounced when considering softmax action-selection. Simulating DWP code, we find that stochastic softmax fails to learn within 3000 episodes whereas its LLL counterpart can learn the task within less than 2000 episodes. Finally, we would like to comment that DWP consider very different settings from the ones we considered. In their simulations, they changed the network architecture, the non-linearity of the activation function, the update frequency of the parameters of the network, the learning rate and more. While we agree that this (modified settings) might be a better baseline (in the Appendix of the revised manuscript we now use a different baseline -- linear approximation with tile coding features), the finding that LLL agents work well in these settings is, in fact, another evidence for the robustness of our approach. \n\nFinally, we would like to note that following the reviewers' comments, the performance of our approach in the function-approximation case is now demonstrated on a larger scale problem, namely FreeWay (see revised manuscript and response to reviewers). \n\nAll code will become available on GitHub after the manuscript is accepted, this is to keep the anonymity of the writers.\n", "The Directed Outreaching Reinforcement Action-Selection (DORA) (\\cite{Dora}) paper had five primary experiments and we replicated all five. Additionally, we replicated an experiment located in the appendix of the paper to further investigate how E-values compared to optimistic algorithms. For each of these experiments, the authors did not provide code or additional resources beyond the originally submitted paper. Beyond the experiments found in the original submission, we performed two additional experiments under function approximation settings: one in the bridge environment and one in the cart pole environment. We found that tabular environments are reproducible while the function approximation task is brittle. We perform additional experiments and identified misreported hyperparameter as the primary cause of failure in continuous state space.\n\n\nFull write up and results available at https://github.com/nathanwang000/deep_exploration_with_E_network\n\n(completed by Drew Davis, Jiaxuan Wang, and Tianyang Pan)", "We thank the reviewer for the insightful comments and would like to respond to the main issues that the reviewer raises. We will address all remaining issues, including scaling the figures and the legend fonts as requested, in the final version.\n\nRegarding the comment about the \"anecdotal empirical benefits\" of the presented method, please refer to revised Fig. 6, where we demonstrated the benefits of our method in the Freeway Atari 2600 game, which is known as a hard exploration problem (see also response to AnonReviewer 2).\n\nRegarding \"reference to model-free Bayesian methods\", we are not sure that we correctly understood the comment. The Bayesian approach (e.g, using GPTD) and the use of E-values are not mutually exclusive. While E-values are a proxy measure of uncertainty, the more important point is that they support effective action-selection rules. The question of choosing actions to promote exploration is left undecided in the Bayesian TD algorithms, where practical implementations adopt some stochastic exploration policies, e.g. e-greedy or some close variants (as in Engel et al., Proc. ICML 2003 and Ghavamzadeh et al., Foundations and Trends in Machine Learning, 8, 2015). 
Put differently, Bayesian methods can become even more powerful by effectively choosing actions. For example, consider the bridge problem discussed in our paper: clearly, any e-greedy exploration scheme on this problem will have very poor performance, assuming no prior knowledge. If actions are only selected based on the expected value (i.e without taking into account the posterior’s variance) than we are basically back in the \"regular\" Q-learning, in terms of exploration. It will be interesting to compare the uncertainty measured by E-values and the posterior value-function variance. However, this goes beyond the scope of this paper.\n\nWe agree with the reviewer that an evaluation of the learned E-values in the function-approximation scenario is an interesting contribution. In response to the reviewer comment, we now address this question in for the MountainCar problem (Appendix D, Figs. 9-13). We show that the logarithm of E-values with gamma_E=0 well approximate the visit counters, and that the logarithm of E-values with gamma_E>0 are a linear function of the visit counters. These results support for the effectiveness of E-values as estimators of visit-counters/generalized visit-counters in continuous MDPs.", "We thank the reviewer for the hard work and detailed comments. We will address all of them in the final version, in particular we will scale the figures and legend fonts as requested. Here we address the major comments:\n\nRegarding the comment that \"mountain car problem is just not sufficient to convincingly argue that the method scales and generalizes\", the MountainCar problem was chosen because of its simplicity, providing a better insight to the algorithm (see also response to AnonReviewer 3). However, we agree with the reviewer that the problem is too simple to convincingly argue that the method scales and generalizes. Therefore, in the revised manuscript (Fig. 6) we tested our approach using the Freeway Atari 2600 game, which is known as a hard exploration problem (Bellemare et al., 2016). Using standard DQN technique (Mnih et al., 2015) without any sophisticated additions or tuning any parameter, we show that the performance of a model that incorporates an E-value exploration bonus exceeds state-of-the-art performance (Bellemare et al., 2016) both in learning speed and in computational efficiency (in fact simulation of the counters suggested by (Bellemare et al., 2016) is computationally so demanding that in the current draft we only show the results up to 2M steps. This simulation is expected to take a few more days to complete).\n\nRegarding the comment that \"Several of the cited papers use counters to determine which states are \"known\" and then solve an MDP to direct exploration past immediate outcomes\", to the best of our understanding, they do so in a model-based framework, in which the model parameters are learned. Our approach is unique in being model-free.\n\nRegarding the question of \"why one may want to take a stochastic rule and make it deterministic\", we agree with the reviewer that in many cases, deterministic algorithms are preferable. Nevertheless, for various reasons, stochastic rules -- in particular epsilon-greedy -- are commonly used in practice. By mapping the stochastic rules to a deterministic rule we show how one can make the minimal change to the stochastic algorithm, which will incorporate the E-values (in the case of gamma_E=0), while preserving exactly the same level of exploration. 
We consider this point just as an example of utilizing E-values as an exploration bonus.\n" ]
[ 7, 6, 6, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry1arUgCW", "iclr_2018_ry1arUgCW", "iclr_2018_ry1arUgCW", "iclr_2018_ry1arUgCW", "SJ9GoksGG", "iclr_2018_ry1arUgCW", "BJeqeaOgM", "r1EghdKeM" ]
iclr_2018_Skw0n-W0Z
Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Model-free reinforcement learning (RL) has been proven to be a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.
accepted-poster-papers
There is a concern from one of the reviewers that the paper needs deeper analysis. On the other hand, applying finite horizon techniques to deep RL is relatively unexplored, and the paper does provide some interesting results in that direction.
train
[ "BJ25DYuxG", "rJyvKScxz", "rkvII7pef", "rJ9zsPaXM", "BktRDPpXz", "HkCQvwaQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper universal value function type ideas to learn models of how long the current policy will take to reach various states (or state features), and then incorporates these into model-predictive control. This looks like a reasonable way to approach the problem of model-based RL in a way that avoids the covariate shift produced by rolling learned transition models forward in time. Empirical results show their method outperforming Hindsight Experience Replay (which looks quite bad in their experiments), DDPG, and more traditional model-based learning. It also outperforms DDPG quite a bit in terms of sample efficiency on a real robotic arm. They also show the impact of planning horizon on performance, demonstrating a nice trade-off.\n\nThere are however a couple of relevant existing papers that the authors miss referencing / discussing:\n- \"Reinforcement Learning with Unsupervised Auxiliary Tasks\" (Jarderberg et al, ICLR 2017) - uses predictions about auxiliary tasks, such as effecting maximum pixel change, to obtain much better sample efficiency.\n- \"The Predictron: End-To-End Learning and Planning\" (Silver et al, ICML 2017), which also provides a way of interpolating between model-based and model-free RL.\n\nI don't believe that these pieces of work subsume the current paper, however the authors do need to discuss the relationship their method has with them and what it brings.\n\n** UPDATE Jan 9: Updated my rating in light of authors' response and updated version. I recommend that the authors find a way to keep the info in Section 4.3 (Dynamic Goal and Horizon Resampling) in the paper though, unless I missed where it was moved to. **\n", "\nThis paper proposes a \"temporal difference model learning\", a method that aims to combine the benefits of model-based and model-free RL. The proposed method essentially learns a time-varying goal-conditional value function for a specific reward formulation, which acts as a surrogate for a model in an MPC-like setting. The authors show that the method outperforms some alternatives on three continuous control domains and real robot system.\n\nI believe this paper to be borderline, but ultimately below the threshold for acceptance. On the positive side, there are certainly some interesting ideas here: the notion of goal-conditioned value functions as proxies for a model, and as a means of merging model-free and model-based approaches is very really interesting, and hints at a deeper structure to goal-conditioned value functions in general. Ultimately, though, I feel that there are two main issues that make this research feel as though it is still ultimately in the earlier stages: 1) the very large focus on the perspective that this approach is unifying model-based and model-free RL, when it fact this connection seems a bit tenuous; and 2) the rather lackluster experimental results, which show only marginal improvement over purely model-based methods (at the cost of much additional complexity), and which make me wonder if there's an issue with their implementation of prior work (namely the Highsight Experience Replay algorithm).\n\nTo address the first point, although the paper stresses it to a very high degree, I can't help but feel that the connection that the claimed advance of \"unifying model-based and model-free RL\" is overstated. 
As far as I can tell, the connection is as follows: the learned quantity here is a time-varying goal-conditioned value function, and under some specific definition of reward, we can interpret the constraint that this value function equal zero as a proxy for the dynamics constraint in MPC. But the exact correspondence between this and the MPC formulation only occurs for a horizon of size zero: longer horizons require a multi-step MPC for the definition of the model-free and model-based correspondence. The fact that the action selection of a model-based method and this approach have some function which looks similar (but only under certain conditions), just seems like a fairly odd connection to highlight so heavily.\n\nRather, it seems to me that what's happening here is really quite simple: the authors are extending goal-conditioned value functions to the case of non-stationary finite horizon value functions (the claimed \"key insight\" in eq (5) is a completely standard finite-horizon MDP formulation). This seems to describe perfectly well what is happening here, and it does also seem intuitive that this provides an advantage over stationary goal-conditioned value functions: just as goal conditioned value functions offer the advantage of considering \"every state as a goal\", this method can consider \"every state as a goal for every time horizon\". This seems interesting enough on its own, and I admit I don't see the need for the method to be yet another claimed unification of model-free and model-based RL.\n\nI would also suggest that the authors look into the literature on how TD methods implicitly learn models (see e.g. Boyan 1997 \"Least-squares temporal difference learning\", and Parr et al., 2007 \"An analysis of linear models...\"). In these works it has been shown that least squares TD methods (at least in the linear feature setting), implicitly learn a dynamics model in feature space, but only the \"projection\" of the reward function is actually needed to learn the TD weights. In building the proposed value functions, it seems like the authors are effectively solving for multiple rewards simultaneously, which would effectively preserve the learned dynamics model. I feel like this may be an interesting line of analysis for the paper if the authors _do_ want to stick with the notion of the method as unifying model-free and model-based RL.\n\nAll these points may ultimately just be a matter of interpretation, though, if not for the second issue with the paper, which is that the results seem quite lackluster, and the claimed performance of HER seems rather suspicious. But instead, the authors evaluate the algorithm on just three continuous control tasks (and a real robot, which is more impressive, but the task here is still so extremely simple for a real robot system that it really just qualifies as a real-world demonstration rather than an actual application). And in these three settings, a model-based approach seems to work just as well on two of the tasks, and may soon perform just as well after a few more episodes on the last task (it doesn't appear to have converged yet). And despite the HER paper showing improvement over traditional policy approaches, in these experiments plain DDPG consistently performs as well or better than HER. ", "This is an interesting direction. There is still much to understand about the relative strengths and limitations of model based and model free techniques, and how best to combine them, and this paper discusses a new way to address this problem. 
The empirical results are promising and the ablation studies are good, but it also makes me wonder a bit about where the benefit is coming from.\n\nCould you please put a bit of discussion in about the computational and memory cost. TDM is now parameterized with (state, action (goal) state, and the horizon tau). Essentially it is now computing the distance to each possible goal state after starting in state (s,a) and taking a fixed number of steps. \nIt seems like this is less compact than learning a 1-step dynamics model directly.\nThe results are better than models in some places. It seems likely this is because the model-based approach referenced doesn’t do multi-step model fitting, but essentially TDM is, by being asked to predict and optimize for C steps away. If models were trained similarly (using multi-step loss) would models do as well as TDM?\nHow might this be extended to the stochastic setting?\n", "Thank you for your feedback!\n\nTo address the reviewer’s concerns about the experiments, we ran our algorithm on more difficult tasks and have updated the experimental results. We would like to emphasize is that a primary goal of our method is to achieve both sample-efficiency and good final performance. While the asymptotic performance of TDMs may not always be far better than that the model-free methods, TDMs are substantially more sample-efficient, as shown in the updated Figure 2. We consider this important, as sample efficiency is important in many real-world tasks, such as robotics, where collecting data is expensive. For the model-based baseline, a concern brought up was that, “model-based approach…may soon perform just as well after a few more episodes on the last task (it doesn't appear to have converged yet).” We ran the half-cheetah experiments for more iterations and see that the trend is the same: TDM converges to a better solution than the model-based baseline, by a significant margin. We also evaluated our method on two substantially more complex 3D locomotion tasks using the “ant” quadrupedal robot. We tested two tasks: asking the ant to run to a target location, and asking it to achieve a target location at the same time as a target velocity (this is meant to be representative, e.g., of the ant attempting to make a jump). The latter task is particularly interesting, since the ant cannot maintain both the position and velocity goal at the same time, and must therefore achieve it at a single time step. TDMs significantly outperform the model-based baseline on these tasks as well. This supports the central claim of the paper, in Section 1, paragraph 5, which is that TDMs achieve learning times comparable to model-based methods, but asymptotic performance that is comparable to model-free algorithms. We believe that this kind of improvement in learning performance is of substantial interest to the reinforcement learning community.\n\nWe were able to obtain the code for hindsight experience replay (HER) from the original authors. Using this code as reference, we improved our implementation by incorporating their hyperparameter settings and implementation details, including ones that we had difficulty deducing from the original HER paper. The performance of HER on our tasks did improve, and we have updated Figure 2 with the new results. At this point, with the help of the original authors, we are confident that our implementation of HER is accurate. 
An observation we would like to make is that the purpose of HER is not necessarily to improve the sample efficiency of tasks where dense rewards are sufficient for DDPG to learn. Rather, a big selling point of HER is that it can improve the asymptotic performance of DDPG in tasks with sparse rewards. To test this hypothesis, we ran an additional baseline of DDPG with sparse rewards (-1 if the goal state is not reach, 0 if it is). HER definitively outperforms this baseline, so our results confirm that HER helps with sparse-reward tasks. We wanted to improve the sample efficiency of DDPG, and not necessarily DDPG’s feasibility for sparse-reward tasks, and so we focused on tasks that DDPG could already solve, albeit after many samples. It may be that the benefits of HER shine through on tasks where dense rewards will not lead to good policies.\n\nWe appreciate your comments regarding the connection between model-based and model-free RL. In this paper, we presented two main contributions: one is a connection between model-free and model-based reinforcement learning, and another is an algorithm derived from this connection. We have edited the paper (throughout the paper, and notably in Section 3.2, Paragraph 2) to balance the presentation better between these two components, and to avoid overstating the connection. We would be happy to incorporate any other concrete suggestions you might have.\n\nThank you for the references to the earlier work connecting TD-methods and model-based methods. We have added a discussion to this work in the related works (Section 5, paragraph 2). While these papers also show a connection between TD-methods and model-based methods, their objective is rather different from ours. Boyan shows an exact equivalence between a learned model and learned value function, but this requires a tabular value function which effective keeps track of every state-action-next-state transition. Parr shows that for linear function approximators, a value function extracted using a learned model is the same as a value function learned with TD-learning. Rather than analyzing equivalence at convergence, our primary contribution is how we can achieve sample complexity comparable to model-based RL while retaining the favorable asymptotic performance of model-free RL in complex tasks with function approximation.\n\nWe believe that we have addresses all the issues raised by the reviewer. We would be happy to discuss and address any additional concerns.", "Thank you for your feedback!\n\nOne concern brought up is the computation and memory cost of using TDMs. To address this concern, we have added discussion of this point in Section 4.3, paragraph 2, as well as Figure 4 in the appendix. In short, the learning for TDM and DDPG both have the number of updates per environment step as a hyperparameter, which largely determines the computation cost. The empirical result we got is that as we increased the number of updates, the performance of TDM increased while the performance of DDPG stayed the same or degraded. TDMs can benefit from more computation (number of updates per environment step) than DDPG since they can learn a lot more by relabeling goal states and horizon tau; we see this more as a benefit as it means TDMs can extract more information from the same amount of data. 
We also would like to point out that one advantage of doing more computation at training time is that test time is relatively fast: to do multi-step planning, we simply set tau=5 in our TDM, whereas a typical multi-step model-based planning approach would need to unroll a model over five time steps and optimize over all intermediate actions. Furthermore, we hope that this, in addition to the ablative studies in section 6.2, addresses the concern that, “… it also makes me wonder a bit about where the benefit is coming from.”\n\nFor stochastic environments, the TDM would learn the expected distance to the goal, rather than the exact distance. We have added this discussion to the second paragraph of the conclusion.\n\nWe agree that a more in-depth discussion of the connection to multi-step models would be appropriate. We’ve added discussion of two related works in Section 5, paragraph 4. One critical distinction between these methods and TDMs is that TDMs can be viewed as goal conditioned models: the prediction is made T steps into the future, conditioned on a policy that is trying to reach a particular state. Most model learning methods do not condition on a policy, requiring them to take in an entire sequence of future actions, which greatly increases the input space.", "Thank you for your feedback! We have edited the paper to address all of the issues that you’ve raised (see below). We would appreciate any further feedback that you might have to provide.\n\nWe have added a discussion of the two papers suggested (Jaderberg et al ICML 2017, Silver et al, ICML 2017) to the paper in the related work section (Section 5, paragraph 6). Our method shares the same motivation as those papers: to increase the amount of supervision in model-free RL to achieve sample-efficient learning. We also include recent work on distributional RL (Bellemare et. al., 2017) as another example of this general idea.\n\nWe were able to obtain the code for hindsight experience replay (HER) from the original authors. Using this code as reference, we improved our implementation by incorporating their hyperparameter settings and implementation details, including ones that we had difficulty deducing from the original HER paper. The performance of HER on our tasks did improve, and we have updated Figure 2 with the new results. At this point, with the help of the original authors, we are confident that our implementation of HER is accurate. An observation we would like to make is that the purpose of HER is not necessarily to improve the sample efficiency of tasks where dense rewards are sufficient for DDPG to learn. Rather, a big selling point of HER is that it can improve the asymptotic performance of DDPG in tasks with sparse rewards. To test this hypothesis, we ran an additional baseline of DDPG with sparse rewards (-1 if the goal state is not reach, 0 if it is). HER definitively outperforms this baseline, so our results confirm that HER helps with sparse-reward tasks. We wanted to improve the sample efficiency of DDPG, and not necessarily DDPG’s feasibility for sparse-reward tasks, and so we focused on tasks that DDPG could already solve, albeit after many samples. It may be that the benefits of HER shine through on tasks where dense rewards will not lead to good policies." ]
[ 7, 4, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_Skw0n-W0Z", "iclr_2018_Skw0n-W0Z", "iclr_2018_Skw0n-W0Z", "rJyvKScxz", "rkvII7pef", "BJ25DYuxG" ]
iclr_2018_H1dh6Ax0Z
TreeQN and ATreeC: Differentiable Tree-Structured Models for Deep Reinforcement Learning
Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games. Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.
accepted-poster-papers
This is a nicely written paper proposing a reasonably interesting extension to existing work (e.g. VPN). While the Atari results are not particularly convincing, they do show promise. I encourage the authors to carefully take the reviewers' comments into consideration and incorporate them into the final version.
test
[ "ry1vffqeM", "r1cczyqef", "Hkg9o32gG", "B1t83k27M", "B1_hA0W7f", "rkKSHOTZG", "H1fRVuTWM", "Hya_Qdabf", "r1SimQEgf", "rJVUBbXxG", "HJAkFD-xM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "public", "author", "public" ]
[ "# Update after the rebuttal\nThank you for the rebuttal.\nThe authors claim that the source of objective mismatch comes from n-step Q-learning, and their method is well-justified in 1-step Q-learning. However, there is still a mismatch even with 1-step Q-learning because the bootstrapped target is also computed from the TreeQN. More specifically, there can be a mismatch between the optimal action sequences computed from TreeQN at time t and t+1 if the depth of TreeQN is equal or greater than 2. Thus, the author's response is still not convincing to me.\nI like the overall idea of using a tree-structured neural network which internally performs planning as an abstraction of Q-function, which makes implementation simpler compared to VPN. However, the particular method (TreeQN) proposed in this paper introduces a mismatch in the model learning as mentioned above. One could argue that TreeQN is learning an \"abstract\" planning rather than \"grounded\" planning. However, the fact that reward prediction loss is used to train TreeQN significantly weakens this claim, and there is no such an evidence in the paper. \nIn conclusion, I think the research direction is worth pursuing, but the proposed modification from VPN is not well-justified.\n\n# Summary\nThis paper proposes TreeQN and ATreeC which perform look-ahead planning using neural networks. TreeQN simulates the future by predicting rewards/values of the future states and performs tree backup to construct Q-values. ATreeC is an actor-critic architecture that uses a softmax over TreeQN. The architecture is trained through n-step Q-learning with reward prediction loss. The proposed methods outperform DQN baseline on 2D Box Pushing domain and outperforms VPN on Atari games.\n\n[Pros]\n- The paper is easy to follow.\n- The application to actor-critic setting (ATreeC) is novel, though the underlying idea was proposed by [O'Donoghue et al., Schulman et al.].\n\n[Cons]\n- The proposed method has a technical issue.\n- The proposed idea (TreeQN) and underlying motivation are almost same as those of VPN [Oh et al.], but there is no in-depth discussion that shows why TreeQN is potentially better than VPN. \n- Comparison to VPN on Atari is not much convincing. \n\n# Novelty and Significance\n- The underlying motivation (planning without predicting observations), the architecture (transition/reward/value functions applied to the latent state space), and the algorithm (n-step Q-learning with reward prediction loss) are same as those of VPN. But, the paper does not provide in-depth discussion on this. The following is the differences that I found from this paper, so it would be important to discuss why such differences are important.\n\n1) The paper emphasizes the \"fully-differentiable tree planning\" aspect in contrast to VPN that back-propagates only through \"non-branching\" trajectories during training. However, differentiating TreeQN also amounts to back-propagating through a \"single\" trajectory in the tree that gives the maximum Q-value. Thus, the only difference between TreeQN and VPN is that TreeQN follows the best (estimated) action sequence, while VPN follows the chosen action sequence in retrospect during back-propagation. 
Can you justify why following the best estimated action sequence is better than following the chosen action sequence during back-propagation (see Technical Soundness section for discussion)?\n\n2) TreeQN only sets targets for the final Q-value after tree backup, whereas VPN sets targets for all intermediate value predictions in the tree. Why is TreeQN's approach better than VPN's approach? \n\n- The application to actor-critic setting (ATreeC) is novel, though the underlying idea of combining Q-learning with policy gradient was proposed by [O'Donoghue et al.] and [Schulman et al.].\n\n# Technical Soundness\n- The proposed idea of setting targets for the final Q-value after tree backup can potentially make the temporal credit assignment difficult, because the best estimated actions during tree planning does not necessarily match with the chosen actions. Suppose that TreeQN estimated \"up-right-right\" as the best future action sequence the during 3-step tree planning, while the agent actually ended up with choosing \"up-left-left\" (this is possible because the agent re-plans at every step and follows epsilon-greedy policy). Following n-step Q-learning procedure, we end up with setting target Q-value based on on-policy action sequence \"up-left-left\", while back-propagating through \"up-right-right\" action sequence in the TreeQN's plan. This causes a wrong temporal credit assignment, because TreeQN can potentially increase/decrease value estimates in the wrong direction due to the mismatch between the planned actions and chosen actions. So, it is unclear why the proposed algorithm is technically correct or better than VPN's approach (i.e., back-propagating through the chosen actions in the search tree).\n \n# Quality\n- Comparison to VPN on Atari is not convincing because TreeQN-1 is actually (almost) equivalent to VPN-1, but the results show that TreeQN-1 performs much better than VPN on many games. Since the authors took the numbers from [Oh et al.] rather than replicating VPN, it is possible that the gap comes from implementation details (e.g., hyperparameter). \n\n# Clarity\n- The paper is overall easy to follow and the description of the proposed method is clear.", "The authors propose a new network architecture for RL that contains some relevant inductive biases about planning. This fits into the recent line of work on implicit planning where forms of models are learned to be useful for a prediction/planning task. The proposed architecture performs something analogous to a full-width tree search using an abstract model (learned end-to-end). This is done by expanding all possible transitions to a fixed depth before performing a max backup on all expanded nodes. The final backup value is the Q-value prediction for a given state, or can represent a policy through a softmax.\n\nI thought the paper was clear and well-motivated. The architecture (and various associated tricks like state vector normalization) are well-described for reproducibility. \n\nExperimental results seem promising but I wasn’t fully convinced of its conclusions. In both domains, TreeQN and AtreeC are compared to a DQN architecture, but it wasn’t clear to me that this is the right baseline. Indeed TreeQN and AtreeC share the same conv stack in the encoder (I think?), but also have the extra capacity of the tree on top. Can the performance gain we see in the Push task as a function of tree depth be explained by the added network capacity? 
Same comment in Atari, but there it’s not really obvious that the proposed architecture is helping. Baselines could include unsharing the weights in the tree, removing the max backup, having a regular MLP with similar capacity, etc.\n\nPage 5, the auxiliary loss on reward prediction seems appropriate, but it’s not clear from the text and experiments whether it actually was necessary. Is it that it makes interpretability of the model easier (like we see in Fig 5c)? Or does it actually lead to better performance? \n\nDespite some shortcomings in the results section, I believe this is good work and worth communicating as is.", "\n This was an interesting read. \n\nI feel that there is a mismatch between intuition of what a model could do (based on the structure of the architecture) versus what a model does. Just because the transition function is shared and the model could learn to construct a tree, when trained end-to-end the system is not sufficiently constrained to learn this specific behaviour. More to the point, I think the search tree perspective is interesting, but isn’t this just a deeper model with shared weights? And a max operation? It seems no loss is used to force the embeddings produced by the transition model to match the embeddings that you would get if you take a particular action in a particular state, right? Is there any specific attempt to visualize or understand the embeddings inside the tree? The same regarding the rewards. If there is no auxiliary loss attempting to force the intermediary predictions to be valid rewards, why would the model use those free latent variables to encode rewards? I think this is a pitfall that many deep network papers fall into, where by laying out a particular structure it is directly inferred that the model discovers or follows a particular solution (where the latents have prescribed semantics). I would argue that is rarely the case. When the system is learned end-to-end, the structure does not impose the behaviour of the model, and it is up to the authors of the paper to prove that the trained model does anything similar to expanding a tree. And this is not by showing final performance on a game. If indeed the model does anything similar to search, then all intermediary representations should correspond to what they semantically should. \nIgnoring my verbose comment, another view is that the baselines are disadvantaged relative to TreeQN, because they have fewer parameters (and are less deep, which has a huge impact on the learnability and expressivity of the deep network). \n", "Thank you very much. We really appreciate your time and effort in trying to reproduce our results; this is a great initiative. We read through your report and online code, and identified several issues and important bugs that we list and discuss below. We intend to open source our code in time (after the anonymous review period), but are happy to correspond further in advance of that, if you’re interested in attempting further to reproduce our results.\n\nIssues in the report:\n - All results on the Box pushing environment seem to be off. The average reward is negative, even for the DQN baseline. This suggests there is something fundamentally different in your implementation.\n - Cartpole is not a suitable test environment. It is a very simple environment and there is no reason to believe that ad-hoc planning is required or helps. Indeed, any reasonably tuned algorithm should reach a performance ceiling on this toy task, so there is no reason TreeQN should outperform DQN. 
\n - Appendix: “unsure which part of the network is minimized” -- we aren’t sure where the misunderstanding lies (whole network is trained end-to-end and overall loss is minimized)\n - In our paper “DQN” refers to the architecture rather than the training algorithm, which is n-step Q-learning as described in the background section -- we do not use a replay buffer. However, we hope and expect that a proper TreeQN implementation would help with off-policy training data as well, and it is worth investigation.\n\nErrors in code:\n TreeQN (Tensorflow implementation)\n - The Q-learning algorithm implementation for TreeQN looks wrong. You need to compute targets using the target network evaluation of next_states. In line 74 of TreeQNModel.py these targets are computed using the current state, and do not include the observed reward.\n - Action selection should use the online network rather than the target network.\n ATreeC (Pytorch implementation)\n - For the CNN encoder (our encoder) there doesn’t appear to be any residual connection or nonlinearity (nonlinearities between layers are crucial!) in the transition function.\n - The value module should be a different set of parameters from the critic’s fully-connected layer, as shown in fig. 3 of our paper. We can make this more clear in the paper.\n - There should be an auxiliary loss on the reward prediction; we use this for both TreeQN and ATreeC.\n - Don’t subtract the critic value from Q estimates; critic heads are not used in policy at all. The “Q-estimates” from the tree should go straight into a softmax to produce the policy.\n - For your fully-connected variant of ATreeC’s encoder, try using the residual connection just for transition function; don’t add again the original inputs from before the encoder layers.\n - NB: there may be a more fundamental issue, as the failure of any algorithm to learn in fig. 3 of the report is surprising.\n\nHyperparameters:\n - We will update the paper to include: discount factor gamma=0.99, and the target networks are updated every 40000 environment transitions; for both box-pushing and Atari.", "Our group reproduced this paper and the detailed result is in the link.\n\nhttps://gitlab.eecs.umich.edu/kongfz/TreeQN_ATreeC_repro/blob/master/report.pdf\n\nHope our feedback can improve your paper!\n", "Thank you for your positive comments and useful feedback.\n\nConcerning baselines, our preliminary experiments showed that simply adding more parameters via width or depth to a DQN architecture did not result in significant performance gains, which we will make clear in the paper. A large-scale investigation of such architectures and their combination with auxiliary losses on many Atari games may be infeasible for us, but we have added to the appendix a figure demonstrating the limitations of naively adding parameters to DQN on the box-pushing domain.\n\nWe didn’t do a systematic investigation of the reward prediction loss across all environments, but in preliminary experiments on Seaquest and the box-pushing environment it helped performance. Interpretable sequences for box-pushing tended to appear when rewards were immediately available, which leads us to believe the grounding from this loss played a part.", "Thank you for your feedback.\n\nRegarding the soundness of n-step Q-learning targets:\nAs you point out, there is a mismatch between the n-step targets, which include an on-policy component, and our model’s estimates of the optimal Q-function. 
However, this mismatch appears for *any* model estimating the optimal Q* with partially on-policy bootstraps. The weakly grounded internal temporal semantics of our architecture do not exacerbate this problem, but simply render more explicit the mechanism for estimating Q*.\nIn 1-step Q-learning, or when using policy gradients, this model-objective mismatch does not appear. In practice, n-step Q-learning targets help to stabilise and speed learning despite the unusual combination of on- and off- policy components in the targets. However, it is true that 1-step Q-learning, or policy gradients, provides objectives more consistent with our overall approach.\n\nRegarding the comparison to VPN algorithmically:\nFollowing the best estimated action sequence removes a mismatch between the use of the model components at training and test-time: in our approach, the components are freely optimised for their use in estimating the optimal action-value or action-probability, rather than trained to match a greedy bootstrap with an on-policy internal path. We believe it is crucial to maintain an equivalent context at training and evaluation time.\nThis is also the motivation for optimising Q only after the full tree backup, rather than also after partial backups. We want to learn a model that is as good as possible within the specific architecture we use, rather than across a class of related architectures with varying tree-depths. It is possible that such transfer learning could help in some problems. However, it is important to note that intermediate value estimates are still used, as they are mixed into the final prediction during the backup.\nConstructing the full tree at each time-step frees us to make value estimates for all (root-node) actions, which enables the extension to ATreeC -- something that can’t easily be done with VPN. This extension is more about using an *architecture* designed for value-prediction in the context of policy gradients, rather than using *algorithmic* components of Q-learning with policy gradients as in the work of [O'Donoghue et al., Schulman et al.].\nOur overall strategy also simplifies training, as the whole model can be used as a drop-in replacement for an action-value or policy network, without recombining the components in a different manner for on-policy training segments, target evaluation, and testing. In our view this is a valuable contribution over VPN.\n\nRegarding the experimental comparison to VPN, it is clear that some details of hyperparameters or implementation constitute a large part of the difference (as most clearly seen in the different baseline DQN results). This is precisely why we focus on comparing to our own, much stronger, DQN and A2C baselines. We included these data to facilitate other work using frameskip-10 Atari as a domain for planning-inspired deep RL. We feel it is unreasonable to expect a reimplementation of VPN with tuning to approach the level of our baselines, and assume the authors of that work put a reasonable effort into optimising their algorithm.", "It’s great to hear that you found this work interesting - and thank you for your feedback.\n\nThe goal of our research is to design methods that yield good performance in the considered tasks, and we hope the reviewers will evaluate our paper accordingly. Explicit grounding, or lack thereof, is merely a means to the end of maximising task performance. 
Therefore, while the empirical question of how explicitly to ground the model is fascinating, we believe the quality of the paper should not be measured in terms of how explicitly the model is grounded. See also the answer to the anonymous comment from 21st of November below.\n\nWe found that grounding the reward function (which takes intermediate embeddings as input) with an explicit loss did help performance, so we included that in the objective (see Sec. 3.2). However, we found that explicitly grounding the latent representations based on a reconstruction loss did not help. In the paper, we also discuss reasons why one should avoid such objectives when constructed in the observation space. Also note that intermediate value predictions are mixed into the final prediction. While this doesn’t force a grounding, it encourages each of the intermediate embeddings to correspond to a valid state embedding.\n\nThe key idea behind our paper is that the architecture that makes sense for a grounded model (e.g., tree-planning) should still provide a useful inductive bias for a learned model that is only weakly grounded or not grounded at all.\n\nConcerning baselines, our preliminary experiments showed that simply adding more parameters via width or depth to a DQN architecture did not result in significant performance gains, which we will make clear in the paper. A large-scale investigation of such architectures and their combination with auxiliary losses on many Atari games may be infeasible for us, but we have added to the appendix a figure demonstrating the limitations of naively adding parameters to DQN on the box-pushing domain.", "I understand this method now. However, if you add the comments below to the original paper, readers will be able to understand it more easily. Overall, this method is very interesting and insightful; I appreciate it very much.\n\n ''There is no guarantee that the trained sub-networks learn a faithful model of the environment''\n''However, this flexibility is intentional because, at the end of the day, we only care about predicting accurate state-action values. Consequently, we want to make our architecture flexible enough to learn an abstract representation of the environment and a transition model that, when used together inside TreeQN, are effective at predicting those state-action values even if the resulting architecture does not correspond to rigid definitions of model, state, or plan.''\n\n\nIndeed, the sub-networks are not CNNs. I meant that the sub-networks can mimic planning, while a CNN can extract local features; both are universal modules, although they are designed for special purposes.", "Thank you for your comments! It is true that the whole network is trained end-to-end based on the TD-error of n-step Q-learning and so there is no guarantee that the trained sub-networks learn a faithful model of the environment (i.e., the internal state representations are not guaranteed or necessarily expected to contain the information needed to accurately predict observations in future time-steps).\n\nHowever, this flexibility is intentional because, at the end of the day, we only care about predicting accurate state-action values. Consequently, we want to make our architecture flexible enough to learn an abstract representation of the environment and a transition model that, when used together inside TreeQN, are effective at predicting those state-action values even if the resulting architecture does not correspond to rigid definitions of model, state, or plan. 
Instead, we incorporate an inductive bias through the recursive application of a learned transition function that is shared across the tree. This inductive bias is introduced through the network’s structure and does indeed encourage planning.\n\nNote that there is a weak grounding of predicted states via our reward prediction auxiliary loss. Furthermore, our results show that the model does produce interpretable trees in some situations, which demonstrates that grounded planning is performed when useful, even though the model is not limited to it. We experimented with ways of further grounding the transition function (and thus the states) but found that it only hurt performance. Finding ways to encourage stronger grounding without hurting performance is an interesting research direction, but not within the scope of this paper.\n\nThe sub-networks (transition, reward, and value functions) are not CNNs (see Equations 6 to 8).\n", "The idea of integrating model planning into the Q-function or policy is interesting. \n\nHowever, I wonder how the model functions (transition function, reward function, and value function) are trained. From the description in the paper, they may be specially designed sub-networks, and the whole network is trained based on the TD-error of n-step Q-learning. If so, how can we know that the sub-networks are indeed planning?\n\nIn my opinion, TreeQN is a complex network with a fixed number of planning steps (i.e., the tree depth), and each planning step is a specially designed sub-network. The sub-network is similar to a CNN in that it can extract useful features from the state representation z.\n\nDo I understand this correctly?" ]
[ 4, 8, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1dh6Ax0Z", "iclr_2018_H1dh6Ax0Z", "iclr_2018_H1dh6Ax0Z", "B1_hA0W7f", "iclr_2018_H1dh6Ax0Z", "r1cczyqef", "ry1vffqeM", "Hkg9o32gG", "rJVUBbXxG", "HJAkFD-xM", "iclr_2018_H1dh6Ax0Z" ]