paper_id (stringlengths 19-21) | paper_title (stringlengths 8-170) | paper_abstract (stringlengths 8-5.01k) | paper_acceptance (stringclasses, 18 values) | meta_review (stringlengths 29-10k) | label (stringclasses, 3 values) | review_ids (sequence) | review_writers (sequence) | review_contents (sequence) | review_ratings (sequence) | review_confidences (sequence) | review_reply_tos (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2018_H1DkN7ZCZ | Deep learning mutation prediction enables early stage lung cancer detection in liquid biopsy | Somatic cancer mutation detection at ultra-low variant allele frequencies (VAFs) is an unmet challenge that is intractable with current state-of-the-art mutation calling methods. Specifically, the limit of VAF detection is closely related to the depth of coverage, due to the requirement of multiple supporting reads in extant methods, precluding the detection of mutations at VAFs that are orders of magnitude lower than the depth of coverage. Nevertheless, the ability to detect cancer-associated mutations in ultra low VAFs is a fundamental requirement for low-tumor burden cancer diagnostics applications such as early detection, monitoring, and therapy nomination using liquid biopsy methods (cell-free DNA). Here we defined a spatial representation of sequencing information adapted for convolutional architecture that enables variant detection at VAFs, in a manner independent of the depth of sequencing. This method enables the detection of cancer mutations even in VAFs as low as 10^-4, >2 orders of magnitude below the current state-of-the-art. We validated our method on both simulated plasma and on clinical cfDNA plasma samples from cancer patients and non-cancer controls. This method introduces a new domain within bioinformatics and personalized medicine – somatic whole genome mutation calling for liquid biopsy. | workshop-papers | Authors present a method for representing DNA sequence reads as one-hot encoded vectors, with genomic context (expected original human sequence), read sequence, and CIGAR string (match operation encoding) concatenated as a single input into the framework. Method is developed on 5 lung cancer patients and 4 melanoma patients.
Pros:
- The approach to feature encoding and network construction for the task seems new.
- The target task is important and may carry significant benefit for healthcare and disease screening.
Cons:
- The number of patients involved in the study is exceedingly small. Though many samples were drawn from these patients, pattern discovery may not be generalizable across larger populations. Though the difficulty in acquiring this type of data is noted.
- (Significant) A reviewer asked for the use of a public benchmark dataset, which the authors declined since the benchmark was not targeted toward the task of ultra-low VAFs. However, the authors could perhaps have sourced genetic data from these recommended public repositories to create synthetic scenarios, which would enable the broader research community to directly compare against the methods presented here. The use of only private datasets is concerning with regard to the future impact of this work.
- (Significant) The concatenation of the rows is slightly confusing. It is unclear why these were concatenated along the column dimension, rather than being input as multiple channels. This question doesn't seem to be addressed in the paper.
Given the pros and cons, the committee recommends this interesting paper for workshop. | train | [
"rk6_mqulz",
"H18G3z5gM",
"rJKKR8qxM",
"B11xHxxVf",
"SJFOVOpXf",
"ryLJedamM",
"B1GXE_aQG",
"Hk7yNdamz",
"Byf2CVy-f",
"B1avSDnlM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"In this paper the author propose a CNN based solution for somatic mutation calling at ultra low allele frequencies.\nThe tackled problem is a hard task in computational biology, and the proposed solution Kittyhawk, although designed with very standard ingredients (several layers of CNN inspired to the VGG structure), seems to be very effective on both the shown datasets.\nThe paper is well written (up to a few misprints), the introduction and the biological background very accurate (although a bit technical for the broader audience) and the bibliography reasonably complete. Maybe the manuscript part with the definition of the accuracy measures may be skipped. Moreover, the authors themselves suggest how to proceed along this line of research with further improvements.\nI would only suggest to expand the experimental section with further (real) examples to strengthen the claim.\nOverall, I rate this manuscript in the top 50% of the accepted papers.",
"Summary:\n\nIn this paper the authors offer a new algorithm to detect cancer mutations from sequencing cell free DNA (cfDNA). The idea is that in the sample being sequenced there would also be circulating tumor DNA (ctDNA) so such mutations could be captured in the sequencing reads. The issue is that the ctDNA are expected to be found with low abundance in such samples, and therefore are likely to be hit by few or even single reads. This makes the task of differentiating between sequencing errors and true variants due to ctDNA hard. The authors suggest to overcome this problem by training an algorithm that will identify the sequence context that characterize sequencing errors from true mutations. To this, they add channels based on low base quality, low mapping quality. The algorithm for learning the context of sequencing reads compared to true mutations is based on a multi layered CNN, with 2/3bp long filters to capture di and trinucleotide frequencies, and a fully connected layer to a softmax function at the top. The data is based on mutations in 4 patients with lung cancer for which they have a sample both directly from the tumor and from a healthy region. One more sample is used for testing and an additional cancer control which is not lung cancer is also used to evaluate performance.\n\nPros:\n\nThe paper tackles what seems to be both an important and challenging problem. We also liked the thoughtful construction of the network and way the reference, the read, the CIGAR and the base quality were all combined as multi channels to make the network learn the discriminative features of from the context. Using matched samples of tumor and normal from the patients is also a nice idea to mimic cfDNA data.\n\nCons:\n\nWhile we liked both the challenge posed and the idea to solve it we found several major issues with the work. \n\nFirst, the writing is far from clear. There are typos and errors all over at an unacceptable level. Many terms are not defined or defined after being introduced (e.g. CIGAR, MF, BQMQ). A more reasonable CS style of organization is to first introduce the methods/model and then the results, but somehow the authors flipped it and started with results first, lacking many definitions and experimental setup to make sense of those. Yet Sec. 2 “Results” p. 3 is not really results but part of the methods. The “pipeline” is never well defined, only implicitly in p.7 top, and then it is hard to relate the various figures/tables to bottom line results (having the labels wrong does not help that).\n\nThe filters by themselves seem trivial and as such do not offer much novelty. Moreover, the authors filter the “normal” samples using those (p.7 top), which makes the entire exercise a possible circular argument. \n\nIf the entire point is to classify mutations versus errors it would make sense to combine their read based calls from multiple reads per mutations (if more than a single read for that mutation is available) - but the authors do not discuss/try that. \n\nThe entire dataset is based on 4 patients. It is not clear what is the source of the other cancer control case. The authors claim the reduced performance show they are learning lung cancer-specific context. What evidence do they have for that? Can they show a context they learned and make sense of it? How does this relate to the original papers they cite to motivate this direction (Alexandrov 2013)? 
Since we know nothing about all these samples it may very well be that that are learning technical artifacts related to their specific batch of 4 patients. As such, this may have very little relevance for the actual problem of cfDNA. \n\nFinally, performance itself did not seem to improve significantly compared to previous methods/simple filters, and the novelty in terms of ML and insights about learning representations seemed limited.\n\nAlbeit the above caveats, we iterate the paper offers a nice construction for an important problem. We believe the method and paper could potentially be improved and make a good fit for a future bioinformatics focused meeting such as ISMB/RECOMB.\n",
"his paper proposes a deep learning framework to predict somatic mutations at extremely low frequencies which occurs in detecting tumor from cell-free DNA. They key innovation is a convolutional architecture that represents the invariance around the target base. The method is validated on simulations as well as in cfDNA and is s\nhown to provide increased precision over competing methods.\n\nWhile the method is of interest, there are more recent mutation callers that should be compared. For example, Snooper which uses a RandomForest (https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-016-3281-2) and hence would be of interest as another machine learning framework. They also should compare to Strelka whic\nh interestingly they included only to make final calls of mutations but not in the comparison.\n\nFurther, I would also have liked to see the use of standard benchmark datasets for mutation calling ( https://www.nature.com/articles/ncomms10001).\n\nIt appears that the proposed method (Kittyhawk) has a steep decrease in PPV and enrichment for low tumor fraction which are presumably the parameter of greatest interest. The authors should explore this behavior in greater detail.",
"Thank you for your interest in our project. Unfortunately, we used clinical data for this manuscript and will be unable to share due to privacy concerns. The data is currently being deposited to a public repository with appropriate data access procedures, and we will share the access information once available. In the meanwhile, we would recommend accessing TCGA and using LUAD whole genome data. In our revision we have included detailed information about the entire process including pre-processing of WGS, read selection for training, and CNN parameter/architecture. We hope that this is sufficient to reproduce our results and we are happy to further facilitate in any way. ",
"We thank the reviewer for appreciation of the challenges of this task as well as its importance. We have revised the manuscript to increase its clarity for a broader audience and performed further proofing. As suggested, we have further expanded the dataset. \n\n",
"We thank the reviewer for finding our submission of significance and acknowledging the importance of this problem. \n\nIn our revision we have included the tools suggested in benchmarking (Snooper, Strelka), and have updated the results and figures. However, it is important to note, that our tool does not fulfill the same function as these tools and therefore direct benchmarking is not always informative. All current tools are designed to assess the information from multiple supporting reads to designate a genomic locus as mutated or unmutated. In fact, we believe that in the settings for which these tools were designed, they likely outperform Kittyhawk, as the use of information from >1 reads to make a mutation call is expected to provide a more powerful classifier. Kittyhawk was designed to address a different context in which the variant allele frequency is far lower than the depth of sequencing (such as low tumor burden cfDNA), such that only one read is expected to support a mutation call at best. This setting required a conceptual rethinking of the problem where mutation calling is driven by classifying the individual read rather than a locus. While Snooper formally supports a single supporting read, in practice, the median variant allele frequency the authors have reported was 0.38, and given the average depth of sequencing used, most loci had >1 supporting reads. This same issue limits also the use of benchmarking datasets suggested by the reviewer, where the variant allele fraction is greater than the context of low tumor burden cfDNA. For this reason we generated simulated plasma to reflect these conditions and optimize our method for this setting. \n\nWe also would like to thank you for emphasizing the importance of the performance in the lower tumor-fraction settings. The decrease in PPV with decreasing tumor fractions is expected even with stable sensitivity and specificity. This is due to fact that PPV is strongly affected by the prevalence of truly mutated reads in data. Thank you for the observation about enrichment, we believe this was due to an error in the initial submission. Our revision now includes the corrected enrichment plots, showing stable enrichment across the tumor fraction range. \n\n",
"8. \"The entire dataset is based on 4 patients. It is not clear what is the source of the other cancer control case.\" \n\nWe regret the lack of clarity on our part in the original manuscript. We have used data from 5 patients with lung cancer, 4 patients with melanoma as well as 2 early lung cancer cfDNA and a cfDNA control from a patient with a benign lung nodule. These are now detailed in Table 1. We are currently expanding this dataset using a WGS dataset of more than a 100 NSCLC patients, which we anticipate to complete prior to the presentation of the work. The nature of controls used in all of the analyses have been clarified. \n\n9. \"The authors claim the reduced performance show they are learning lung cancer-specific context. What evidence do they have for that? Can they show a context they learned and make sense of it? How does this relate to the original papers they cite to motivate this direction (Alexandrov 2013)? \"\n\nWe thank the reviewer for this important comment. We have added an additional figure addressing this issue (Figure 3). Namely, to assess the ability of the model to detect specific mutational signatures, we have measured the difference in the tri-nucleotide distributions between true cancer variants and sequencing artifact variants, and correlated the models score with these differences. We found a strong correlation that is specific to lung cancer samples, and less so to melanoma. Furthermore, we show that a model developed for lung cancer underperforms in melanoma and vice versa. \n\n10. \"Since we know nothing about all these samples it may very well be that that are learning technical artifacts related to their specific batch of 4 patients. As such, this may have very little relevance for the actual problem of cfDNA.\"\n\nAs shown in Figure 3 and as described above, the model is learning sequence contexts that are generally observed in lung adenocarcinoma rather than specific to this batch of patients. Furthermore, to demonstrate the applicability of this method to cfDNA, we tested it on patient-derived cfDNA. The robust performance in both of these settings suggests that the Kittyhawk model is indeed learning more general features that define true tumor variants vs. sequencing artifacts, rather than specific batch characteristics. \n\n11. \"Finally, performance itself did not seem to improve significantly compared to previous methods/simple filters, and the novelty in terms of ML and insights about learning representations seemed limited\"\n\nWe respectfully disagree. To our knowledge, Kittyhawk is the first tool to directly address the challenge of mutation calling in the setting of variant allele frequency lower than the depth of coverage, a major emerging challenge as noted by all reviewers. For comparison, Mutect, a state-of-the-art caller delivers no mutation calls at a tumor fraction of 1:1000. The ability to tackle ultra-low frequency mutations is done through a reframing of the mutation-calling problem from a locus-centric approach to a read-centric approach. This reframing is empowered by the embodiment of the read as features amenable to CNN learning originally designed for image learning. As such, it serves to extend the application of CNN to an important clinical area of development. While we anticipate that our ongoing efforts, that include larger datasets in training, will result in further performance improvement, even at this proof-of-principle stage the algorithm is providing a 30-fold enrichment. 
Notably, this is done in a manner that is completely independent from variant allele fraction, a unique performance feature that addresses a major emerging unmet need. ",
"We thank the reviewer for the careful examination and critique. We have tried to address all concerns as detailed below and hope that the manuscript is now significantly improved. \nPoint-by-point response:\n1. \"First, the writing is far from clear. There are typos and errors all over at an unacceptable level.\" \nWe regret that these errors have been included in the submission. In our revised manuscript we have performed a more rigorous proofing and hope to have resolved this issue. \n2.\" Many terms are not defined or defined after being introduced (e.g. CIGAR, MF, BQMQ).\" \nWe thank the reviewer for this important comment and have included clear definitions as well as a glossary of terms used to enhance the readability of the manuscript to diverse audiences. \n3. \"A more reasonable CS style of organization is to first introduce the methods/model and then the results, but somehow the authors flipped it and started with results first, lacking many definitions and experimental setup to make sense of those. Yet Sec. 2 “Results” p. 3 is not really results but part of the methods.\"\nWe have revised the manuscript as suggested to address this concern and follow a CS format. \n 4. \"The “pipeline” is never well defined, only implicitly in p.7 top, and then it is hard to relate the various figures/tables to bottom line results (having the labels wrong does not help that).\" \nWe have included an appendix detailing the mutation calling pipeline and a detailed figure (Figure 1) dedicated to provide an overview of the entire procedure. We have corrected the labeling issues. \n5. \"The filters by themselves seem trivial and as such do not offer much novelty.\"\nThe filters we have used first include rejecting reads with very low base quality and mapping quality. This filtering step allows the CNN to learn more complex features and interactions between the features. In the application of Kittyhawk to cfDNA, we apply an additional filter on variant allele frequency to exclude private germline single nucleotide polymorphisms, allowing the direct application of this algorithm even in the absence of matched normal germline DNA. \n6. \"Moreover, the authors filter the “normal” samples using those (p.7 top), which makes the entire exercise a possible circular argument. \"\nIndeed, low base and mapping quality germline DNA reads (“normal” samples) were filtered prior to the use of the reads in model training. As noted above, this was done in order to allow the CNN to learn more complex features that distinguish true mutated reads and artifactually altered reads. In the implementation of our strategy to either synthetic or real cfDNA data we also include this first filtering step to remove these reads, as we have shown them to be highly enriched in sequencing artifacts. We note no circularity in this approach, as the cfDNA data includes no labeling of normal DNA vs. tumor DNA reads.\n7. \"If the entire point is to classify mutations versus errors it would make sense to combine their read based calls from multiple reads per mutations (if more than a single read for that mutation is available) - but the authors do not discuss/try that. \"\nWe are grateful for this suggestion. Indeed future integration of our model with extant mutation caller can be considered, to improve mutation calling in the setting of multiple supporting reads per mutated locus. 
We have not developed this aspect as it is not directly related to the unique challenge we are tackling (variant allele fraction far lower than depth of sequencing). We have included a discussion of such a potential integration. \n\n",
"If you cannot share your data with me, can you at least publish the appendix mentioned in the paper? perhaps I can get a similar dataset and still use the paper for the challenge.",
"Hello, \n\nI would like to reproduce your results as part of the ICLR 2018 Reproducibility Challenge (http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html). \n\nYou mentioned in your paper an appendix which I don't seem to have access to; would you be able to make that available? I would also like to know where the raw data samples are available.\nWould you be able to share Code with me?\n\nThank you very much\n\n"
] | [
8,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1DkN7ZCZ",
"iclr_2018_H1DkN7ZCZ",
"iclr_2018_H1DkN7ZCZ",
"Byf2CVy-f",
"rk6_mqulz",
"rJKKR8qxM",
"Hk7yNdamz",
"H18G3z5gM",
"B1avSDnlM",
"iclr_2018_H1DkN7ZCZ"
] |
iclr_2018_ByaQIGg0- | AUTOMATED DESIGN USING NEURAL NETWORKS AND GRADIENT DESCENT | We propose a novel method that makes use of deep neural networks and gradient decent to perform automated design on complex real world engineering tasks. Our approach works by training a neural network to mimic the fitness function of a design optimization task and then, using the differential nature of the neural network, perform gradient decent to maximize the fitness. We demonstrate this methods effectiveness by designing an optimized heat sink and both 2D and 3D airfoils that maximize the lift drag ratio under steady state flow conditions. We highlight that our method has two distinct benefits over other automated design approaches. First, evaluating the neural networks prediction of fitness can be orders of magnitude faster then simulating the system of interest. Second, using gradient decent allows the design space to be searched much more efficiently then other gradient free methods. These two strengths work together to overcome some of the current shortcomings of automated design. | workshop-papers | Differentiable neural networks used as a measure of design optimality in order to improve efficiency of automated design.
Pros:
- Genetic algorithms, which are the dominant optimization routine for automated design systems, can be computationally expensive. This approach alleviates this bottleneck under certain circumstances and applications.
Cons:
- Primarily an application paper; the machine learning advancement is marginal.
- Multiple reviewers: Generalization capability not clear. For example, some systems of interest may be stochastic (e.g. turbulence) and require multiple trials to measure fitness, which this method would not be able to model.
Overall, the committee feels this paper is interesting enough to appear as a workshop paper. | val | [
"HJ_m58weG",
"HkWsjrOlz",
"rJ_jlsdlf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to use neural network and gradient descent to automatically design for engineering tasks. It uses two networks, parameterization network and prediction network to model the mapping from design parameters to fitness. It uses back propagation (gradient descent) to improve the design. The method is evaluated on heat sink design and airfoil design.\n\nThis paper targets at a potentially very useful application of neural networks that can have real world impacts. However, I have three main concerns:\n1) Presentation. The organization of the paper could be improved. It mixes the method, the heat sink example and the airfoil example throughout the entire paper. Sometimes I am very confused about what is being described. My suggestion would be to completely separate these three parts: present a general method first, then use heat sink as the first experiment and airfoil as the second experiment. This organization would make the writing much clearer.\n\n2) In the paragraph above Section 4.1, the paper made two arguments. I might be wrong, but I do not agree with either of them in general. First of all, \"neural networks are good at generalizing to examples outside their train set\". This depends entirely on whether the sample distribution of training and testing are similar and whether you have enough training examples that cover important sample space. This is especially critical if a deep neural network is used since overfitting is a real issue. Second, \"it is easy to imagine a hybrid system where a network is trained on a simulation and fine tuned ...\". Implementing such a hybrid system is nontrivial due to the reality gap. There is an entire research field about closing the reality gap and transfer learning. So I am not convinced by these two arguments made by this paper. They might be true for a narrow field of application. But in general, I think they are not quite correct.\n\n3) The key of this paper is to approximate the dynamics using neural network (which is a continuous mapping) and take advantage of its gradient computation. However, many of dynamic systems are inherently discontinuous (collision/contact dynamics) or chaotic (turbulent flow). In those scenarios, the proposed method might not work well and we may have to resort to the gradient free methods. It seems that the proposed method works well for heat sink problem and the steady flow around airfoil, both of which do not fall into the more complex physics regime. It would be great that the paper could be more explicit about its limitations.\n\nIn summary, I like the idea, the application and the result of this paper. The writing could be improved. But more importantly, I think that the proposed method has its limitation about what kind of physical systems it can model. These limitation should be discussed more explicitly and more thoroughly.\n\n",
"This paper introduces an appealing application of deep learning: use a deep network to approximate the behavior of a complex physical system, and then design optimal devices (eg airfoil shapes) by optimizing this network with respect to its inputs. Overall, this research direction seems fruitful, both in terms of different applications and in terms of extra machine learning that could be done to improve performance, such as ensuring that the optimization doesn't leave the manifold of reasonable designs. \n\n On one hand, I would suggest that this work would be better placed in an engineering venue focused on fluid dynamics. On the other hand, I think the ICLR community would benefit from about the opportunities to work on problems of this nature.\n\n =Quality=\nThe authors seem to be experts in their field. They could have done a better job explaining the quality of their final results, though. It is unclear if they are comparing to strong baselines.\n\n=Clarity=\nThe overall setup and motivation is clear.\n\n=Originality=\nThis is an interesting problem that will be novel to most member of the ICLR community. I think that this general approach deserves further attention from the community.\n\n\n=Major Comments=\n* It's hard for me to understand if the performance of your method is actually good. You show that it outperforms simulated annealing. Is this the state of the art? How would an experienced engineer perform if he or she just sat down and drew the shape of an airfoil, without relying on any computational simulation at all?\n\n* You can afford to spend lots of time interacting with the deep network in order to optimize it really well with respect to the inputs. Why not do lots of random initializations for the optimization? Isn't that a good way to help avoid local optima?\n\n* I'd like to see more analysis of the reliability of your deep-network-based approximation to the physics simulator. For example, you could evaluate the deep-net-predicted drag ratio vs. the simulator-predicted drag ratio at the value of the parameters corresponding to the final optimized airfoil shape. If there's a gap, it suggests that your NN approximation might have not been that accurate.\n\n=Minor Comments=\n* \"We also found that adding a small amount of noise too the parameters when computing gradients helped jump out of local optima\"\nGenerally, people add noise to the gradients, not the values of the parameters. See, for example, uses of Langevin dynamics as a non-convex optimization method.\n\n* You have a complicated method for constraining the parameters to be in [-0.5,0.5]. Why not just enforce this constraint by doing projected gradient descent? For the constraint structure you have, projection is trivial (just clip the values). \n\n * \"The gradient decent approach required roughly 150 iterations to converge where as the simulated annealing approach needed at least 800.\"\nThis is of course confounded by the necessary cost to construct the training set, which is necessary for the gradient descent approach. I'd point out that this construction can be done in parallel, so it's less of a computational burden.\n\n* I'd like to hear more about the effects of different parametrizations of the airfoil surface. You optimize the coefficients of a polynomial. Did you try anything else?\n\n* Fig 6: What does 'clean gradients' mean? Can you make this more precise?\n\n* The caption for Fig 5 should explain what each of the sub figures is.\n\n\n",
"1. This is a good application paper, can be quite interesting in a workshop related to Deep Learning applications to physical sciences and engineering\n2. Lacks in sufficient machine learning related novelty required to be relevant in the main conference\n3. Design, solving inverse problem using Deep Learning are not quite novel, see\nStoecklein et al. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data. Scientific Reports 7, Article number: 46368 (2017).\n4. However, this paper introduces two different types of networks for \"parametrization\" and \"physical behavior\" mapping, which is interesting, can be very useful as surrogate models for CFD simulations \n5. It will be interesting to see the impacts of physics based knowledge on choice of network architecture, hyper-parameters and other training considerations\n6. Just claiming the generalization capability of deep networks is not enough, need to show how much the model can interpolate or extrapolate? what are the effects of regulariazations in this regard? "
] | [
5,
7,
4
] | [
4,
4,
5
] | [
"iclr_2018_ByaQIGg0-",
"iclr_2018_ByaQIGg0-",
"iclr_2018_ByaQIGg0-"
] |
iclr_2018_HyDMX0l0Z | Towards Effective GANs for Data Distributions with Diverse Modes | Generative Adversarial Networks (GANs), when trained on large datasets with diverse modes, are known to produce conflated images which do not distinctly belong to any of the modes. We hypothesize that this problem occurs due to the interaction between two facts: (1) For datasets with large variety, it is likely that the modes lie on separate manifolds. (2) The generator (G) is formulated as a continuous function, and the input noise is derived from a connected set, due to which G's output is a connected set. If G covers all modes, then there must be some portion of G's output which connects them. This corresponds to undesirable, conflated images. We develop theoretical arguments to support these intuitions. We propose a novel method to break the second assumption via learnable discontinuities in the latent noise space. Equivalently, it can be viewed as training several generators, thus creating discontinuities in the G function. We also augment the GAN formulation with a classifier C that predicts which noise partition/generator produced the output images, encouraging diversity between each partition/generator. We experiment on MNIST, celebA, STL-10, and a difficult dataset with clearly distinct modes, and show that the noise partitions correspond to different modes of the data distribution, and produce images of superior quality. | workshop-papers | The paper presents a really interesting take on the mode collapse problem and argues that the issue arises because current GAN models try to model distributions with disconnected support using continuous noise and generators. The authors try to fix this issue by training multiple generators with shared parameters except for the last layer.
The paper is well written and authors did a good job in addressing some of the reviewer concerns and improving the paper.
Even though the arguments presented are novel and interesting, the reviewers agree that the paper lacks sufficient theoretical or experimental analysis to substantiate the claims made in the paper. The limited quantitative and subjective results are not always in favor of the proposed algorithm. More controlled toy experiments and results on larger datasets are needed. The central argument about "tunneling" is interesting and needs deeper investigation. Overall, the committee recommends this paper for workshop. | val | [
"HJjiZ-qef",
"H1BVVg9ez",
"HknROGcxG",
"rkoUufImz",
"S1sXdf8mz",
"ryaerzImM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summary:\n\nThe paper studies the problem of learning distributions with disconnected support. The paper is very well written, and the analysis is mostly correct, with some important exceptions. However, there are a number of claims that are unverified, and very important baselines are missing. I suggest improving the paper taking into account the following remarks and I will strongly consider improving the score.\n\nDetailed comments:\n\n- The paper is very well written, which is a big plus.\n\n- There are a number of claims in the paper that are not supported by experiments, citations, or a theorem.\n\n- Sections 3.1 - 3.3 can be summarized to \"Connected prior + continuous generator => connected support\". Thus, to allow for disconnected support, the authors propose to have a discontinuous generator. However to me it seems that a trivial and important attack to this problem is to allow a simple disconnected prior, such as a mixture between uniforms, or at least an approximately disconnected (given the superexponential decay of the gaussian pdf) of a mixture of gaussians, which is very common. The authors fail to mention this obvious alternative, or explore it further, which I think weakens the paper.\n\n- Another standard approach to attacking diverse datasets such as imagenet is adding noise in the intermediate layers of the generator (this was done by EBGAN and the Improved GAN paper by Salimans et al.). It seems to me that this baseline is missing.\n\n- Section 3.4, paragraph 3, \"the outputs corresponding to vectors linearly interpolated from z_1 to z_2 show a smooth\". Actually, this is known to not perform very well often, indeed the interpolations are done through great circles in z_1 and z_2. See https://www.youtube.com/watch?v=myGAju4L7O8 for example.\n\n- Lemma 1 is correct, but the analysis on the paragraph following is flat out wrong. The fact that a certain z has high density doesn't imply that the sample g_\\theta(z) has high density! You're missing the Jacobian term appearing in the change of variables. Indeed, it's common to see neural nets spreading appart regions of high probability to the extent that each individual output point has low density (this is due in its totallity to the fact that ||\\nabla_x g_\\theta(z)|| can be big.\n\n- Borrowing from the previous comment, the evidence to support result 5 is insufficient. I think the authors have the right intuition, but no evidence or citation is presented to motivate result 5. Indeed, DCGANs are known to have extremely sharp interpolations, suggesting that small jumps in z lead to large jumps in images, thus having the potential to assign low probability to tunnels.\n\n- A citation, experiment or a theorem is missing showing that the K of a generator is small enough in an experiment with separated manifolds. Until that evidence is presented, section 3.5 is anecdotal.\n\n- The second paragraph of section 3.6 is a very astute observation, but again it is necessary to show some evidence to verify this intuition.\n\n- The authors then propose to partition the prior space by training separate first layers for the generator in a maximally discriminative way, and then at inference time just sampling which layer to use uniformly. It's important to note that this has a problem when the underlying separated manifolds in the data are not equiprobable. 
For example, if we use N = 2 in CelebRoom but we use 30% faces and 70% bedrooms, I would still expect tunneling due to the fact that one of the linear layers has to cover both faces and bedrooms.\n\n- MNIST is known to be a very poor benchmark for image generation, and it should be avoided.\n\n- I fail to see an improvement in quality in CelebA. It's nice to see some minor form on clustering when using generator's prediction, but this has been seen in many other algorithms (e.g. ALI) with much better results long before. I have to say also the official baseline for 64x64 images in wgangp (that I've used several times) gives much better results than the ones presented in this paper https://github.com/igul222/improved_wgan_training/blob/master/gan_64x64.py .\n\n- The experiments in celebRoom are quite nice, and a good result, but we are still missing a detailed analysis for most of the assumptions and improvements claimed in the paper. It's very hard to make very precise claims about the improvements of this algorithm in such a complex setting without having even studied the standard baselines (e.g. noise at every layer of the generator, which has very public and well established code https://github.com/openai/improved-gan/blob/master/imagenet/generator.py).\n\n- I would like to point a lot of tunneling issues can be seen and studied in toy datasets. The authors may want to consider doing targeted experiments to evaluate their assumptions.\n\n=====================\n\nAfter the rebuttal I've increased my score. The authors did a great job at addressing some of the concerns. I still think there is more room to be done as to justifying the approach, dealing properly with tunneling when we're not in the somewhat artificial case of equiprobable partitions, and primarily at understanding the extent to which tunneling is a problem in current methods. The revision is a step forward in this direction, but still a lot remains to be done. I would like to see simple targeted experiments aimed at testing how much and in what way tunneling is a problem in current methods before I see high dimensional non quantitative experiments.\n\nIn the case where the paper gets rejected I would highly recommend the acceptance at the workshop due to the paper raising interesting questions and hinting to a partial solution, even though the paper may not be at a state to be published at a conference venue like ICLR.",
"The authors propose to train multiple generators (with same set of parameters), each of which with a different linear mapping in the first layer. The idea is that the final output of the generator should be a distribution whose support are disconnected. The idea does look interesting. But a lot of details is missing and needs clarification.\n\n1) A lot of technical details are missing. The main formula is given in page 6 (Sec. 4.2), without much explanation. It is also not clear how different generators are combined as a final generator to feed into the discriminator. Also how are the diversity enforced?\n\n2) The experiments are not convincing. It is stated that the new method produces results that are visually better than existing ones. But there is no evidence that this is actually due to the proposed idea. I would have liked to see some demonstration of how the different modes look like, how they are disconnected and collaborate to form a stronger generator. Even some synthetic examples could be helpful.",
"This paper concerns a potentially serious issue with current GAN based approaches. Complex data distributions, such as natural images, likely lie upon many disconnected manifolds. However standard GANs use continuous noise and generators and must therefore output a connected distribution over inputs. This constraint results in the generator outputting what the paper terms “tunnels” regions of output which connect these actually disconnected manifolds but do not correspond to actual samples from valid manifolds. \n\nThis is an important observation. The paper makes a variety of sensible claims - attributing incoherent samples to these tunnels and stating that complex datasets such as Imagenet are more likely to suffer from this problem. This behavior can indeed be observed during training on toy examples such as a 2d mixture of gaussians. However it is an open question how important this issue is in practice and the paper does not clearly separate this issue from the issue of properly modeling the complicated manifolds themselves. It is admittedly difficult to perform quantitative evaluations on generative models but much more work could be done to demonstrate and characterize the problem in practice.\n\nThe tunnel problem motivates the authors proposed approach to introducing discontinuities into the generator. Specifically the paper proposes training N different generators composed of N different linear projections of the noise distribution while sharing all further layers. A projection is chosen uniformly at random during training/sampling. An additional extension adds a loss term for the discriminator/generator to encourage predictability and thus diversity of the projection layers and improves results significantly. \n\nThe only experimental results presented are qualitative analysis of samples by the authors. This is a very weak form of evidence suffering from bias as the evaluations are not performed blinded and are of a subjective nature. If the paper intends to present experimental results solely on sample quality then, blinded and aggregated human judgments should be expected. As a reader, I do agree that qualitatively the proposed approach produces higher quality samples than the baseline on CelebRoom but I struggle to see any significant difference on celebA itself. I am uncomfortable with this state of affairs and feel the claims of improvements on this task are unsubstantiated.\n\nWhile discussion is motivated by known difficulties of GANs on highly varied datasets such as Imagenet, experiments are conducted on both MNIST and celebA datasets which are already well handled by current GANs. The proposed CelebRoom dataset (a 50/50 mixture of celebA and LSUN bedrooms) is a good dataset to validate the problem on but it is disappointing that the authors do not actually scale their method to their motivating example. Additionally, utilizing Imagenet would lend itself well to a more quantitative measure of sample quality such as inception score.\n\nOn the flip side, a toy experiment with known disconnected manifolds, while admittedly toy could increase confidence since it lends itself to more thorough quantitative analysis. For instance, a mixture of disconnected 2d gaussians where samples can be measured to be on or off manifold could be included.\n \nAt a high level I am not as sure as the authors on the nature of disconnected manifolds and the issue of tunnels. 
Any natural image has a large variety of transformations that can be applied to it that still correspond to valid natural images. Lighting transformations such as brightening or darkening of the image corresponds to a valid image transformations which allows for a “lighting tunnel” to connect all supposedly disjoint image manifolds through very dark/bright images. While this is definitely not the optimal way to approach the problem it is meant as a comment on the non-intuitive and poorly characterized properties of complex high dimensional data manifolds. \n\nThe motivating observation is an important one and the proposed solution appears to be a reasonable avenue to tackle the problem. However the paper lacks quantitative evidence for both the importance of the problem and demonstrating the proposed solution.",
"We thank the reviewer for their insightful critique and detailed comments.\nWe have added a revision of the paper with additional experiments, minor corrections & clarifications. We realize that there was an error in our discussion concerning unrealistic outputs in DCGANs, and we have withdrawn that section from the paper. However, we would like to point out that this does not detract our main message because this particular proof was meant to mathematically elucidate the problem of tunneling in DCGANs as an example. While our attempt to showcase the problem particularly for DCGANs stands invalidated, the rest of the general arguments set forth in the paper still hold.\nWe address other pending concerns below:\n\n1) A lot of technical details are missing. The main formula is given in page 6 (Sec. 4.2), without much explanation. It is also not clear how different generators are combined as a final generator to feed into the discriminator. Also how are the diversity enforced?\n\nA: We apologize for any obfuscation in our presentation, however, we do explain the setup in the Proposed Solutions section. We have rewritten the main formula to make it more understandable. We also address the specific queries raised here:\ni) The generators are \"combined\" by sampling uniformly from each of them. The resulting distribution is reported in the last paragraph of the Modified Loss Functions section.\nii) The diversity is enforced by adding a prediction loss to the discriminator's and generator's losses. Thus, each generator is incentivized to produce outputs which are distinguishable from the outputs of the other generators. The discriminator is incentivized to learn features which help in distinguishing between the different generators.\n\n2) The experiments are not convincing. It is stated that the new method produces results that are visually better than existing ones. But there is no evidence that this is actually due to the proposed idea. I would have liked to see some demonstration of how the different modes look like, how they are disconnected and collaborate to form a stronger generator. Even some synthetic examples could be helpful.\n\nA: Visualization (in fact even distinguishing) modes in high dimensional data is very hard, hence it is difficult to show how the partitions collaborate. We thank the reviewer for suggesting synthetic examples. We have included experiments from a popular toy setup consisting of 8 bivariate Gaussians arranged in a circle (thus 8 modes are present). We report results by running vanilla GAN, WGAN-GP, and our setup on this dataset, showing that the partitions can collaborate to cover distinct modes, which cannot be done with other setups. We have also included additional experiments on STL-10 (which is an ImageNet subset) as evidence for the efficacy of the split generator setup.",
"We thank the reviewer for their insightful critique and detailed comments.\nWe have added a revision of the paper with additional experiments, minor corrections & clarifications. We realize that there was an error in our discussion concerning unrealistic outputs in DCGANs, and we have withdrawn that section from the paper. However, we would like to point out that this does not detract our main message because this particular proof was meant to mathematically elucidate the problem of tunneling in DCGANs as an example. While our attempt to showcase the problem particularly for DCGANs stands invalidated, the rest of the general arguments set forth in the paper still hold.\nWe address other pending concerns below:\n\nQ: There are a number of claims in the paper that are not supported by experiments, citations, or a theorem.\n\nA: We shall do our best to provide any missing citations. We shall be grateful to the reviewer for directing us towards any specific unsupported claims.\n\nQ: ... it seems that a trivial and important attack to this problem is to allow a simple disconnected prior, such as a mixture between uniforms, or at least an approximately disconnected (given the superexponential decay of the gaussian pdf) of a mixture of gaussians, which is very common.\n\nA: We thank the reviewer for pointing out this omission. We did consider this alternative originally, in the form of a mixture of Gaussians with trainable parameters, but did not report it. We have included it in the revision, along with supporting experiments.\n\nQ: Lemma 1 is correct, but the analysis on the paragraph following is flat out wrong. The fact that a certain z has high density doesn't imply that the sample g_\\theta(z) has high density! You're missing the Jacobian term appearing in the change of variables ... Borrowing from the previous comment, the evidence to support result 5 is insufficient ... Section 3.4, paragraph 3, \"the outputs corresponding to vectors linearly interpolated from z_1 to z_2 show a smooth\". Actually, this is known to not perform very well often, indeed the interpolations are done through great circles in z_1 and z_2.\n\nA: We thank the reviewer for spotting this error. Our attempt was to showcase the tunneling problem specifically for an easily understood example. However, in light of the technical error, we have withdrawn this discussion from the revised submission.\n\nQ: A citation, experiment or a theorem is missing showing that the K of a generator is small enough in an experiment with separated manifolds. Until that evidence is presented, section 3.5 is anecdotal.\n\nA: We agree that there is little support for the fact that K is small enough. However, we do not claim that K is indeed small. We just propose that there is some measure of probability lost, as a function of K.\n\nQ: The second paragraph of section 3.6 is a very astute observation, but again it is necessary to show some evidence to verify this intuition.\n\nA: We believe that one way to verify this observation would be to discard label information, while holding on to the partitioning property endowed by conditioning on labels. This is precisely what our experiments with multi-partition GANs do. We request the reviewer to consider the experiments in this light.\n\nQ: The authors then propose to partition the prior space by training separate first layers for the generator in a maximally discriminative way, and then at inference time just sampling which layer to use uniformly. 
It's important to note that this has a problem when the underlying separated manifolds in the data are not equiprobable. For example, if we use N = 2 in CelebRoom but we use 30% faces and 70% bedrooms, I would still expect tunneling due to the fact that one of the linear layers has to cover both faces and bedrooms.\n\nA: We agree with the reviewer that tunneling will still occur. However, it does get reduced to some extent by our method, since one entire generator can be devoted to creating 50% bedrooms. The other generator can create 30% faces and 20% bedrooms. Thus only this part will face tunneling issues, and the first generator escapes these issues. \n\nQ: I have to say also the official baseline for 64x64 images in wgangp (that I've used several times) gives much better results than the ones presented in this paper \n\nA: We thank the reviewer for pointing out the discrepancy, and directing us to the official code. As a precaution, we reimplemented our setup using the WGAN-GP code and reconducted all experiments.\n\nQ: I would like to point a lot of tunneling issues can be seen and studied in toy datasets. The authors may want to consider doing targeted experiments to evaluate their assumptions.\n\nA: We have included experiments from a popular toy setup consisting of 8 bivariate Gaussians arranged in a circle.\nWe have also included results on STL-10, a subset of ImageNet as a step towards ImageNet complexity.",
"We thank the reviewer for their insightful critique and detailed comments.\nWe have added a revision of the paper with additional experiments, minor corrections & clarifications. We realize that there was an error in our discussion concerning unrealistic outputs in DCGANs, and we have withdrawn that section from the paper. However, we would like to point out that this does not detract our main message because this particular proof was meant to mathematically elucidate the problem of tunneling in DCGANs as an example. While our attempt to showcase the problem particularly for DCGANs stands invalidated, the rest of the general arguments set forth in the paper still hold.\nWe address other pending concerns below:\n\nQ: While discussion is motivated by known difficulties of GANs on highly varied datasets such as Imagenet, experiments are conducted on both MNIST and celebA datasets which are already well handled by current GANs. The proposed CelebRoom dataset (a 50/50 mixture of celebA and LSUN bedrooms) is a good dataset to validate the problem on but it is disappointing that the authors do not actually scale their method to their motivating example.\n\nA: We have extended our experiments to include results on STL-10, which is a subset of ImageNet. We believe that this is a step towards ImageNet level complexity.\n\nQ: On the flip side, a toy experiment with known disconnected manifolds, while admittedly toy could increase confidence since it lends itself to more thorough quantitative analysis. For instance, a mixture of disconnected 2d gaussians where samples can be measured to be on or off manifold could be included.\n\nA: We thank the reviewer for suggesting the idea of a toy experiment. We have extended our experiments to include results on a toy dataset with 8 equiprobable, concentrated Gaussian distributions. The setup is the same as in the WGAN-GP paper - 8 bivariate Gaussians with means arranged uniformly on a circle of radius 2. The covariance matrices are taken to be 0.02I. We show that our method quickly converges and covers the Gaussians, while standard GAN and WGAN-GP are unable to cover the distribution or take a long time to converge.\n\nQ: At a high level I am not as sure as the authors on the nature of disconnected manifolds and the issue of tunnels. Any natural image has a large variety of transformations that can be applied to it that still correspond to valid natural images. Lighting transformations such as brightening or darkening of the image corresponds to a valid image transformations which allows for a “lighting tunnel” to connect all supposedly disjoint image manifolds through very dark/bright images. While this is definitely not the optimal way to approach the problem it is meant as a comment on the non-intuitive and poorly characterized properties of complex high dimensional data manifolds.\n\nA: The example provided by the reviewer does connect two supposedly disjoint manifolds through very dark images. However, we would like to point out that such images will probably not be part of the \"real distribution\" of images (of faces, say). Hence it is probably not a real concern that the manifolds will intersect."
] | [
6,
4,
4,
-1,
-1,
-1
] | [
5,
3,
3,
-1,
-1,
-1
] | [
"iclr_2018_HyDMX0l0Z",
"iclr_2018_HyDMX0l0Z",
"iclr_2018_HyDMX0l0Z",
"H1BVVg9ez",
"HJjiZ-qef",
"HknROGcxG"
] |
iclr_2018_rkEtzzWAb | Parametric Adversarial Divergences are Good Task Losses for Generative Modeling | Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear.
In this paper, we argue that *adversarial learning*, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating "visually realistic" images. By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that the insights about the notions of "hard" and "easy" to learn losses can be analogously extended to adversarial divergences. We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task. | workshop-papers | Pros:
- The paper proposes interesting new ideas on evaluating generative models.
- Paper provides hints at interesting links between structured prediction and adversarial learning.
- Authors propose a new dataset called Thin-8 to demonstrate the new ideas and argue that it is useful in general to study generative models.
- The paper is well written and the authors have made a good attempt to update the paper after reviewer comments.
Cons:
- The proposed ideas are high level and the paper lacks deeper analysis.
- The demonstration that parametric divergences perform better than non-parametric divergences is interesting, but the reviewers think that the practical importance of the results is weak in comparison to previous works.
With this analysis, the committee recommends this paper for workshop. | train | [
"H1EMeWfgz",
"S1owSiOeM",
"SJvLO1WZf",
"ByMGLaMNf",
"BJYVF7TXG",
"SJlaT3nQf",
"H1pl71jQz",
"SJq0z1omM",
"rJeC1JiXM",
"rJRiJyoQf",
"HysGOaZQM",
"ryTCJFdfG",
"HJd9FudGf",
"ByBOYudfM",
"H1QeAUOfz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"public"
] | [
"This paper is in some sense a \"position paper,\" giving a framework for thinking about the loss functions implicitly used by the generator of GAN-type models. It advocates thinking about the loss in a way similar to how it is considered in structured prediction. It also proposes that approximating the dual formulation of various divergences with functions from a parametric class, as is typically done in GAN-type setups, is not only more tractable (computationally and in sample complexity) than the full nonparametric estimation, but also gives a better actual loss.\n\nOverall, I like the argument here, and think that it is a useful framework for thinking about these things. My main concern is that the practical contribution on top of Liu et al. (2017) might be somewhat limited.\n\nA few small points:\n\n- f-divergences can actually be nonparametrically estimated purely from samples, e.g. with the k-nearest neighbor estimator of https://arxiv.org/abs/1411.2045, or (for certain f-divergences) the kernel density based estimator of https://arxiv.org/abs/1402.2966. These are unlikely to lead to a practical learning algorithm, but could be mentioned in Table 1.\n\n- The discussion of MMD in the end of section 3.1 is a little off. MMD is fundamentally defined by the kernel choice; Dziugaite et al. (2015) only demonstrated that the Gaussian RBF kernel is a poor choice for MNIST modeling, while the samples of Li et al. (2015) simply by using a mixture of Gaussian kernels were much better. No reasonable fixed kernel is likely to yield good results on a harder image modeling problem, but that is a slightly different message than the one this paragraph conveys.\n\n- It would be interesting to replicate the analysis of Danihelka et al. (2017) on the Thin-8 dataset. This might help clarify which of the undesirable effects observed in the VAE model here are due to likelihood, and which due to other aspects of VAEs (like the use of the lower bound).",
"This paper introduces a family of \"parametric adversarial divergences\" and argue that they have advantages over other divergences in generative modelling, specially for structured outputs. \n\nThere's clear value in having good inductive biases (e.g. expressed in the form of the discriminator architecture) when defining divergences for practical applications. However, I think that the paper would be much more valuable if its focus shifted from presenting a new notion of divergence to deep-diving into the effect of inductive biases and presenting more specific results (theoretical and / or empirical) in structured prediction or other problems. In its current form the paper doesn't seem particularly strong for either the divergence or GAN literatures. Some reasons below:\n\n* There are no specific results on properties of the divergences, or axioms that justify them. I think that presenting a very all-encompassing formulation without a strong foundation does not add value. \n* There's abundant literature on f-divergences which show that there's a 1-1 relationship between divergences and optimal (Bayes) risks of classification problems (e.g. Reid at al. Information, Divergence and Risk for Binary Experiments in JMLR and Garcia-Garcia et al. Divergences and Risks for Multiclass Experiments in COLT). This disproves the point that the authors make that it's not possible to encode information about the final task in the divergence. If the loss for the task is proper, then it's well known how to construct a divergence which coincides with the optimal risk.\n* The divergences presented in this work are different from the above since the risk is minimised over a parametric class instead of over the whole set of integrable functions. However, practical estimators of f-divergences also reduce the optimization space (e.g. unit ball in a RKHS as in Nguyen et al. Estimating Divergence Functionals and the\nLikelihood Ratio by Convex Risk Minimization or Ruderman et al. Tighter Variational Representations of f-Divergences via Restriction to Probability Measures). So, given the lack of strong foundation for the formulation, \"parametric adversarial divergences\" feel more like estimators of other divergences than a relevant new family.\n* There are many estimators for f-divergences (like the ones cited above and many others based e.g. on nearest-neighbors) that are sample-based and thus correspond to the \"implicit\" case that the authors discuss. They don't necessarily need to use the dual form. So table 1 and the first part of Section 3.1 are not accurate.\n* The experiments are few and too specific, specially given that the paper presents a very general framework. The first experiment just shows that Wasserstein GANs don't perform well in an specific dataset and use that to validate a point about those GANs not being good for high dimensions due to their sample complexity. That feels like confirmation bias and also does not really say anything about the parametric adversarial GANs, which are the focus of the paper.\n\nIn summary, I like the authors idea to explore the restriction of the function class of dual representations to produce useful-in-practice divergences, but the paper feels a bit middle of the road. The theory is not strong and the experiments don't necessary support the intuitive claims made in the paper.",
"This paper takes some steps in the direction of understanding adversarial learning/GAN and relating GANs and structured prediction under statistical decision theory framework. \n\nOne of the main contribution of the paper is to study/analyze parametric adversarial divergences and link it with structured losses. Although, I see a value in the idea considered in the paper, it is not clear to me how much novelty does this work bring on top of the following two papers:\n\n1) S. Liu. Approximation and convergence properties of generative adversarial learning. In NIPS, 2017.\n2) S. Arora. Generalization and equilibrium in generative adversarial nets (GANs). In ICML, 2017.\n\nMost of their theoretical results seems to be already existing in literature (Liu, Arora, Arjovsky) in some form of other and it is claimed that this paper put these result in perspective in an attempt to provide a more principled view of the nature and usefulness of adversarial divergences, in comparison to traditional divergences.\n\nHowever, it seems to me that the paper is limited both in theoretical novelty and practical usefulness of these results. Especially, I could not see any novel contribution for GAN literature or adversarial divergences. \n\nI would suggests authors to clearly specify novelties and contrast their work with\n1) GAN literature: ([2] Arora's results) \n2) Adversarial divergences literature: ([1] Liu)\n\nAlso, provide more experiments to support several claims (without any rigorous theoretical justifications) made in the paper.",
"Thank you for your comment, Ilya.\n\nWe will release the Thin-8 dataset as well as the Visual Hyperplane (MNIST digits summing to 25), as soon as our submission is de-anonymized, along with the data-augmentation code (elastic deformations for Thin-8).\n\nPlease note that the Visual Hyperplane dataset is generated on-the-fly from MNIST: every time a sample is requested, a combination of 5 symbolic digits is sampled uniformly from all possible combinations that sum to 25, then a corresponding image is sampled from MNIST for each symbolic digit. Finally, the 5 images are concatenated.",
"This work gives a nice overview of different generative modelling methods (mostly GAN- and VAE-variants). It makes an interesting link to structured learning (see Section 4.1), thereby enabling the future transfer of results and techniques from the vast supervised learning literature to the field of unsupervised learning. The authors for example note that the recent work of Osokin et al. (2017) in structured prediction could explain why some GAN-types are easier to train than others and why they generate perceptually better images (Section 4.3). We would like to emphasise that this paper does not delve into the mathematical implications of the link to structured prediction: it rather discusses them at a relatively high-level. The authors however test their hypotheses on a set of original and convincing experiments. They thereby introduce an interesting new dataset, which consists of high-resolution hand-written '8'-digits. Incidentally, they show that GANs can generate high resolution images very accurately when the intrinsic dimension of the data is low, which I find a very nice observation on its own.\n\nOverall, I do understand that the rating of this paper is subject to controversy, but I think that even high-level reasonings that are not yet entirely formalised could benefit the general ML-community, especially when they are backed up by interesting empirical experiments.\n\n\nOsokin, Bach, Lacoste-Julien, On structured prediction theory with calibrated convex surrogate losses, NIPS, 2017",
"I think this is a good overview paper. It nicely summarizes a very recent line of work related to the properties of the adversarial divergences: weaker parametric divergences ('neural' or 'adversarial' divergences) are much better suited for the goals of the unsupervised generative modeling than stronger non-parametric divergences. Even though this conclusion is not new and has been made in the literature before, the authors support it with yet another interesting argument originating in the theory of structured prediction. \n\nI would say, the main novel contributions of this paper are as follows:\n(1) A general view landing the generative modeling and the structure prediction tasks in one framework. In this paper the authors use this observation to conclude that one should prefer parametric divergences over the non-parametric ones when dealing with high-dimensional data in unsupervised tasks. But I do think potentially this new point of view may lead to many other interesting discoveries. \n(2) I also like couple of experiments introduced in the paper, including the Thin-8 dataset and the 5-digit MNIST experiment. I am personally very curious to try out those in my future research. In particular, Thin-8 seems to demonstrate the \"blurriness\" effect of VAE much better than the 28x28 MNIST. I am curious if the authors are going to share the Thin-8 dataset?",
"Now, we give some of our potential contributions to the GAN literature, and more generally to the generative modeling literature:\n- we give further experimental evidence that parametric divergences can be better than maximum-likelihood for modeling structured data. We consider two tasks: modeling high-dimensional 512x512 data lying on a low-dimensional manifold (Thin-8 dataset), and modeling data with high-level abstract structure/constraints (visual hyperplane task). On both those tasks, we show that to train the same generator, minimizing a WGAN-GP parametric divergence yields better samples than optimizing objectives related to maximum likelihood (VAE evidence lower-bound).\n- in the GAN literature, parametric divergences are commonly referred to as \"lower bounds\" or \"variational lower bounds\" of their corresponding nonparametric divergences (see for instance the f-GAN paper by Nowozin et al. (2017), https://arxiv.org/pdf/1606.00709.pdf). We think that the terminology is misleading, and we show in this paper that parametric divergences are not to be thought merely as a lower-bound of the corresponding nonparametric divergences. First, statistics-wise, parametric divergences have been shown to have very different sample complexities than nonparametric divergences. Moreover, if the final goal is generative modeling, parametric divergences can be more meaningful objectives; they have been shown to only match the moments that the discriminator family is able to represent, which in image generation seems to be enough to generate visually appealing samples. On the contrary, most nonparametric divergences are strict and enforce matching all moments, which is unnecessarily constraining, and might actually make the objective harder to learn. Finally, we illustrated experimentally that those differences do matter, by showing that using an objective derived from the true (nonparametric) Wasserstein yields worse results than using a parametric Wasserstein in high dimensions.\n- to the best of our knowledge, we have not found extensive studies in the GAN literature of the behavior of parametric divergences with respect to transformations of the distributions. This is important because in GANs, those divergences are minimized using gradient descent. Thus a divergence suitable for generative modeling should vary smoothly with respect to sensible transformations of the dataset (such as deformations, for images) in order to provide a meaningful learning signal to the generator. Therefore, we carry out preliminary experiments to assess the invariance properties of some parametric divergences to simple transformations. One should note that although such simple transformations are not completely representative of the ones induced by a GAN during the course of learning, it is not obvious how to design more complex transformations, such as ones that depart from the data manifold (other than noise, or image blurring).\n\nEven if we are not yet capable of deriving a rigorous theory, we do believe that parametric divergences are strong candidates to consider in generative modeling, both as learning objectives and as evaluation metrics. As pointed out in Colin Raffel’s comment, our paper is laying some of the groundwork for designing more meaningful and practical objectives in generative modeling. 
We hope that our work helps other researchers get a better perspective on generative modeling, and acts as a reminder to always keep the final task, which is our true goal, in mind.\n\n\nWe hope we have addressed the reviewer's concerns and we thank the reviewer again for taking the time to review our paper.",
"We thank the reviewer for taking the time to review our paper.\n\nWe now answer the reviewer’s comments and questions.\n\nR: “Most of their theoretical results seems to be already existing in literature (Liu, Arora, Arjovsky) in some form of other and it is claimed that this paper put these result in perspective in an attempt to provide a more principled view of the nature and usefulness of adversarial divergences, in comparison to traditional divergences.”\n\nConcerning the difference with the work of Liu et al. (2017), we refer the reviewer to Shuang Liu's comment \"Mathematical View vs. Philosophical View\", as well as our comment \"Difference with Liu et al. (2017) and Arora et al. (2017)\". Concerning the difference with the work of Arora et al. (2017), we also refer the reviewer to our comment \"Difference with Liu et al. (2017) and Arora et al. (2017)\".\nWe have updated our related work section to better contrast our work with those works.\n\nThe bottom line is that those works focus on specific mathematical properties of parametric divergences. Arora et al. (2017) focus on statistical efficiency of parametric divergences. Liu et al. (2017) focus on topological properties of adversarial divergences and the mathematical interpretation of minimizing neural divergences (in a nutshell: matching moments).\n\nHowever, neither of those works attempts to study the meaning and practical properties of parametric divergences. In our paper, we start by introducing the notion of final task, which is our true goal, but is often difficult to formalize and hard to learn from directly. We then give arguments why parametric divergences can be good approximations/surrogates for the final task at hand. To do that, we review results from the literature, establish links with structured prediction theory, and perform a series of preliminary experiments to better understand parametric divergences by attempting to answer the following questions. How are they affected by various factors: discriminator family, transformations of the dataset? How important is the sample complexity? How good are they at dealing with challenging datasets such as high-dimensional data, or data with abstract structure and constraints?\n\nR: “However, it seems to me that the paper is limited both in theoretical novelty and practical usefulness of these results. Especially, I could not see any novel contribution for GAN literature or adversarial divergences.”\n\nA: Here are some potential contributions to the adversarial divergence literature:\n- it is often believed in the GAN literature that weaker losses (in the topological sense) are easier to learn than stronger losses. There has indeed been work in the adversarial divergence literature on the relative strength and convergence properties of adversarial divergences. However, to the best of our knowledge, there is no rigorous theory that explains why weaker losses are easier to learn. By relating adversarial divergences used in generative modeling with the task losses used in structured prediction, we put into perspective some theoretical results from structured prediction theory that actually show and quantify how the strength of the objective affects the ease of learning the model. 
Because those results are consistent with the intuition that weaker divergences are easier to learn, they give additional reasons to think that this intuition is correct.\n\nWe take this opportunity to emphasize that it is highly non-trivial to derive a rigorous theory on quantifying which divergences are better for learning. Unlike structured prediction, where the task loss is also used for evaluating the learned model, there is no one good way of evaluating generative models yet. Because a rigorous theory should study the influence of minimizing a divergence on minimizing the evaluation metric, any theory that is derived on divergences can only be as meaningful as the evaluation metric considered.\n",
"\nR: \"The discussion of MMD in the end of section 3.1 is a little off. MMD is fundamentally defined by the kernel choice; Dziugaite et al. (2015) only demonstrated that the Gaussian RBF kernel is a poor choice for MNIST modeling, while the samples of Li et al. (2015) simply by using a mixture of Gaussian kernels were much better. No reasonable fixed kernel is likely to yield good results on a harder image modeling problem, but that is a slightly different message than the one this paragraph conveys.\"\n\nA: We updated section 3.1 to make it clear that MMD is fundamentally dependent on the choice of kernels. In particular we emphasize that the fact that MMD does not perform well for generative modeling is because generic kernels are used. We actually provide a more complete discussion of the choice of kernels in Section 3.2 \"Ability to Integrate Desirable Properties for the Final Task\", where we discuss the possibility to learn the kernel based on data instead of hand-defining it. That discussion was motivated by the possibility of integrating more knowledge about the final task into the kernel.\n\nR: \"It would be interesting to replicate the analysis of Danihelka et al. (2017) on the Thin-8 dataset. This might help clarify which of the undesirable effects observed in the VAE model here are due to likelihood, and which due to other aspects of VAEs (like the use of the lower bound).\"\n\nA: Thank you for this interesting experiment idea. Indeed RealNVP is attractive as a generative model for comparing maximum likelihood and parametric divergences because the likelihood can be evaluated explicitly. However, we think that such an experiment is currently out of the scope of the paper because it is quite non-trivial for the following reasons.\n\nOne reason is that there are no obvious extensions of RealNVP to convolutional architectures, which are arguably the best architecture to deal efficiently with high-resolution images. However, there are other generators with also feature explicit likelihood. Probably one of the best known architectures with explicit likelihood for image generation is the PixelCNN (Van den Oord et al. (2016), https://arxiv.org/pdf/1601.06759.pdf). Training them using maximum-likelihood is not a problem because teacher-forcing is used, which allows to parallelize the process. However training them using a discriminator requires generating samples, which is extremely slow, because images have to be generated pixel after pixel.\n\n\nWe hope we have addressed the reviewer's concerns and we thank the reviewer again for their constructive review.",
"We thank the reviewer for taking the time to review our paper, and for evaluating our paper as a position paper - which is indeed what we intended our paper to be.\n\nConcerning the difference with the work of Liu et al. (2017), we refer the reviewer to Shuang Liu's comment \"Mathematical View vs. Philosophical View\", as well as our comment \"Difference with Liu et al. (2017) and Arora et al. (2017)\". We have updated our related work section to better contrast our work with those works (see the revised version).\n\nThe bottom line is that while Liu et al. (2017) concentrate more on the mathematical properties of parametric adversarial divergences, they do not attempt to study the meaning and practical properties of parametric divergences. In our paper, we start by introducing the notion of final task, which is our true goal, but is often difficult to formalize and hard to learn from directly. We then give arguments why parametric divergences can be good approximations/surrogates for the final task at hand. To do that, we review results from the literature, establish links with structured prediction theory, and perform a series of preliminary experiments to better understand parametric divergences by attempting to answer the following questions. How are they affected by various factors: discriminator family, transformations of the dataset? How important is the sample complexity? How good are they at dealing with challenging datasets such as high-dimensional data, or data with abstract structure and constraints?\n\nAs you have noted, we are not claiming that we have a complete theory of parametric divergences. Rather, we are proposing new ways to think of parametric divergences, and more generally of the (final) task of generative modeling.\n\nWe now answer the reviewer's questions:\n\nR: \"f-divergences can actually be nonparametrically estimated purely from samples, e.g. with the k-nearest neighbor estimator of https://arxiv.org/abs/1411.2045, or (for certain f-divergences) the kernel density based estimator of https://arxiv.org/abs/1402.2966. These are unlikely to lead to a practical learning algorithm, but could be mentioned in Table 1.\"\n\nA: Thank you for pointing out that there is a rich literature on estimating f-divergences from samples. We have updated section 3.1 to include some of those techniques. However, one should note that those techniques all make additional (implicit or explicit) assumptions on the densities. We updated the table caption and Section 3.1 to reflect that.",
"I am writing this comment because I enjoyed this paper and was surprised to see that the reviews were low.\n\nFirst and foremost, I read this paper as a \"position paper\" (as AnonReviewer2 noted) - that is, it is making a philosophical argument about how to evaluate generative models. Specifically, it's arguing that using parametric adversarial divergences (e.g. the critic/discriminator objective functions used in GAN training) as an evaluation metric is an interesting and potentially useful idea. The argument amounts to\n1. Parametric adversarial divergences have good sample efficiency and are straightforward to compute compared to their non-parametric/non-adversarial counterparts (Table 1).\n2. It's easy to integrate prior knowledge about what you want to measure via the design of the critic/discriminator architecture (as they write, \"The form of the discriminator may determine what aspects the divergence will be sensitive or blind to.\")\n3. It's kind of like choosing a good \"structured loss\" for structured prediction. Weakening the structured loss allows learning.\nThey back up these points with a few experiments, showing e.g. that a GAN discriminator can learn unusual characteristics of the task that a VAE can't.\n\nI like this paper because it presents a strong and thorough argument in favor of considering adversarial divergences for generative modeling. After reading the paper I was personally sold on the idea (at least to the extent that I'm interesting in seeing practical applications and implementations of it). While this paper *relies* on some prior theory on GANs, the idea and goals of the paper are distinct from those works -- namely, this paper is trying to convince people to adopt a new way of thinking about generative modeling evaluation, not prove properties about adversarial divergences. Separately, this paper does not provide a lot of practical advice, but I see this paper as a prerequisite to adopting a standardized way of using adversarial divergences for evaluation. It provides a compelling argument that future work can refer to when proposing how to utilize these ideas in practice.",
"We are posting this comment since two reviewers have asked us to clarify the difference between our work and:\n\n(1) S. Liu et al. Approximation and convergence properties of generative adversarial learning. In NIPS, 2017.\n(2) S. Arora et al. Generalization and equilibrium in generative adversarial nets (GANs). In ICML, 2017.\n\nThe following is an extract of our updated related work section.\n\nArora et al. (2017) argue that analyzing GANs with a nonparametric (optimal discriminator) view does not really make sense, because the usual nonparametric divergences considered have bad sample complexity. They also prove sample complexities for parametric divergences. Liu et al. (2017) prove under some conditions that globally minimizing a neural divergence is equivalent to matching all moments that can be represented within the discriminator family. They unify parametric divergences with nonparametric divergences and introduce the notion of strong and weak divergence. However, both those works do not attempt to study the meaning and practical properties of parametric divergences. In our work, we start by introducing the notion of final task, and then discuss why parametric divergences can be good task losses with respect to usual final tasks. We also perform experiments to determine properties of some parametric divergences, such as invariance, ability to enforce constraints and properties of interest, as well as the difference with their nonparametric counterparts. Finally, we unify structured prediction and generative modeling, which could give a new perspective to the community.",
"R: \" The divergences presented in this work are different from the above since the risk is minimised over a parametric class instead of over the whole set of integrable functions. However, practical estimators of f-divergences also reduce the optimization space (e.g. unit ball in a RKHS as in Nguyen et al. Estimating Divergence Functionals and the\nLikelihood Ratio by Convex Risk Minimization or Ruderman et al. Tighter Variational Representations of f-Divergences via Restriction to Probability Measures). So, given the lack of strong foundation for the formulation, \"parametric adversarial divergences\" feel more like estimators of other divergences than a relevant new family.\"\n\nA: Whether parametric divergences are a new family or simply estimators is more of an opinion. However, our opinion is that parametric divergences are a new family because they have very different sample complexities than their nonparametric counterparts, and because they will only match the moments that the discriminator family can represent.\n\nR: \"There are many estimators for f-divergences (like the ones cited above and many others based e.g. on nearest-neighbors) that are sample-based and thus correspond to the \"implicit\" case that the authors discuss. They don't necessarily need to use the dual form. So table 1 and the first part of Section 3.1 are not accurate.\"\n\nA: This is true, thanks for pointing it out. However, all these methods make additional assumptions about the densities, some of which are conceptually similar to smoothing the density, which makes them different from the true f-divergence. We updated Section 3.1 to reflect that.\n\nR: The experiments are few and too specific, specially given that the paper presents a very general framework. The first experiment just shows that Wasserstein GANs don't perform well in an specific dataset and use that to validate a point about those GANs not being good for high dimensions due to their sample complexity. That feels like confirmation bias and also does not really say anything about the parametric adversarial GANs, which are the focus of the paper.\n\nA: For the first experiment [Sample Complexity], it is well known that models trained with parametric divergences have no trouble generating MNIST and CIFAR. See for instance the DCGAN paper (https://pdfs.semanticscholar.org/3575/6f711a97166df11202ebe46820a36704ae77.pdf) and the WGAN-GP paper (https://arxiv.org/pdf/1704.00028.pdf).\nOn the contrary, using the true Wasserstein yields bad results on CIFAR. Our point is to raise awareness that parametric Wasserstein is NOT nonparametric Wasserstein, by showing that the resulting samples are much worse.\nThe second experiment [Robustness to Transformations] focuses on understanding actual properties of parametric divergences, by seeing how robust they are to simple transformations such as rotations and additive noise.\nThe third and fourth experiment compare parametric divergences with nonparametric divergences by taking a popular parametric divergence: the parametric Wasserstein and comparing with the most popular nonparametric divergence: the KL. Using them to train exactly the same generator architectures, we see that KL fails as the resolution goes from 32x32 to 512x512, while the parametric Wasserstein yields results with comparable quality to the training set. 
Similarly on the task of generating sequence of 5 digits that sum to 25, we see that the parametric Wasserstein is better at enforcing the constraint than the KL.\n\nTo sum up, our experiments show the difference between parametric and nonparametric divergence, study invariance properties of parametric divergences, and compare how well parametric and nonparametric divergences can deal with high-dimensionality and enforcing constraints.\n\nIt's true we do not have strong theory. But as we stated in the beginning, it's very challenging to prove that parametric divergences are a good proxy for human perception, when mathematically defining human perception is itself challenging. So the best we can do is to study the properties of the parametric divergences.\n\nWe hope we have addressed the reviewer's concerns and thank the reviewer again for their time.",
"We thank the reviewer for their long and thorough review.\n\nBefore we start addressing the reviewer's concerns, we would like to make it clear that we are a position paper. We are not claiming to introduce a new family of divergences. Rather, we are giving the name of \"parametric adversarial divergence\" to the divergences which have been used recently in GANs, and attempting to better understand why they are good candidates for generative modeling.\n\nWe now answer the reviewer's points:\n\nR: \"There are no specific results on properties of the divergences, or axioms that justify them. I think that presenting a very all-encompassing formulation without a strong foundation does not add value.\"\n\nA: It's actually very hard to obtain theoretical results for our work. What we claim is that parametric divergences can be a good approximation of our final task, which in the case of generation, is to generate realistic and diverse samples. It is not something that can be easily evaluated or proved: it is notoriously difficult to mathematically define a perceptual loss, so it's not obvious how to prove rigorously that parametric divergences approximate the perceptual loss well, other than by looking at samples, or using meaningful but debatable proxies such as inception score.\n\nR: \"There's abundant literature on f-divergences which show that there's a 1-1 relationship between divergences and optimal (Bayes) risks of classification problems (e.g. Reid at al. Information, Divergence and Risk for Binary Experiments in JMLR and Garcia-Garcia et al. Divergences and Risks for Multiclass Experiments in COLT). This disproves the point that the authors make that it's not possible to encode information about the final task in the divergence. If the loss for the task is proper, then it's well known how to construct a divergence which coincides with the optimal risk.\"\n\nA: What you are referring to is the equivalence between computing a divergence and solving a classification problem. This is seen in GANs as the discriminator is solving a classification problem with the appropriate loss between two distributions p and q, the loss of which corresponds to the divergence between p and q. In fact, by choosing the appropriate losses one can recover any f-divergence and any IPM (it corresponds to choosing the Delta in equation 1 of our paper).\nHowever the binary loss here is very different from what we call task loss or final loss. The final loss is what we actually care about (images that respect perspective, that are not blurry, made of full objects). Instead the loss you are referring to is a loss that defines the binary classification problem between p and q. We updated the paper to include your references. Originally we were based on the work of Sriperumbudur et al 2012. Thank you for helping us complete the references.\n\n",
"I just want to compare this work with the following two papers:\n\n(1) S. Liu et al. Approximation and convergence properties of generative adversarial learning. In NIPS, 2017.\n(2) S. Arora et al. Generalization and equilibrium in generative adversarial nets (GANs). In ICML, 2017.\n\nSpecifically, I will compare this work with the \"approximation\" part of (1) and \"generalization\" part of (2).\n\n(a) The \"approximation\" part of (1) basically shows the global minima of parametric adversarial divergences are those distributions that are indistinguishable from the target distribution under certain statistical tests.\n(b) The \"generalization\" part of (2) basically shows the sample complexity scales polynomially with the number of parameters for parametric adversarial divergences, whereas the sample complexity is usually exponential or even infinite for non-parametric adversarial divergences.\n\nNow, this work tries to understand (a) and (b) in a more philosophical way. The argument is as follows: humans do not need an exponentially large amount of samples to learn, therefore the loss function adopted by a human must be \"parametric\". Furthermore, the loss function adopted by humans are usually induced by a certain set of criteria. Therefore, using parametric adversarial divergences can both reduce the sample complexity and encourage prediction results to be more close to what humans would make."
] | [
6,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkEtzzWAb",
"iclr_2018_rkEtzzWAb",
"iclr_2018_rkEtzzWAb",
"SJlaT3nQf",
"iclr_2018_rkEtzzWAb",
"iclr_2018_rkEtzzWAb",
"SJq0z1omM",
"SJvLO1WZf",
"rJRiJyoQf",
"H1EMeWfgz",
"iclr_2018_rkEtzzWAb",
"iclr_2018_rkEtzzWAb",
"ByBOYudfM",
"S1owSiOeM",
"iclr_2018_rkEtzzWAb"
] |
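The on-the-fly construction of the Visual Hyperplane data described in the authors' comment in the record that closes above (sample a combination of 5 symbolic digits uniformly from all combinations summing to 25, draw a random MNIST image for each digit, and concatenate the images) can be sketched as follows. This is a reconstruction from that description; `mnist_images_by_digit`, an assumed mapping from digit class to a list of 28x28 MNIST images, is not something the authors provide.

```python
import itertools
import random
import numpy as np

def build_combinations(target_sum=25, n_digits=5):
    """All ordered digit tuples (0-9 each) whose entries sum to target_sum."""
    return [c for c in itertools.product(range(10), repeat=n_digits) if sum(c) == target_sum]

def sample_visual_hyperplane(mnist_images_by_digit, combinations, rng=random):
    """Draw one training example: a uniformly sampled digit combination summing
    to 25, each digit rendered by a random MNIST image and concatenated widthwise."""
    combo = rng.choice(combinations)
    images = [rng.choice(mnist_images_by_digit[d]) for d in combo]
    return np.concatenate(images, axis=1), combo   # shape (28, 28 * 5)

# combos = build_combinations()
# image, digits = sample_visual_hyperplane(mnist_images_by_digit, combos)
```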
iclr_2018_r1YUtYx0- | Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms | The question why deep learning algorithms generalize so well has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, stability or sparseness, have not provided complete explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus on the robustness approach (Xu & Mannor, 2012), i.e., if the error of a hypothesis will not change much due to perturbations of its training examples, then it will also generalize well. As most deep learning algorithms are stochastic (e.g., Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor, and introduce a new approach – ensemble robustness – that concerns the robustness of a population of hypotheses. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbations is bounded in average over training examples. Moreover, an algorithm may be sensitive to some adversarial examples (Goodfellow et al., 2015) but still generalize well. To support our claims, we provide extensive simulations for different deep learning algorithms and different network architectures exhibiting a strong correlation between ensemble robustness and the ability to generalize. | workshop-papers | The paper proposes a new way to understand why neural networks generalize well. They introduce the concept of ensemble robustness and try to explain DNN generalization based on this concept. The reviewers feel the paper is a bit premature for publication in a top conference although this new way of explaining generalization is quite interesting. | train | [
"rJhcwfLgf",
"SkLRjndlG",
"r1BvYvAeG",
"r1oRSIn7M",
"Hyon4U3Qz",
"Hyb7ir3Xf",
"Hk6PtE2XM",
"ry3UJraMG",
"BJ4tCEaGM",
"rJ1R2E6Mz",
"B1yPYVpMf",
"H1T7gLalM",
"SJx-TCsxz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Summary:\nThis paper presents an adaptation of the algorithmic robustness of Xu&Mannor'12 to a notion robustness of ensemble of hypothesis allowing the authors to study generalization ability of stochastic learning algorithms for Deep Learning Networks. \nGeneralization can be established as long as the sensitiveness of the learning algorithm to adversarial perturbations is bounded.\nThe paper presents learning bounds and an experimental showing correlation between empirical ensemble robustness and generalization error.\n\nQuality:\nGlobally correct\n\nClarity:\nPaper clear\n\nOriginality:\nLimited with respect to the original definition of algorithmic robustness\n\nSignificance:\nThe paper provides a new theoretical analysis for stochastic learning of Deep Networks but the contribution is limited in its present form.\n\n\nPros:\n-New theoretical study for DL algorithms \n-Focus on adversarial learning\nCons\n-I find the contribution a bit limited\n-Some aspects have to be precised/more argumented\n-Experimental study could have been more complete\n\n\nComments:\n---------\n\n\n*About the proposed framework.\nThe idea of taking a max over instances of partition C_i (Def 3) already appeared in the proof of results of Xu&Mannor, and the originality of the contribution is essentially to add an expectation over the result of the algorithm.\n\n\nIn Xu&Mannor paper, there is a notion of weak robustness that is proved to be necessary and sufficient to generalize. The contribution of the authors would be stronger if they can discuss an equivalent notion in their context.\n\nThe partition considered by the framework is never discussed nor taken into account, while this is an important aspect of the analysis. In particular, there is a tradeoff between \\epsilon(s) and K: using a very fine tiling it is always possible to have a very small \\epsilon(s) at the price of a very large K (if you think of a covering number, K can be exponential in the size of the tiling and hard to calculate). \nIn the context of adversarial examples, this is actually important because it can be very likely that the adversarial example can belong to a partition set different from the set the original example belong to. \nIn this context, I am not sure to understand the validity of the framework because we can then compare 2 instances of different set which is outside of the framework. \nSo I wonder if the way the adervarial examples are generates should be taken into account for the definition of the partition.\nAdditionnally, the result is given in the contect of IID data, and with a multinomial distribution according to the partition set - adversarial generation can violate this IID assumption.\n\nIn the experimental setup, the partition set is not explained and we have no guarantee to compare instances of the same set. Nothing is said about $r$ and its impact on the results. This is a clear weak aspect of the experimental analysis\nIn the experimental setup, as far as I understand the setup, I find the term \"generalization error\" a bit abusive since it is actually the error on the test set. \nUsing cross validation or considering multiple training/test sets would be more appropriate.\n\n\nIn the proof of Lemma 2, I am not sure to understand where the term 1/n comes from in the term 2M^2/2 (before \"We then bound the term H as follows\")\n\n",
"This paper proposes a study of the generalization ability of deep learning algorithms using an extension of notion of stability called ensemble robustness. It requires that algorithm is stable on average with respect to randomness of the algorithm. The paper then gives bounds on the generalization error of a randomized algorithm in terms of stability parameter and provides empirical study attempting to connect theory with practice.\n\nWhile I believe that paper is trying to tackle an important problem and maybe on the right path to find notions that are responsible for generalization in NNs, I believe that contributions in this work are not sufficiently strong for acceptance.\n\nFirstly, it should be noted that the notion of generalization considered in this work is significantly weaker than standard notions of generalization in learning theory since (a) results are not high probability results (b) the bounds are with respect to randomness of both sample and sample (which gives extra slack).\n\nStabiltiy parameter epsilon_bar(n) is not studied anywhere. How does it scale with sample size n for standard algorithms? How do we know it does not make bounds vacuous?\n\nIt is only allude it to that NN learning algorithms may poses ensemble robustness. It is not clear and not shown anywhere that they do. Indeed, simulations demonstrate that this could be the case but this still presents a significant gap between theory and practice (just like any other analysis that paper criticizes in intro).\n\nMinor:\n\n1. After Theorem 2: \"... can substantially improve ...\" not sure if improvement is substantial since it is still not a high probability bound.\n\n2. In intro, \"Thus statistical learning theory ... struggle to explain generalization ...\". Note that the work of Zhang et al does not establish that learning theory struggle to explain generalization ability of NNs since results in that paper do not study margin bounds. To this end refer to some recent work Bartlett et al, Cortes et al., Neyshabur et al.\n\n3. Typos in def. 3. missing z in \"If s \\in C_i...\". No bar on epsilon.",
"The paper studied the generalization ability of learning algorithms from the robustness viewpoint in a deep learning context. To achieve this goal, the authors extended the notion of the (K, \\epsilon)- robustness proposed in Xu and Mannor, 2012 and introduced the ensemble robustness. \n\nPros: \n\n1, The problem studied in this paper is interesting. Both robustness and generalization are important properties of learning algorithms. It is good to see that the authors made some efforts towards this direction.\n2, The paper is well shaped and is easy to follow. The analysis conducted in this paper is sound. Numerical experiments are also convincing. \n3, The extended notion \"ensemble robustness\" is shown to be very useful in studying the generalization properties of several deep learning algorithms. \n\nCons: \n\n1, The terminology \"ensemble\" seems odd to me, and seems not to be informative enough.\n2, Given that the stability is considered as a weak notion of robustness, and the fact that the stability of a learning algorithm and its relations to the generalization property have been well studied, in my view, it is quite necessary to mention the relation of the present study with stability arguments. \n3, After Definition 3, the author stated that ensemble robustness is a weak notion of robustness proposed in Xu and Manner, 2012. It is better to present an example here immediately to illustrate. ",
"\"Regarding epsilon_bar(n): While the study of epsion_bar(n) is hard in the context of general algorithms and deep networks, it can be done for simpler learning algorithms. For example, for linear SVM, \\epsilon_bar(n) will be relevant to the covering number (robustness and regularization of support vector machines, Xu et. al. 09).\"\n\n--> But how do we now the bound is not trivial in case of deep nets?\n\n\n\"Regarding high probability bounds: Can the reviewer explain what he means by these two comments? (a) Our theorems are given in the PAC epsilon/delta formulation which is, in fact, a high probability bound. (b) We do not understand what the reviewer means by the randomness of both sample and sample. \"\n\n--> Standard results in ML are logarithmic in 1/delta, these results are only linear in 1/delta which is a very weak result.\n\n--> I meant randomness of sample and algorithms.\n",
"Logarithmic dependence on 1/delta is what is understood under \"high probability\". This is standard in ML theory see for instance definition of PAC learning.",
"\"Robustness and Stability are different properties, to see that observe that robustness is a global property while stability is local, and that robustness concerns properties of a single hypothesis, while stability concerns two (one for the original data set and one for the modified one). \"\n\nI don't think \"global vs local\" is the best way to distinguish robustness from stability. Both robustness and stability bound the affect of local perturbations. The key difference, IMO, is that robustness deals with perturbations of the test example, whereas stability deals with perturbations of a single training example. Robustness also constrains the test example perturbations to be within a certain partition of the instance space, whereas stability allows the perturbations to range over the entire instance space.\n\nFor algorithms that are both robust and stable, which analysis yields better bounds? Since the paper is concerned with deep learning, consider the stability results in Hardt et al. (2016) or Kuzborskij & Lampert (2017) for learning with non-convex objectives. If one were to combine these results with Elisseeff et al.'s generalization bounds, would the resulting bounds be better or worse than the ones in this paper? I'm just saying that more comparison to related work would make the paper stronger.",
"With all due respect, I feel that this review is mistaken about the bounds not holding with high probability. Theorems 1 & 2 clearly state that the bounds hold \"with probability at least $1 − \\delta$ with respect to the random draw of the s and h.\"\n\nThat said, the bounds are _linear_ in $1/\\delta$, which is not ideal; it would be stronger if they were logarithmic in $1/\\delta$. (Note: Theorem 2 has a term that is linear in $1/\\delta$, which becomes the dominating term.)\n\n",
"We thank the reviewer for his feedback. \n\nRegarding epsilon_bar(n): While the study of epsion_bar(n) is hard in the context of general algorithms and deep networks, it can be done for simpler learning algorithms. For example, for linear SVM, \\epsilon_bar(n) will be relevant to the covering number (robustness and regularization of support vector machines, Xu et. al. 09).\n\nRegarding the robustness of NNs:\nWe agree it is hard to show explicitly that NNs are robust. This is exactly the goal of this paper, trying to bridge the gap between theory and practice. We want to emphasize that the goal of this paper is not to criticize other methods, but to provide a different perspective. \n\nRegarding high probability bounds: Can the reviewer explain what he means by these two comments? (a) Our theorems are given in the PAC epsilon/delta formulation which is, in fact, a high probability bound. (b) We do not understand what the reviewer means by the randomness of both sample and sample. \n\nAll minor comments that the reviewers mentioned were fixed in the pdf. \n\n\n\n\n\n",
"We thank the reviewer for pointing these issues out and agree that they were not explained well. We have revised the paper to explain the data partitioning principles better and address here the main points the reviewer raises. \n\nRegarding partition for sets: \nGenerally, there is a trade-off between epsilon(s) and, K, the larger K is the smaller \\epsilon(s) due to the finer tiling as the reviewer suggested. This tradeoff is also evident in the bound of Theorem 1, where the right-hand side increases with K and \\epsilon(s) so there is a minimum point (see Corollaries 4&5 in Xu&Mannor 2012 for choosing the minimal K). \n \nHowever, in the context of Deep Neural Networks, we chose k=n (training data size), to be an implicit partition such that each set contains a small R2 ball around each training example, without specifying the partition explicitly. We then approximate the loss in this partition using the adversarial example, i.e., approximating the maximal loss in the partition using the adversarial example. While this approximation is loose, we show that empirically, it is correlated with generalization. Under this partition, there is no violation of the IID assumption for general stochastic algorithms, but it is violated in the case of adversarial training as the reviewer suggested. However, simulations suggest that correlation exists for both.\n\n\nRegarding weak robustness: We are more interested in the standard generalizability and found weak robustness to be out of the scope of this work. We do believe however that similar bound can be derived for weak robustness of randomized algorithms using the same techniques we used in this work. ",
"We thank the reviewer for his feedback. \n\nRegarding the 3 cons the reviewer mentioned:\n\n1. We agree that a better terminology may be found, at the moment we decided to stick to the original one. \n2. We have addressed point two in the forum and in the new version of the pdf (related work Section). \n3. Good point. We moved this discussion to after theorem two and revisited the discussion after theorem 2 to explain this issue better.\n \n",
"We thank the reviewers for their constructive feedback, which we found very helpful to improve the quality of this work. For each of the reviewers, a personal response is posted in the forum. Also, a new revision of the paper is available following the reviewer remarks. For the reviewer convenience, additions/corrections are marked in a red color in the text to distinguish new text from old one.\n\nHere, we would like to emphasize the contributions of this paper and its importance to the ICLR community as we see it. This paper revisits the robustness=generalization theory (Xu & Mannor, 2012) in the context of deep networks. We introduce new theorems that deal with stochastic algorithms (the most deployed ones) and provide a complimentary empirical study on the connection between robustness and generalization of Deep Neural Networks. We provide for the first time, an empirical study on the global (as we define it) robustness of the Deep Neural Networks, and its connections to generalization and adversarial examples, which has been puzzling the Deep Learning community lately. Moreover, we have shown that taking an expectation over robustness indeed improves the correlation between robustness and generalization, which we later demonstrate how to evaluate efficiently through Bayesian networks. Finally, we believe that the study of different approaches for generalization of Deep Neural Nets is of high importance, and we believe that this work makes an interesting step in this direction. \n\nFor each of the reviewers, a personal response is posted in the forum. Also, a new revision of the paper is available following the reviewer remarks. For the reviewer convenience, additions/corrections are marked in a red color in the text to distinguish new text from old one. Main modifications:\n \n· Intro: a few clarifications about our claims and fixing of citations following R2 comments.\n· Intro: better discussion on adversarial training and Parseval networks.\n· Related work: discussion on stability following comment in the forum + R1.\n· Better explanation of partitions sets, experimental considerations of them – Sections 4, and 5. (R3)",
"Thank you for your comment. Stability and robustness are two examples of desired properties of a learning algorithm that can also guarantee generalization under some conditions.\n\nA stable algorithm produces an output hypothesis that is stable to small changes in the data set, i.e., if a training example is replaced with another example from the same distribution, the training error will not change much. Elisseeff et al. (JMLR, 2005), indeed showed that algorithm that fulfills this requirement generalize well. \n\nRobustness, on the other hand, is a different property of learning algorithms. A Robust algorithm produces a hypothesis that is robust to bounded perturbations of the entire data set, as we explain in more detail in our paper. Robustness and Stability are different properties, to see that observe that robustness is a global property while stability is local, and that robustness concerns properties of a single hypothesis, while stability concerns two (one for the original data set and one for the modified one). \n\nWe emphasize that a learning algorithm may be both stable and robust, e.g., SVM, \"Robustness and Regularization of Support Vector Machines,\" Huan Xu, Constantine Caramanis, Shie Mannor 2009). However, there also exist algorithms that are robust but not stable, e.g., Lasso Regression, \"Robust Regression and Lasso,\" Huan Xu, Constantine Caramanis, Shie Mannor 2008). \n\nWe will further expand the discussion on these issues in a future revision of the paper. ",
"Ensemble robustness is conceptually very similar to randomized algorithm stability. The latter concept has been thoroughly analyzed by Elisseeff et al. (JMLR, 2005), who derived a number of generalization bounds for randomized algorithms based on different notions of stability (uniform, hypothesis, pointwise hypothesis). Given the similarity between robustness and stability, it seems to me that the submitted paper should discuss the connections to Elisseeff et al.'s work (which is not cited) and compare the bounds in both."
] | [
4,
4,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1YUtYx0-",
"iclr_2018_r1YUtYx0-",
"iclr_2018_r1YUtYx0-",
"ry3UJraMG",
"Hk6PtE2XM",
"H1T7gLalM",
"SkLRjndlG",
"SkLRjndlG",
"rJhcwfLgf",
"r1BvYvAeG",
"iclr_2018_r1YUtYx0-",
"SJx-TCsxz",
"iclr_2018_r1YUtYx0-"
] |
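As a rough illustration of the empirical quantity discussed in the record that closes above, the sketch below computes an ensemble-robustness-style statistic: for each hypothesis produced by an independent run of a stochastic training algorithm, it averages over training examples the loss deviation induced by an adversarial perturbation confined to a small ball around each example (the approximation of the per-partition maximum described in the authors' responses), then averages over the runs. All names are placeholders and the callables are assumptions; this is not the authors' evaluation code.

```python
import numpy as np
from typing import Callable, Sequence

def empirical_ensemble_robustness(
    hypotheses: Sequence[object],          # models from independent runs of the stochastic algorithm
    X: np.ndarray,
    y: np.ndarray,
    per_example_loss: Callable[[object, np.ndarray, np.ndarray], np.ndarray],
    adversarial_perturb: Callable[[object, np.ndarray, np.ndarray, float], np.ndarray],
    radius: float,
) -> float:
    """Average over runs of the mean (over training examples) loss deviation
    caused by an adversarial perturbation confined to a ball of the given radius,
    which stands in for the per-partition maximum in the robustness definition."""
    deviations = []
    for h in hypotheses:
        clean = per_example_loss(h, X, y)                       # shape (n,)
        X_adv = adversarial_perturb(h, X, y, radius)            # one perturbed point per training example
        attacked = per_example_loss(h, X_adv, y)
        deviations.append(np.mean(np.abs(attacked - clean)))    # average sensitivity over examples
    return float(np.mean(deviations))                           # expectation over algorithmic randomness
```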
iclr_2018_SyKoKWbC- | Distributional Adversarial Networks | In most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over recent state-of-the-art. | workshop-papers | All the reviewers and I have concerns about the potentially incremental nature of this work. While I do understand that the proposed method goes beyond crafting minibatch losses, and instead parametrizes things via a neural network, ultimately it is very similar to simply combining MMD and minibatch discrimination and "learning the kernel". The theoretical justifications are interesting, but the results are somewhat underwhelming (as an example, DANNs are by no means the state of the art on MNIST->MNIST_M, and this task is rather contrived; the books dataset is not even clearly used by anyone else).
The interesting analysis may make it a good candidate for the workshop track, so I am recommending that. | train | [
"Hy1iAsugf",
"S1raRaKlG",
"H1u8M0Fgf",
"S13Nk4a7M",
"SJN1xV67z",
"HkbykEpmf",
"rkYvA7TQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper proposes to replace single-sample discriminators in adversarial training with discriminators that explicitly operate on distributions of examples, so as to incentivize the generator to cover the full distribution of the training data and not collapse to isolated modes. \n\nThe idea of avoiding mode collapse by providing multiple samples to the discriminator is not new; the paper acknowledges prior work on minibatch discrimination but does not really describe the differences with previous work in any technical detail. Not being highly familiar with this literature, my reading is that the scheme in this paper grounds out into a somewhat different architecture than previous minibatch discriminators, with a nice interpretation in terms of a sample-based approximation to a neural mean embedding. However the paper does not provide any empirical evidence that their approach actually works better than previous approaches to minibatch discrimination. By comparing only to one-sample discriminators it leaves open the (a priori quite plausible) possibility that minibatch discrimination is generally a good idea but that other architectures might work equally well or better, i.e., the experiments do not demonstrate that the MMD machinery that forms the core of the paper has any real purchase.\n\nThe paper also proposes a two-sample objective DAN-2S, in which the discriminator is asked to classify two sets of samples as coming from the same or different distributions. This is an interesting approach, although empirically it does not appear to have any advantage over the simpler DAN-S -- do the authors agree with this interpretation? If so it is still a worthwhile negative result, but the paper should make this conclusion explicit. Alternately if there are cases when the two-sample test is actually recommended, that should be made explicit as well. \n\nOverall this paper seems borderline -- a nice theoretical story, grounding out into a simple architecture that does seem to work in practice (the domain adaptation results are promising), but with somewhat sloppy writing and experimentation that doesn't clearly demonstrate the value of the proposed approach. I hope the authors continue to improve the paper by comparing to other minibatch discrimination techniques. It would also be helpful to see value on a real-world task where mode collapse is explicitly seen as a problem (and/or to provide some intuition for why this would be the case in the Amazon reviews dataset). \n\nSpecific comments:\n- Eqn (2.2) is described as representing the limit of a converged discriminator, but it looks like this is just the general gradient of the objective --- where does D* enter into the picture?\n- Fig 1: the label R is never explained; why not just use P_x?\n- Section 5.1 \"we use the pure distributional objective for DAN (i.e., setting λ != 0 in (3.5))\" should this be λ = 0? \n- \"Results\" in the domain adaptation experiments are not clearly explained -- what do the reported numbers represent? (presumably accuracy(stddev) but the figure caption should say this). It is also silly to report accuracy to 2 decimal places when they are clearly not significant at that level.",
"I really enjoyed reading this paper. Very well-grounded on theory of two-sample tests and MMD and how these ideas can be beneficially incorporated into the GAN framework. It's very useful purely because of its theoretical support of minibatch discrimination, which always seemed like a bit of a hack. So, excellent work RE quality and clarity. One confusion I had was regarding the details of the how the neural embedding feature embedding/kernel \\phi(x) is trained in practice and how it affects performance -- must be extremely significant but certainly under-explored. \n\nI think it's fair to say that the whole paper is approximately minibatch-discrimination + MMD. While this is a very useful and a much more principled combination, supported by good experimental results, I'm not 100% sure if it is original enough. \n\nI agree with the authors that discriminator \"overpowering\" of generators is a significant issue and perhaps a little more attention ought to have been given for the generators being more effective w.r.t. 2S tests and distributional discrimination, as opposed to the regularization-based \"hack\". \n\nI would've also liked to have seen more results, e.g. CIFAR10 / SVHN. One of the best ways to evaluate GAN algorithms is not domain adaptation but semi-supervised learning, and this is completely lacking in this paper. \n\nOverall I would like to see this paper accepted, especially if some of the above issues are improved. ",
"This paper looks at the problem of mode collapsing when training GANs, and proposes a solution that uses discriminators to compare distributions of samples as opposed to single observations. Using ideas based on the Maximum Mean Discrepancy (MMD) to compare distributions of mini-batches, the authors generalize the idea of mini batch discrimination. Their tweaked architecture achieves nice results on extensive experiments, both simulated and on real data sets. \n\nThe key insight is that the training procedure usually used to train GANs does not allow the discriminator to share information across the samples of the mini batch. That is, the authors write out the gradient of the objective and show that the gradient with respect to each observation in the batch is multiplied by a scaling factor based on the discriminator, which is then summed across the batch. As a result, this scaling factor can vanish in certain regions and cause mode collapsing. Instead, the authors look at two different discrepancy metrics, both based on what they call neural mean embeddings, which is based on MMD. After describing them, they show that these discriminators allow the gradient weights to be shared across observations when computing gradients, thus solving the collapsing mode problem. The experiments verify this. \n\nAs the authors mentioned, the main idea is a modification of mini batch discrimination, which was also proposed to combat mode collapsing. Thus, the only novel contributions come in Section 3.1, where the authors introduce the mini-batch discriminators. Nevertheless, based on the empirical results and the coherence of the paper (along with the intuitive gradient information sharing explanation), I think it should be accepted. \n\nSome minor points: \n-How sensitive is the method to various neural network architectures, initializations, learning rates, etc? I think it's important to discuss this since it's one of the main challenges of training GANs in general.\n-Have you tried experiments with respect to the size of the mini batch? E.g. at what mini batch size do we see noticeable improvements over other training procedures?\n-Have you tried decreasing lambda as the iteration increases? This might be interesting to try since it was suggested that distributional adversaries can overpower G particularly early in training.\n-Figure 1 is interesting but it could use better labelling (words instead of letters)\n\nOverall:\nPros: Well-written, good empirical results, well-motivated and intuitively explained\nCons: Not particularly novel, a modification of an existing idea, more sensitivity results would be nice",
"Comparisons to methods based on minibatch discrimination:\n For a discussion on conceptual differences, please refer to the general comments. On the experimental side, we have already included comparisons against methods based on minibatch discrimination. RegGAN, one of the methods that we have compared to (Fig. 3), is already operating on a sample by including a “pull-away” loss within minibatches. The comparisons show that DAN’s are superior in performance. In the revised version, we have included a new comparison to GMMN [1], a method based on MMD. The detailed experimental setting is described in Appendix C and E. The results show that our DAN framework is very competitive (on SVHN), and very often better (on MNIST/Fashion-MNIST) than GMMN. \n\nComparisons between DAN-S and DAN-2S: \n Our experimental results show that there is no clear-cut winner between them. While in cases of MNIST, Fashion-MNIST and SVHN, DAN-S obtains slightly better mean results, DAN-2S is much more stable and has less variance in performances. There are also some other important differences worth pointing out:\n 1) DAN-2S is more stable over different lambda parameters (Fig. 10 vs Fig. 11). We observed it to be less prone to mode collapse in simple settings.\n 2) The two methods have distinctly different training dynamics (e.g. Fig. 6 in Appendix A).\n 3) The (new) results on batch-size show that DAN-2S is more robust with larger batch-sizes, while DAN-S is more robust with smaller batch-sizes.\n 4) DAN-S is less computationally expensive.\nOwing to these differences, we decided to keep both methods in the paper, since they have properties which might be appealing in different applications. \n\nEq 2.2: \n We need to use the optimal discriminator D* to characterize explicitly the form of the weighting term (namely, the denominator). If D is close to D*, then we know low D implies low P(x). That's the only part where we use it.\n\nAll the additional comments and suggestions have been included in the revised version. \n\n[1] Generative moment matching networks. Li, Yujia and Swersky, Kevin and Zemel, Rich. ICML-15",
"We thank the anonymous reviewers for their thorough feedback. We apologize for the delay in this response; properly addressing their concerns required multiple additional experiments. We believe, however, that the results from these experiments, combined with the responses below, address all of their suggestions and concerns. \n\nFirst, we address two general points raised by the reviewers:\n\nDifferentiation from minibatch discrimination / MMD: \n We propose a new distributional adversarial training framework that is driven by theoretical insights of two-sample tests; this setup makes our framework conceptually very different from minibatch discrimination. Moreover, minibatch discrimination requires hand-crafting of minibatch losses, which may be non-trivial and data-dependent. On the other hand, in the DAN framework, the form of the loss between sets of examples (i.e., samples) is parametrized via a neural network, and is thus adaptive to datasets.\n We concede that MMD and DAN bear similar intuitions, but their implementation is very different. MMD-based methods require hand-crafted kernels with hyperparameters that have to be pre-defined via cross-validation. Our framework goes beyond MMD by parameterizing the kernel with a neural network and learning the kernel in a data-driven way, thus is more expressive and adaptive. Beyond this connection, our framework goes further and generalizes the mean square loss in MMD with another neural network, greatly enriching the adaptivity of the model.\n Empirically, our distributional adversarial framework leads to more stable training and significantly better mode coverage than common single-observation methods. Moreover, it is competitive with --and very often better than-- methods based on minibatch discrimination and MMD (See Fig. 3, 12, 13, 15, 16 and 17). \n\nOther network architecture/loss function/training dynamics: \n Some reviewers suggested changing the network architecture or training dynamics to further boost the performance of our models. While we agree that these directions are promising, our goal for this work is to motivate theoretically the “distributional approach” of our method and to empirically demonstrate its usefulness. We leave further improvement of network architecture and training dynamics as an interesting avenue of future work.\n\nIn the revised version, we have:\n 1) Added experiments on additional datasets: SVHN, CIFAR10 (Appendices E and F).\n 2) Included Generative Moment Matching Networks [1] into the comparison (Section 5.2).\n 3) Added experiments showing performance as a function of batch-size for both DAN-S and DAN-2S (Appendix D).\n 4) Modified various additional minor issues and typos pointed out by the reviewers.\n",
"Training of NME: \n The Neural Mean Embedding (NME) module is trained simultaneously with all the rest of the network. The distinction between the NME and the rest of the adversary is conceptual, made here to emphasize the fact that NME is shared across elements in the sample. In practice, the model is trained end-to-end as one network.\n\nRelation to minibatch discrimination / MMD: \n Although the implementation of our approach might resemble a combination of minibatch discrimination and MMD, this analogy does not extend to the motivation, theoretical grounding, adaptability nor the performance of our approach. The methods proposed here do not arise as a post-hoc decision to combine these two aspects, but is rather driven by theoretical insights from two-sample tests and analysis of mode-collapse shown in Sec. 2 & 3. This leads to a general, unifying framework that is grounded on an extensive body of literature on two-sample tests. Further, we go beyond both minibatch discrimination and MMD by learning the minibatch losses and kernels in a data-driven way, while in both minibatch discrimination and MMD one must hand-craft all losses / kernels. \n\nChoice of loss function:\n Our goal for this work is to motivate theoretically the method and empirically demonstrate its usefulness. While we concede that other approaches to mitigate the “overpowering effect” are very interesting avenues, we employ the regularization-based approach due to its simplicity. \n\nExtra experiments:\n Please refer to the updated manuscript, Appendix E and F, for results on SVHN and CIFAR10, which show that DAN’s are among the top-performing models in terms of mode coverage and generation quality in these additional tasks too.\n",
"Our contributions: \n Besides the introduction of the adversarial framework in 3.1, other conceptual contributions are: the way to set up the adversarial objective (Sec 3.1) and the analysis of mode collapse in Sec 2 & 3. Differentiation of our work from minibatch discrimination and MMD is mentioned in our general comments.\n\nSensitivity to learning rates and architectures:\n Experiments on sensitivity to learning rates have been included in Appendix B. We did not devote more space to other sensitivity results since our main goal was to control for these sources of variability (i.e. fix them across models) and focus on the phenomenon which our model is attempting to solve: mode collapse. Delving into loss sensitivity issues, though interesting, falls outside the scope of this paper. \n\nSensitivity to batch-size:\n Following the reviewer’s suggestion about sensitivity results, we included additional experiments on the impact of batch-size on performance (Appendix D). The results on varying batch-sizes show that DAN-2S is more robust with larger batch-sizes, while DAN-S is more robust with smaller batch-sizes.\n\nFig. 1:\n We have regenerated it with more explicit labels. \n"
] | [
6,
6,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyKoKWbC-",
"iclr_2018_SyKoKWbC-",
"iclr_2018_SyKoKWbC-",
"Hy1iAsugf",
"iclr_2018_SyKoKWbC-",
"S1raRaKlG",
"H1u8M0Fgf"
] |
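The reviews and responses in the record above repeatedly contrast a hand-crafted-kernel MMD statistic with the paper's learned "neural mean embedding". The short Python sketch below is not the paper's model; it only illustrates, under assumed toy sizes, bandwidths, and a fixed random feature map standing in for the learned phi, what a sample-level (minibatch) statistic looks like.

```python
# Illustrative only: a classical RBF-kernel MMD^2 estimate with a hand-picked
# bandwidth, versus a "neural mean embedding"-style statistic where a feature map
# phi (here a fixed random ReLU layer, standing in for a learned network) is
# averaged over a minibatch before comparison. All names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mmd2_rbf(X, Y, bandwidth=1.0):
    """Biased MMD^2 estimate between samples X and Y with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def neural_mean_embedding(X, W, b):
    """Mean of a ReLU feature map over the whole sample."""
    return np.maximum(X @ W + b, 0.0).mean(axis=0)

d, m = 2, 256
W = rng.normal(size=(d, 64)); b = rng.normal(size=64)

X = rng.normal(size=(m, d))              # "real" sample
Y_close = rng.normal(size=(m, d)) + 0.1  # slightly shifted sample
Y_far   = rng.normal(size=(m, d)) + 2.0  # clearly different sample

for name, Y in [("close", Y_close), ("far", Y_far)]:
    nme_gap = np.linalg.norm(neural_mean_embedding(X, W, b)
                             - neural_mean_embedding(Y, W, b))
    print(name, "MMD^2:", round(mmd2_rbf(X, Y), 4), "NME gap:", round(nme_gap, 4))
```

The point of both statistics is that they are functions of a whole minibatch, so gradients flowing back through them couple all observations in the sample, which is the property the distributional-adversary discussion above turns on.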
iclr_2018_BkM3ibZRW | Adversarially Regularized Autoencoders | While autoencoders are a key technique in representation learning for continuous structures, such as images or wave forms, developing general-purpose autoencoders for discrete structures, such as text sequence or discretized images, has proven to be more challenging. In particular, discrete inputs make it more difficult to learn a smooth encoder that preserves the complex local relationships in the input space. In this work, we propose an adversarially regularized autoencoder (ARAE) with the goal of learning more robust discrete-space representations. ARAE jointly trains both a rich discrete-space encoder, such as an RNN, and a simpler continuous space generator function, while using generative adversarial network (GAN) training to constrain the distributions to be similar. This method yields a smoother contracted code space that maps similar inputs to nearby codes, and also an implicit latent variable GAN model for generation. Experiments on text and discretized images demonstrate that the GAN model produces clean interpolations and captures the multimodality of the original space, and that the autoencoder produces improvements in semi-supervised learning as well as state-of-the-art results in unaligned text style transfer task using only a shared continuous-space representation. | workshop-papers | In general, the reviewers and myself find this work of some interest, though potentially somewhat incremental in terms of technical novelty compared to the work for Makhzani et al. Another bothersome aspect is the question of evaluation and understanding how well the model actually does; I am not convinced that the interpolation experiments are actually giving us a lot of insights. One interesting ablation experiment (suggested privately by one of the reviewers) would be to try AAE with Wasserstein and without a learned generator -- this would disambiguate which aspects of the proposed method bring most of the benefit. As it stands, the submission is just shy of the acceptance bar, but due to its interesting results in the natural language domain, I do recommend it being presented at the workshop track. | val | [
"rkhahIeBz",
"By5yPxlBz",
"HyqcNqaEG",
"Bk0IN5TEz",
"BJntMDTEf",
"S1Q6dxjlz",
"rkzhgMpgM",
"rkevbtAgf",
"B1C3Tr4ZM",
"rkS56HVWG",
"SJoSaBN-G"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Indeed, the discussion around parametrized prior versus the classical prior is very interesting. In this work we only explore this in the universe of autoencoders, i.e., ARAE/AAE. The study of a generic form of parametrized prior may require research on numerous other machine learning framework/schemes, and that in our opinion is beyond the scope of this paper.\n1. To address the similarity and dissimilarity between ARAE and AAE: a possible reason for why learning a prior works much better empirically could be the same reason why autoregressive flows perform much better for VAEs. Indeed, Chen et al. 2017 observe improvements by transforming the spherical Gaussian to a more complex prior through parameterized neural networks. \n2. Our informal justification is that RNNLM is a *very* good model for scoring text (e.g. it is frequently combined with speech recognition/machine translation systems). So unlike the case with Parzen windows where a good Parzen window score does not necessarily imply good generations (Theis et al. 2016), we think that it will be very hard to game the Reverse PPL metric. Of course, a formal justification would be nice (e.g. if (i) the score between p_rnn(x) from an RNNLM and p_star(x) from the true distribution is within some bound epsilon1 for all x and (ii) reverse PPL from real data vs generated data is smaller than another bound epsilon2, then KL(p_theta(x) || p_star(x)) is below some bound delta that is a function of epsilon1 and epsilon2), but perhaps beyond the scope of this paper.\n\nLastly did you mean you were fine with (9, conf 3) and (8, conf 4)? Because the interpolation of that gives (8.5, conf 3.5) in the latent space :)\n",
"For the record, I completely disagree with AnonReviewer1 and completely do agree with the authors' response: the page limit is soft and this submission did not exceed it in any significant way. \n\nI found AnonReviewer4 raised some interesting points and questions. \nI believe that they are generally addressable, to different extents. \nExposition issues aside, there are two key issues that would give me inclination to change my score, if at all:\n1) the question of similarity to AAE is perhaps the most important one in terms of revising my score. I do believe the authors' response where they say that they tried it on the same task and it didn't work. I would suggest mentioning this in the paper itself. e.g. Is it something about the language modelling task where allowing a learnable prior becomes a significant advantage? Is there more to be said about this?\n2) the other criticism that I find particularly interesting is requesting justification of the reverse-PPL. I still find this metric very interesting in this context and I don’t *require* that justification but I think including it will only strengthen the paper (and the comparison with Parzen windows etc). Given the general lousiness of evaluation methods for generative models, this is an interesting discussion. And again, as with point #1 above, there are differences between what \"works\" for generating images and generating language, and identifying those differences is worthwhile.\n\nI am still OK with my score of (Score 9, Conf 3), although (Score 8, Conf 3) would work too. My latent score (i.e. in a continuous space) might be around (8.5, 3.5).\n",
"- In section 2, \"This [pre-training or co-training with maximum likelihood]\n precludes there being a latent encoding of the sentence.\" It is not at all\n clear to me why this would be the case.\n\nResponse: When pre-training/co-training with a language model, there is no latent vector z, as the language model objective is given by log p(x) = \\sim_{t=1}^T logp(x_t | x_{<t}) (i.e. p(x_t) just depends on the previous tokens x_{<t}, not a latent vector).\n\n- \"One benefit of the ARAE framework is that it compresses the input to a\n single code vector.\" This is true of any autoencoder.\n\nResponse: Right. We wanted to emphasize the fact that having a fixed-dimensional vector representation of a sentence allows for simpler manipulations in the latent space (compared to, for example, sequential VAEs (Chung et al. 2015) that have a latent vector for each time step). We will change the wording to get this point across better.\n\n- It would be worth explaining, in a sentence, the approach in Shen et al for\n those who are not familiar with it, seeing as it is used as a baseline.\n\nResponse: Good point! We will describe Shen et al. in more detail.\n\n- We are told that the encoder's output is l2-normalized but the generator's\n is not, instead output units of the generator are squashed with the tanh\n activation. The motivation for this choice would be helpful. Shortly\n thereafter we are told that the generator quickly learns to produce norm 1\n outputs as evidence that it is matching the encoder's distribution, but this\n is something that could have just as easily have been built-in, and is a\n trivial sort of \"distribution matching\"\n\nResponse: Mainly based on empirical experiments: l2-normalized output from the encoder stabilizes training; squashing the output from the generator by tanh was adopted from DCGAN. We will try to discuss more on this via experiments in the next revision.\n\n- In general, tables that report averages would do well to report error bars as\n well. In general some more nuanced statistical analysis of these results\n would be worthwhile, especially where they concern human ratings.\n\nResponse: We will add error bars for the various measures where applicable (e.g. Human ratings). Some metrics are inherently at the corpus level and thus error bar estimation is not so straightforward.\n\n- The dataset fractions chosen for the semi-supervised experience seem\n completely arbitrary. Is this protocol derived from some other source?\n Putting these in a table along with the results would improve readability. \n\nResponse: This was arbitrary. We will make it clearer in the table/text.\n\n- Linear interpolation in latent space may not be the best choice here\n seeing as e.g. for a Gaussian code the region near the origin has rather low\n probability. Spherical interpolation as recommended by White (2016) may\n improve qualitative results.\n\nResponse: Yes, spherical interpolation is an interesting alternative and we can certainly try it out. Given the relatively low dimension z-space however, people have found simple linear interpolation to work well enough in images.\n\n- For the interpolation results you say \"we output the argmax\", what is meant?\n Is beam search performed in the case of sequences?\n\nResponse: We perform greedy decoding. We will make this clearer.\n\n- Finally, a minor point: I will challenge the authors to justify their claim\n that the learned generative model is \"useful\" (their word). 
Interpolating\n between two sentences sampled from the prior is a neat parlour trick, but the\n model as-is has little utility. Even some speculation on how this aspect\n could be applied would be appreciated (admittedly, many GAN papers could use\n some reflection of this sort).\n\nResponse: We completely agree! Usefulness of GANs (whether on images/text) is still an open issue and could definitely use more reflection. But recent results on unaligned transfer (DiscoGAN/CycleGAN/Text Style transfer/Unsupervised NMT), including the results presented in this work, give a compelling case for the utility of latent representations learned via adversarial training. We will make sure to temper the language to reflect the preliminary nature of work in this area though.\n",
"We thank the reviewer for a very thoughtful review. Before responding to more specific points, we want to point out that unlike the case with images where established architectures/baselines/metrics exist (e.g. DCGAN), GANs for text is still very much an open problem, and there is no consensus on which approach works best (policy gradients/Gumbel-softmax, etc). Given the current exploratory landscape of text GANs, we believe that our work represents a simple but interesting alternative to other approaches, backed up by quantitative and qualitative experiments. We therefore ask the reviewer to kindly reconsider the work in the context of existing work on GANs for text.\n\nSpecific points:\n\n- The difference from the original AAE is rather small and straightforward, making the\nnovelty mainly in the choice of task, focusing on discrete vectors and sequences.\n\nResponse: Indeed, from a methodological standpoint our method is similar to the AAE. However, this small difference (i.e. learning a prior through a parameterized generator) was crucial in making the model work. When we tried training AAEs for this dataset and we observed severe mode-collapse (reverse PPL: ~900).\n\n- The exposition leaves ample room for improvement. For one thing, there is the\nirksome and repeated use of \"discrete structure\" when discrete *sequences* are\nconsidered almost exclusively (with the exception of discretized MNIST digits).\nThe paper is also light on discussion of related work other than Makhzani et al\n-- the wealth of literature on combining autoencoders (or autoencoder-like\nstructures such as ALI/BiGAN) and GANs merits at least passing mention.\n\nResponse: Thank you for pointing this out. We will change the wording for more clarity. We will also add more discussion regarding ALI/BiGAN. We do want to point out however that while these works are similar in that they work with (x,z) space, they typically perform discrimination/generation in the joint (x,z) space, and therefore would face difficulties when applied directly to discrete spaces.\n\n- The empirical work is somewhat compelling, though I am not an expert in this\ntask domain. The annealed importance sampling technique of Wu et al (2017) for\nestimating bounds on a generator's log likelihood could be easily applied in\nthis setting and would give (for example, on binarized MNIST) a quantitative\nmeasurement of the degree of overfitting, and this would have been preferable\nthan inventing new heuristic measures. The \"Reverse PPL\" metric requires more\njustification, and it looks an awful lot like the long-since-discredited Parzen\nwindow density estimation technique used in the original GAN paper.\n\nResponse: Our understanding is that using log-likelihood estimates from Parzen windows is bad because Parzen windows are (very) bad models of images. In contrast, an RNN LM has been well-established to be quite good (in fact, state-of-the-art) at *scoring* text. We thus believe that Reverse PPL is a fair metric for quantitatively assessing generative models of text, despite its ostensible similarity/motivation to Parzen windows. The AIS technique from Wu et al. (2017) would not be applicable in our case because we need to be able to calculate p_\\theta(x_test) (Wu et al. (2017) actually use Parzen windows combined with AIS to give log-likelihood estimates of GAN-based models).\n\n- It's not clear why the optimization is done in 3 separate steps. 
Aside\nfrom the WGAN critic needing to be optimized for more steps, couldn't the\nremaining components be trained jointly, with a weighted sum of terms for the\nencoder?\n\nResponse: We optimized the objectives separately as this is the standard setup in GAN training. The remaining objectives could indeed be trained jointly, but we did not try this.\n",
"I was asked to contribute this review rather late in the process, and in order\nto remain unbiased I avoided reading other reviews. I apologize if some of\nthese comments have already been addressed in replies to other reviewers.\n\nThis paper proposes a regularization strategy for autoencoders that is very\nsimilar to the adversarial autoencoder of Makhzani et al. The main difference\nappears to be that rather than using the classic GAN loss to shape the\naggregate posterior of an autoencoder to match a chosen, fixed distribution,\nthey instead employ a Wasserstein GAN loss (and associated weight magnitude\nconstraint, presumably enforced with projected gradient descent) on a system\nwhere the matched distribution is instead learned via a parameterized sampler\n(\"generator\" in the GAN lingo). Gradient steps that optimize the encoder,\ndecoder and generator are interleaved. The authors apply an extension of this\nmethod to topic and sentiment transfer and show moderately good latent space\ninterpolations between generated sentences.\n\nThe difference from the original AAE is rather small and straightforward, making the\nnovelty mainly in the choice of task, focusing on discrete vectors and sequences.\n\nThe exposition leaves ample room for improvement. For one thing, there is the\nirksome and repeated use of \"discrete structure\" when discrete *sequences* are\nconsidered almost exclusively (with the exception of discretized MNIST digits).\nThe paper is also light on discussion of related work other than Makhzani et al\n-- the wealth of literature on combining autoencoders (or autoencoder-like\nstructures such as ALI/BiGAN) and GANs merits at least passing mention.\n\nThe empirical work is somewhat compelling, though I am not an expert in this\ntask domain. The annealed importance sampling technique of Wu et al (2017) for\nestimating bounds on a generator's log likelihood could be easily applied in\nthis setting and would give (for example, on binarized MNIST) a quantitative\nmeasurement of the degree of overfitting, and this would have been preferable\nthan inventing new heuristic measures. The \"Reverse PPL\" metric requires more\njustification, and it looks an awful lot like the long-since-discredited Parzen\nwindow density estimation technique used in the original GAN paper.\n\nHigh-level comments:\n\n- It's not clear why the optimization is done in 3 separate steps. Aside\nfrom the WGAN critic needing to be optimized for more steps, couldn't the\nremaining components be trained jointly, with a weighted sum of terms for the\nencoder?\n- In section 2, \"This [pre-training or co-training with maximum likelihood]\n precludes there being a latent encoding of the sentence.\" It is not at all\n clear to me why this would be the case.\n- \"One benefit of the ARAE framework is that it compresses the input to a\n single code vector.\" This is true of any autoencoder.\n- It would be worth explaining, in a sentence, the approach in Shen et al for\n those who are not familiar with it, seeing as it is used as a baseline.\n- We are told that the encoder's output is l2-normalized but the generator's\n is not, instead output units of the generator are squashed with the tanh\n activation. The motivation for this choice would be helpful. 
Shortly\n thereafter we are told that the generator quickly learns to produce norm 1\n outputs as evidence that it is matching the encoder's distribution, but this\n is something that could have just as easily have been built-in, and is a\n trivial sort of \"distribution matching\"\n- In general, tables that report averages would do well to report error bars as\n well. In general some more nuanced statistical analysis of these results\n would be worthwhile, especially where they concern human ratings.\n- The dataaset fractions chosen for the semi-supervised experience seem\n completely arbitrary. Is this protocol derived from some other source?\n Putting these in a table along with the results would improve readability. \n- Linear interpolation in latent space may not be the best choice here\n seeing as e.g. for a Gaussian code the region near the origin has rather low\n probability. Spherical interpolation as recommended by White (2016) may\n improve qualitative results.\n- For the interpolation results you say \"we output the argmax\", what is meant?\n Is beam search performed in the case of sequences?\n- Finally, a minor point: I will challenge the authors to justify their claim\n that the learned generative model is \"useful\" (their word). Interpolating\n between two sentences sampled from the prior is a neat parlour trick, but the\n model as-is has little utility. Even some speculation on how this aspect\n could be applied would be appreciated (admittedly, many GAN papers could use\n some reflection of this sort).",
"the paper presents a way to encode discrete distributions which is a challenging problem. they propose to use a latent variable gan with one continuous encoding and one discrete encoding. \n\ntwo questions linger around re practices:\n1. gan is known to struggle with discriminating distributions with different supports. the problem also persists here as the gan is discriminating between a continuous and a discrete distribution. it'll interesting to see how the proposed approach gets around this issue.\n\n2. the second question is related. it is unclear how the optimal distribution would look like with the latent variable gan. ideally, the discrete encoding be simply a discrete approximation of the continuous encoding. but optimization with two latent distributions and one discriminator can be hard. what we get in practice is pretty unclear. also how this could outperform classical discrete autoencoders is unclear. gan is an interesting idea to apply to solve many problems; it'll be helpful to get the intuition of which properties of gan solves the problem in this particular application to discrete autoencoders.",
"This paper introduces a model for learning robust discrete-space representations with autoencoders. The proposed method jointly trains an RNN encoder with a GAN to produce latent representations which are designed to better encode similarity in the discrete input space. A variety of experiments are conducted that demonstrate the efficacy of the proposed methodology.\n\nGenerally speaking, I like the overall idea, which, as far as I know, is a novel approach for dealing with discrete inputs. The generated textual samples look good and offer strong support for the model. However, I would have preferred to see more quantitative evaluation and less qualitative evaluation, but I understand that doing so is challenging in this domain.\n\nI will refrain from adding additional detailed commentary in this review because I am unable to judge this paper fairly with respect to other submissions owing to its large deviation from the suggested length limits. The call for papers states that \"we strongly recommend keeping the paper at 8 pages\", yet the current submission extends well into its 10th page. In addition (and more importantly), the margins appear to have been reduced relative to the standard latex template. Altogether, it seems like this paper contains a significant amount of additional text beyond what other submissions enjoyed. I see no strong reason why this particular paper needed the extra space. In fact, there are obvious places where the exposition is excessively verbose, and there are clear opportunities to reduce the length of the submission. While I fully understand that the length suggestions are not requirements, in my opinion this paper did not make an adequate effort to abide by these suggestions. Moreover, as a result, I believe this extra length has earned this paper an unfair advantage relative to other submissions, which themselves may have removed important content in order to abide by the length suggestions. As such, I find it difficult or impossible to judge this paper fairly relative to other submissions. I regrettably cannot recommend this paper for acceptance owing to these concerns.\n\nThere are many good ideas and experiments in this paper and I would strongly encourage the authors to resubmit this work to a future conference, making sure to reorganize the paper to adhere to the relevant formatting guidelines.",
"The authors present a new variation of autoencoder, in which they jointly train (1) a discrete-space autoencoder to minimize reconstuction loss, and (2) a simpler continuous-space generator function to learn a distribution for the codes, and (3) a GAN formulation to constrain the distributions in the latent space to be similar.\n\nThe paper is very clearly written, very clearly presented, addresses an important issue, and the results are solid.\n\nMy primary suggestion is that I would like to know a lot more (even qualitatively, does not need to be extensively documented runs) about how sensitive the results were--- and in what ways were they sensitive--- to various hyperparameters. Currently, the authors mention in the conclusion that, as is known to often be the case with GANS, that the results were indeed sensitive. More info on this throughout the paper would be a valuable contribution. Clearly the authors were able to make it work, with good results. When does it not work? Any observations about how it breaks down?\n\nIt is interesting how strong the denoising effect is, as simply a byproduct of the adversarial regularization.\n\nSome of the results are quite entertaining indeed. I found the yelp transfer results particularly impressive.\n\n(The transfer from positive->negative on an ambiguous example was interesting: Original \"service is good but not quick\" -> \"service is good but not quick, but the service is horrible\", and \"service is good, and horrible, is the same and worst time ever\". I found it interesting to see what it does with the mixed signals of the word \"but\": on one hand, keeping it helps preserve the structure of the sentence, but on the other hand, keeping it makes it hard to flip the valence. I guess the most accurate opposite would have been \"The service is quick but not good\"... )\n\nI really like the reverse perplexity measure. Also, it was interesting how that was found to be high on AAE due to mode-collapse.\n\nBeyond that, I only have a list of very insignificant typos:\n-p3, end of S3, \"this term correspond to minimizing\"\n-p3, S4, \"to approximate Wasserstein-1 term\" --> \"to approximate the Wasserstein-1 term\"\n-Figure 1, caption \"which is similarly decoded to $\\mathbf{\\~x}$\" . I would say that it is \"similarly decoded to $\\mathbf{c}$\", since it is \\mathbf{c} that gets decoded. Unless the authors meant that it \"is similarly decoded to produce $\\mathbf{\\~x}$. Alternately, I would just say something like \"to produce a code vector, which lies in the same space as \\mathbf{c}\", since the decoding of the generated code vector does not seem to be particularly relevant right here.\n\n-p5, beginning of Section 6.1: \"to regularize the model produce\" --> \"to regularize the model to produce\" ?\n-p6, end of first par. \"is quite high for the ARAE than in the case\" --> quite a bit higher than? etc...\n-p7, near the bottom \"shown in figure 6\". --> table, not figure...\n-p8 \"ability mimic\" -->\"ability to mimic\"\n-p9 Fig 3 -- the caption is mismatched with the figure.. top/bottom/left/right/etc.... Something is confusing there...\n-p9 near the bottom \"The model learns a improved\" --> \"The model learns an improved\"\n-p14 left side, 4th cell up, \"Cross-AE\"-->\"ARAE\"\n\nThis is a very nice paper with a clear idea (regularize discrete autoencoder using a flexible rather than a fixed prior), that makes good sense and is very clearly presented. 
\n\nIn the words of one of the paper's own examples: \"It has a great atmosphere, with wonderful service.\" :)\nStill, I wouldn't mind knowing a little more about what happened in the kitchen...\n\n",
"Thanks for the review.\n\nWe feel there is perhaps a misunderstanding in the review, and apologize if it came from our end. This work uses continuous encodings only. Specifically, we use an encoder to convert discrete sequence (e.g. text) into continuous code space. Our GAN distribution is given by transforming a continuous random variable into the continuous code space. The adversarial training happens only within the continuous space. If you could revisit the model diagram Figure 1 in the paper, the P_r and P_g are both pointed to the continuous code space. Therefore, the critic is *not* built to discriminate between a continuous and discrete distribution. As such, ARAE does not aim for a discrete approximation of the continuous encoding.\n\nThe question regarding the optimal distribution is very interesting. The intuition of ARAE is that it learns a contraction of the discrete sample space into a continuous one through the encoder, and smoothly assigns similar codes c and c' to similar x and x'. We provide some intuition in the text of section 4, and corresponding experiments in section 6.1.\n\nWe also want to point out another two submissions to this ICLR that are similar to our paper: [1] and [2].\n\n[1] Wasserstein Auto-Encoders\n[2] Learning Priors for Adversarial Autoencoders",
"Thank you for the review.\n\nWhile the reviewer accurately notes that the work exceeds 8 pages (it is 9 pages), we believe that unlike other conferences, ICLR purposefully makes this a “suggestion” and not a hard requirement. This interpretation seems to be the clear consensus of the ICLR community. Specifically, in the top-20 reviewed papers listed here: https://chillee.github.io/OpenReviewExplorer/, what we found was: 11 out of 20 papers go over 8 pages; 9 out of 20 papers go over or stay at 9 pages; 3 papers go over 10 pages. Additionally the reviewer claims that we changed the underlying template, which we do not believe is true. Figure 3 and Table 4 extend to the margins, but this seems common as well in other papers. \n\nGiven these points we ask for the reviewer to please do a content based assessment of the paper. If they really find that length is an issue, we are happy to move some content to the appendix, but given the above statistics and purposeful relaxness of ICLR rules, it seems arbitrary to reject a paper strictly on formatting terms.\n",
"Thanks for the comments.\n\nAs has been observed for many GANs, training ARAE required hyperparameter tuning for learning rate, weight clipping factor and the architecture. We found that suboptimal hyperparameter led to mode collapse. We used reverse PPL as a proxy to test for this issue. In this work we used the original setting of WGAN with weight clipping. It is possible that using the updated version WGAN-GP with gradient penalty could help with stability. We will make these points more clear in our next version.\n\nThank you for the helpful points on the experiments and for pointing out the typos. We will correct them in the next version.\n"
] | [
-1,
-1,
-1,
-1,
5,
6,
3,
9,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
3,
4,
3,
-1,
-1,
-1
] | [
"By5yPxlBz",
"rkevbtAgf",
"Bk0IN5TEz",
"BJntMDTEf",
"iclr_2018_BkM3ibZRW",
"iclr_2018_BkM3ibZRW",
"iclr_2018_BkM3ibZRW",
"iclr_2018_BkM3ibZRW",
"S1Q6dxjlz",
"rkzhgMpgM",
"rkevbtAgf"
] |
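The ARAE record above describes three interleaved updates: an autoencoder reconstruction step, a WGAN critic trained in the code space with weight clipping, and an adversarial step that moves the encoder and the learned-prior generator toward each other. The sketch below illustrates only that interleaving, under stated assumptions: random toy "sequences", tiny stand-in modules, a crude non-autoregressive decoder, and arbitrary sizes and learning rates. It is not the authors' architecture or hyperparameters.

```python
# Minimal sketch of the three-step training loop described in the reviews above.
# Module names, sizes, optimizers, and the non-autoregressive decoder are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

V, T, H, Z = 20, 8, 32, 16           # vocab size, sequence length, code dim, noise dim
embed   = nn.Embedding(V, H)
enc_rnn = nn.GRU(H, H, batch_first=True)
dec_out = nn.Linear(H, V)             # crude stand-in decoder: code -> token logits
gen     = nn.Sequential(nn.Linear(Z, H), nn.ReLU(), nn.Linear(H, H), nn.Tanh())
critic  = nn.Sequential(nn.Linear(H, H), nn.ReLU(), nn.Linear(H, 1))

opt_ae  = torch.optim.Adam(list(embed.parameters()) + list(enc_rnn.parameters())
                           + list(dec_out.parameters()), lr=1e-3)
opt_cri = torch.optim.Adam(critic.parameters(), lr=1e-4)
opt_gan = torch.optim.Adam(list(gen.parameters()) + list(embed.parameters())
                           + list(enc_rnn.parameters()), lr=1e-4)

def encode(x):
    # l2-normalized code, as described in the review exchange
    h, _ = enc_rnn(embed(x))
    return F.normalize(h[:, -1], dim=1)

for step in range(200):
    x = torch.randint(0, V, (64, T))  # toy "discrete sequences"

    # (1) reconstruction step: encoder + decoder
    c = encode(x)
    logits = dec_out(c).unsqueeze(1).expand(-1, T, -1)
    loss_rec = F.cross_entropy(logits.reshape(-1, V), x.reshape(-1))
    opt_ae.zero_grad(); loss_rec.backward(); opt_ae.step()

    # (2) WGAN critic step in code space, with weight clipping
    z = torch.randn(64, Z)
    loss_cri = critic(gen(z).detach()).mean() - critic(encode(x).detach()).mean()
    opt_cri.zero_grad(); loss_cri.backward(); opt_cri.step()
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-0.01, 0.01)

    # (3) adversarial step: encoder and generator shrink the estimated distance
    z = torch.randn(64, Z)
    loss_adv = critic(encode(x)).mean() - critic(gen(z)).mean()
    opt_gan.zero_grad(); loss_adv.backward(); opt_gan.step()
```

Relative to an AAE with a fixed prior, the only structural change in this sketch is that fake codes come from the trainable gen network rather than from a fixed noise distribution, which is the distinction the authors emphasize in their responses.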
iclr_2018_Skk3Jm96W | Some Considerations on Learning to Explore via Meta-Reinforcement Learning | We consider the problem of exploration in meta reinforcement learning. Two new meta reinforcement learning algorithms are suggested: E-MAML and ERL2. Results are presented on a novel environment we call 'Krazy World' and a set of maze environments. We show E-MAML and ERL2 deliver better performance on tasks where exploration is important. | workshop-papers | Overall, the paper is missing a couple of ingredients that would put it over the bar for acceptance:
- I am mystified by statements such as "RL2 no longer gets the best final performance." changing from one revision to another; as a result, I have lower confidence in the results now.
- More importantly, the paper is missing comparisons of the proposed methods on *already existing* benchmarks. I agree with Reviewer 1 that a paper that only compares on benchmarks introduced in the very same submission is not as strong as it could be.
In general, the idea seems interesting and compelling enough (at least on the Krazy World & maze environments) that I can recommend inviting it to the workshop track. | train | [
"SJ0Q_6Hlf",
"SkE07mveG",
"ryse_yclM",
"H1NWlrYmM",
"BkQIdmTMG",
"S1Sbum6ff",
"BJCjPQpMM",
"rJfePXpMG",
"ByDI8XpMz",
"rJyE8QpzG",
"rkYuBQTfG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This is an interesting paper about correcting some of the myopic bias in meta RL. For two existing algorithms (MAML, RL2) it proposes a modification of the metaloss that encourages more exploration in the first (couple of) test episodes. The approach is a reasonable one, the proposed methods seem to work, the (toy) domains are appropriate, and the paper is well-rounded with background, motivation and a lot of auxiliary results.\n\nNevertheless, it could be substantially improved:\n\nSection 4 is of mixed rigor: some aspects are formally defined and clear, others are not defined at all, and in the current state many things are either incomplete or redundant. Please be more rigorous throughout, define all the terms you use (e.g. \\tau, R, \\bar{\\tau}, ...). Actually, the text never makes it clear how \\tau and \\ber{\\tau} relate to each other: make this connection in a formal way, please.\n\nIn your (Elman) formulation, “L” is not an RNN, but just a feed-forward mapping?\n\nEquation 3 is over-complicated: it is actually just a product of two integrals, because all the terms are separable. \n\nThe integral notation is not meaningful: you can’t sample something in the subscript the way you would in an expectation. Please make this rigorous.\n\nThe variability across seems extremely large, so it might be worth averaging over mores seeds for the learning curves, so that differences are more likely to be significant.\n\nFigure fontsizes are too small to read, and the figures in the appendix are impossible to read. Also, I’d recommend always plotting std instead of variance, so that the units or reward remain comparable.\n\nI understand that you built a rich, flexible domain. But please describe the variant you actually use, cleanly, without all the other variants. Or, alternatively, run experiments on multiple variants.",
"The paper proposes a trick of extending objective functions to drive exploration in meta-RL on top of two recent so-called meta-RL algorithms, Model-Agnostic Meta-Learning (MAML) and RL^2. \n\nPros:\n\n+ Quite simple but promising idea to augment exploration in MAML and RL^2 by taking initial sampling distribution into account. \n\n+ Excellent analysis of learning curves with variances across two different environments. Charts across different random seeds and hyperparameters indicate reproducibility. \n\n\nCons/Typos/Suggestions:\n\n- The brief introduction to meta-RL is missing lots of related work - see below.\n\n- Equation (3) and equations on the top of page 4: Mathematically, it looks better to swap \\mathrm{d}\\tau and \\mathrm{d}\\bar{\\tau}, to obtain a consistent ordering with the double integrals. \n\n- In page 4, last paragraph before Section 5, “However, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes”: I did not fully understand this - please explain better. \n\n- It is not very clear if the authors use REINFORCE or more advanced approaches like TRPO/PPO/DDPG to perform policy gradient updates?\n\n- I'd like to see more detailed hyperparameter settings. \n\n- Figures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14. \n\n- Figures 5, 6, 9: Wouldn't it be better to also use log-scale on the x-axis for consistent comparison with curves in Krazy World experiments ?\n\n3. It could be very interesting to benchmark also in Mujoco environments, such as modified Ant Maze. \n\nOverall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not metalearning. As mentioned above, experiments in more complex, continuous control tasks with Mujoco simulators might be illuminating. \n\nRelation to prior work:\n\np 2: Authors write: \"Recently, a flurry of new work in Deep Reinforcement Learning has provided the foundations for tackling RL problems that were previously thought intractable. This work includes: 1) Mnih et al. (2015; 2016), which allow for discrete control in complex environments directly from raw images. 2) Schulman et al. (2015); Mnih et al. (2016); Schulman et al. (2017); Lillicrap et al. (2015), which have allowed for high-dimensional continuous control in complex environments from raw state information.\"\n\nHere it should be mentioned that the first RL for high-dimensional continuous control in complex environments from raw state information was actually published in mid 2013:\n\n(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.\n\np2: Authors write: \"In practice, these methods are often not used due to difficulties with high-dimensional observations, difficulty in implementation on arbitrary domains, and lack of promising results.\"\n\nNot quite true - RL robots with high-dimensional video inputs and intrinsic motivation learned to explore in 2015: \n\n(2) Kompella, Stollenga, Luciw, Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. 
Artificial Intelligence, 2015.\n\np2: Authors write: \"Although this line of work does not explicitly deal with exploration in meta learning, it remains a large source of inspiration for this work.\"\n\np2: Authors write: \"To the best of our knowledge, there does not exist any literature addressing the topic of exploration in meta RL.\"\n\nBut there is such literature - see the following meta-RL work where exploration is the central issue:\n\n(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.\n\nThe RL method of this paper is the one from the original meta-RL work:\n\n(4) J. Schmidhuber. On learning how to learn learning strategies. Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, November 1994.\n\nWhich then led to:\n\n(5) J. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.\n\np2: \"In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they’ll often relate to better coverage over state visitation frequencies. Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016).\"\n\nThese are very recent refs - one should cite original work on hierarchical RL including:\n\nJ. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.\n\nM. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.\n\nM. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997\n\nReferences to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches? \n\n(6) J. Schmidhuber. Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006. \n\n(7) J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987. \n \nPapers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it's RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.\n\nRef (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what's now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc. 
\n\nRef (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.\n\nGeneral recommendation: Accept, provided the comments are taken into account, and the relation to previous work is established.\n",
"Summary: this paper proposes algorithmic extensions to two existing RL algorithms to improve exploration in meta-reinforcement learning. The new approach is compared to the baselines on which they are built on a new domain, and a grid-world.\n\nThis paper needs substantial revision. The first and primary issue is that authors claim their exists not prior work on \"exploration in Meta-RL\". This appears to be the case because the authors did not use the usual names for this: life-long learning, learning-to-learn, continual learning, multi-task learning, etc. If you use these terms you see that much of the work in these settings is about how to utilize and adapt exploration. Either given a \"free learning phases\", exploration based in internal drives (curiosity, intrinsic motivation). These are subfields with too much literature to list here. The paper under-review must survey such literature and discuss why these new approaches are a unique contribution.\n\nThe empirical results do not currently support the claimed contributions of the paper. The first batch of results in on a new task introduced by this paper. Why was a new domain introduced? How are existing domains not suitable. This is problematic because domains can easily exhibit designer bias, which is difficult to detect. Designing domains are very difficult and why benchmark domains that have been well vetted by the community are such an important standard. In the experiment, the parameters were randomly sampled---is a very non-conventional choice. Usually one performance a search for the best setting and then compares the results. This would introduce substantial variance in the results, requiring many more runs to make statistically significant conclusions.\n\nThe results on the first task are not clear. In fig4 one could argue that e-maml is perhaps performing the best, but the variance of the individual lines makes it difficult to conclude much. In fig5 rl2 gets the best final performance---do you have a hypothesis as to why? Much more analysis of the results is needed.\n\nThere are well-known measures used in transfer learning to access performance, such as jump-start. Why did you define new ones here?\n \nFigure 6 is difficult to read. Why not define the Gap and then plot the gap. These are very unclear plots especially bottom right. It's your job to sub-select and highlight results to clearly support the contribution of the paper---that is not the case here. Same thing with figure 7. I am not sure what to conclude from this graph.\n\nThe paper, overall is very informal and unpolished. The text is littered with colloquial language, which though fun, is not as precise as required for technical documents. Meta-RL is never formally and precisely defined. There are many strong statements e.g., : \"which indicates that at the very least the meta learning is able to do system identification correctly.\">> none of the results support such a claim. Expectations and policies are defined with U which is never formally defined. The background states the problem of study is a finite horizon MDP, but I think they mean episodic tasks. The word heuristic is used, when really should be metric or measure. ",
"The revised paper is not perfect, but improved substantially, and addresses multiple issues. I raised my review score.",
"\n“p2: \"In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they’ll often relate to better coverage over state visitation frequencies. Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016).\"\n\n“These are very recent refs - one should cite original work on hierarchical RL including:\n\nJ. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.\n\nM. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.”\n\nM. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997”\n\n\nThese refs cite older work in the area, which in turn cites the work you mention. This is not a review paper and hence mentioning every prior work in a field as large as hierarchical RL is not practical nor desired. We have added a review article by Barto and your last reference on HQ learning to account for this. \n\n=========================================================================\n\n\n\n\n“References to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches? \n\n(6) J. Schmidhuber. Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006. \n\n(7) J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987. \n \nPapers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it's RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.\n\nRef (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what's now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc. \n\nRef (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.”\n\nWe added several sentences regarding this to our paper. We have also connected this idea to a more broad interpretation of our work. Please see appendix B which cites this work in reference to our algorithm derivation. \n=========================================================================\n\n\nGeneral recommendation: Accept, provided the comments are taken into account, and the relation to previous work is established\n\nWe feel the paper now is substantially improved and we exerted significant energy addressing your concerns. 
Please see in particular the improved figures and heuristic metrics, as well as the improved works cited section, which address the majority of the major issues you had with this work. We would appreciate it if you could reconsider your score in light of these new revisions. \n\n\n\n=========================================================================",
"=========================================================================\n\n\n\np2: Authors write: \"In practice, these methods are often not used due to difficulties with high-dimensional observations, difficulty in implementation on arbitrary domains, and lack of promising results.\"\n\n“Not quite true - RL robots with high-dimensional video inputs and intrinsic motivation learned to explore in 2015: \n\n(2) Kompella, Stollenga, Luciw, Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 2015.”\n\n\nWe have adjusted the discussion and added this reference. \n\n=========================================================================\n\np2: Authors write: \"Although this line of work does not explicitly deal with exploration in meta learning, it remains a large source of inspiration for this work.\"\n\np2: Authors write: \"To the best of our knowledge, there does not exist any literature addressing the topic of exploration in meta RL.\"\n\n“But there is such literature - see the following meta-RL work where exploration is the central issue:\n\n(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.”\n\n\nWe have adjusted the discussion and added this reference. \n\n=========================================================================\n\n\n“J. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.” \n\n\nWe have added this reference. \n\n=========================================================================\n",
"\nFirst and foremost, we would like to apologize for having missed the relevant prior work by Schmidhuber et al. We have taken care to better connect our work to this prior work, as detailed below. \n\n=========================================================================\n\n“Equation (3) and equations on the top of page 4: Mathematically, it looks better to swap \\mathrm{d}\\tau and \\mathrm{d}\\bar{\\tau}, to obtain a consistent ordering with the double integrals.” \n\nAgreed. This change has been made. \n\n=========================================================================\n\n\n“In page 4, last paragraph before Section 5, “However, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes”: I did not fully understand this - please explain better.”\n\nPlease see equation 4 in the latest draft and the accompanying text. We have better explained the procedure. \n\n=========================================================================\n\n\nIt is not very clear if the authors use REINFORCE or more advanced approaches like TRPO/PPO/DDPG to perform policy gradient updates? \n\nFor E-MAML/MAML, the inner update is VPG and the outer update is PPO. For E-RL2/RL2, PPO is used. We have noted this in the experiments section of the paper. \n\n=========================================================================\n\n\n“I'd like to see more detailed hyperparameter settings.”\nWe have included some further discussion on the training procedure in the experiments section. Further, it is our intention to release the code for this paper, which will include the hyper-parameters used in these algorithms. We can also put these hyper-parameters into a table in an appendix of this paper, to ensure redundancy in their availability. \n\n=========================================================================\n\n\n“Figures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14.”\n\nWe agree. We have switched to a [2, 2]-layout. The figures are still somewhat small, though when viewed on a computer one can easily zoom in and read them more easily. Of course, we would be willing to move to a single column layout in the final version if the present figures are still too difficult to read. \n\n=========================================================================\n\n\n“Figures 5, 6, 9: Wouldn't it be better to also use log-scale on the x-axis for consistent comparison with curves in Krazy World experiments ?”\n\nWe have updated the figures and made the axes consistent. \n\n=========================================================================\n\n\n“It could be very interesting to benchmark also in Mujoco environments, such as modified Ant Maze.” \n\nWe have been working on continuous control tasks and would hope to include them in the final version. The difficulties we have thus far encountered with these tasks are interesting, but perhaps outside the scope of this paper at the present time. \n\n=========================================================================\n\n\n“Overall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not metalearning. 
As mentioned above, experiments in more complex, continuous control tasks with Mujoco simulators might be illuminating. “\n\nSee the above comment regarding continuous control. As for difficulties with terminology, some of this stems from following the leads set in the prior literature (the MAML and RL2 papers) which refer to the problem as meta learning. We have attempted to give a more thorough overview of lifelong learning/transfer learning in this revised draft. Please see our response to the first review for further details. \n\n=========================================================================\n\n\n“(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.” \n\n\nWe have added this citation. Apologies for having missed it. This reference was actually in our bib file but for some reason did not make it into the paper proper. ",
"This is an interesting paper about correcting some of the myopic bias in meta RL. For two existing algorithms (MAML, RL2) it proposes a modification of the metaloss that encourages more exploration in the first (couple of) test episodes. The approach is a reasonable one, the proposed methods seem to work, the (toy) domains are appropriate, and the paper is well-rounded with background, motivation and a lot of auxiliary results.\n\nThank you for this excellent summary and compliment of the work! \n\n=========================================================================\n\nSection 4 is of mixed rigor: some aspects are formally defined and clear, others are not defined at all, and in the current state many things are either incomplete or redundant. Please be more rigorous throughout, define all the terms you use (e.g. \\tau, R, \\bar{\\tau}, ...). Actually, the text never makes it clear how \\tau and \\ber{\\tau} relate to each other: make this connection in a formal way, please.\n\nWe have made the suggested improvements, clarifying notation and more explicitly defining tau and \\bar{tau}. R was defined in the MDP notation section and means the usual thing for MDPs. \n\n=========================================================================\n\nEquation 3 is over-complicated: it is actually just a product of two integrals, because all the terms are separable. \n\nYes, this is true. It was not our intention to show off or otherwise make this equation seem more complex than it is. In fact, we were trying to simplify things by not skipping steps and separating the integrals prematurely. We asked our colleagues about this, and the response was mixed with half of them preferring the current notation and the other half preferring earlier separation. If you have strong feelings about this, we are willing to change it for the final version. \n=========================================================================\n\n\nThe integral notation is not meaningful: you can’t sample something in the subscript the way you would in an expectation. Please make this rigorous.\n\nThis is a fair comment. We were simply trying to make explicit the dependence on the sampling distribution, since it is one of the key insights of our method. However, we agree with you and have changed the notation. We have added an appendix B which seeks to examine some of these choices in a more rigorous context. \n\n=========================================================================\n\n\nThe variability across seems extremely large, so it might be worth averaging over mores seeds for the learning curves, so that differences are more likely to be significant.\n\nWe did this and it helped substantially with obtaining more smooth results with more significant differences. Thank you for the suggestion it was very helpful! \n\n=========================================================================\n\n\nFigure fontsizes are too small to read, and the figures in the appendix are impossible to read. Also, I’d recommend always plotting std instead of variance, so that the units or reward remain comparable.\n\nFixed. Thanks! \n=========================================================================\n\n\nI understand that you built a rich, flexible domain. But please describe the variant you actually use, cleanly, without all the other variants. Or, alternatively, run experiments on multiple variants.\n\nWe plan to release the source for the domain we used. 
But the variant we used is the one pictured in the paper, with all options turned on. We can add the environment hyperparameters to an appendix of the paper with a brief description if you think this would be useful. \n\n=========================================================================\n\nRating: 6: Marginally above acceptance threshold\n\nIn light of the fact we have addressed your major concerns with this work, we would appreciate it if you would consider revising your score. \n",
"\nFigure 6 is difficult to read. \n\nThe figures have been dramatically improved. We apologize for the poor initial pass. \n\n=========================================================================\n\n\nWhy not define the Gap and then plot the gap. \n\nWe feel it is illustrative to see the initial policy and the post-update policy in the same place. Actually seeing the gap between the two algorithms can be easier to interpret than the gap itself, which is a scalar. \n\n=========================================================================\n\n\nThese are very unclear plots especially bottom right. It's your job to sub-select and highlight results to clearly support the contribution of the paper---that is not the case here. Same thing with figure 7. I am not sure what to conclude from this graph.\n\nWe took these comments to heart and exerted a lot of effort on improving the plots. We solicited feedback from our colleagues who suggest the new plots are much more clear, readable, and better convey our points. We also took better care to clarify this in our captions. \n\n=========================================================================\n\nThe paper, overall is very informal and unpolished. The text is littered with colloquial language, which though fun, is not as precise as required for technical documents. Meta-RL is never formally and precisely defined. There are many strong statements e.g., : \"which indicates that at the very least the meta learning is able to do system identification correctly.\">> none of the results support such a claim. Expectations and policies are defined with U which is never formally defined. The background states the problem of study is a finite horizon MDP, but I think they mean episodic tasks. The word heuristic is used, when really should be metric or measure. \n\nThank you for these comments. We have cleaned up the writing. \n=========================================================================",
"The first and primary issue is that authors claim their exists not prior work on \"exploration in Meta-RL\"....The paper under-review must survey such literature and discuss why these new approaches are a unique contribution.\n\nWe have added numerous references to these fields in the related literature section of the paper and clarified our contribution in this context. We are interested in the problem of meta-learning for RL (which largely deals with finding initializations that are quick to adapt to new domains). This problem ends up having a different formulation from the areas mentioned above. Our specific contribution is the creation of two new algorithms that find good initializations for RL algorithms to quickly adapt to new domains, yet do not sacrifice exploratory power to obtain these initializations. We show further that one can consider a large number of interesting algorithms for finding initializations that are good at exploring. This is also a novel contribution. \n=========================================================================\n\n\nThe empirical results do not currently support the claimed contributions of the paper. \n\nThe results have been strengthened since the initial submission. It is now clear that our methods provide substantially better performance. Further, the heuristic metrics indicate they are superior at exploration. \n\n=========================================================================\n\nThe first batch of results in on a new task introduced by this paper. Why was a new domain introduced? How are existing domains not suitable. \n\nThe domains are gridworlds and mazes, neither of which should require this sort of justification prior to use. The gridworld does not use a standard reference implementation (we am not aware of any such implementation) and was designed so that its level of difficulty could be more easily controlled during experimentation. \n\n=========================================================================\n\nDesigning domains are very difficult and why benchmark domains that have been well vetted by the community are such an important standard\nWe agree with this. And indeed, we ourselves have designed reference domains for RL problems that are extremely popular in the community. In these cases, the domains were usually derived from an initial paper such as this one and subsequently improved upon by the community over time. In our experience, releasing a new domain in the context of this paper aligns well with how our previous successful domains have been released. \n=========================================================================\n\nIn the experiment, the parameters were randomly sampled---is a very non-conventional choice. Usually one performance a search for the best setting and then compares the results. This would introduce substantial variance in the results, requiring many more runs to make statistically significant conclusions.\n\nWe have averaged over many more trials and this has significantly smoothed the curves. We were trying to avoid overfitting, which is a systematic problem in the way RL results are typically reported. \n\n=========================================================================\n\n\nThe results on the first task are not clear. In fig4 one could argue that e-maml is perhaps performing the best, but the variance of the individual lines makes it difficult to conclude much. In fig5 rl2 gets the best final performance---do you have a hypothesis as to why? 
Much more analysis of the results is needed.\n\nThe result are more clear now and RL2 no longer gets the best final performance. Also, an important thing to consider is how fast the algorithms approach their final performance. For instance, in the referenced graph, E-MAML converged within ~10 million timesteps whereas RL2 took nearly twice as long. We apologize for not making this important point more explicit in the paper. In any case, this particular comment has been outmoded. \n\n=========================================================================\n\n\nThere are well-known measures used in transfer learning to access performance, such as jump-start. Why did you define new ones here?\n\nJump start is quite similar to the gap metric we consider in the paper. We have clarified this. \n\n=========================================================================\n",
"The following concerns were listed across multiple reviewers: \n\n1) Our paper misses citations wherein the similar problems are considered under different names. This problem is quite a large one, and it is unfortunate that the literature is at times disjoint and difficult to search. You will notice that the first and second reviewer both told us that we missed many essential references, but the crucial missed references provided by both are entirely different. Further, the third reviewer did not indicate any issues with the literature we cited. We believe this indicates the difficulty in accurately capturing prior work in this area. \n\n2) The graphs suffered from a variety of deficiencies. These deficiencies were both major (not clearly and convincingly demonstrating the strengths of our proposed methods) and minor (the text or graphs themselves being at times too small). \n\n3) There were portions of the paper that appeared hastily written or wherein spelling and grammatical mistakes were present. Further, there were claims that the reviewers felt were not sufficiently substantiated and parts of the paper lacked rigor. \n\nWe have addressed these concerns in the following ways: \n\n1) We have made an effort to address relevant prior literature. In particular, we have better explained the work’s connection to prior work by Schmidhuber et al and better explained what distinguishes this work from prior work on lifelong learning. See responses to individual reviewers for a more thorough explanation of these changes. Further, we have included an additional appendix which highlights our algorithmic development as a novel process for investigating exploration in meta-RL. We feel this appendix should completely remove any doubts regarding the novelty of this work. \n\n2) As for the graphs, we have fixed the presentation and layout issues. We have averaged over more seeds, which decreased the overall reported standard deviation across all algorithms, thus making the graphs more legible. We have also separated the learning curves onto multiple plots so that we can directly plot the standard deviations onto the learning curves without the plots appearing too busy. \n\n3) We have carefully edited the paper and fixed any substandard writing. We have also taken care to properly define notation, and made several improvements to the notation. We improved the writing’s clarity, and better highlighted the strength of our contributions. We removed several claims that the reviewers felt were too strong, and replaced them with more agreeable claims that are better supported by the experimental results. We have added an interesting new appendix which considers some of our insights in a more formal and rigorous manner. Finally, we have completely rewritten the experiments section, better explaining the experimental procedure. \n\n\nPlease see the responses to individual reviews below for further elaboration on specific changes we made to address reviewer comments. \n"
] | [
7,
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Skk3Jm96W",
"iclr_2018_Skk3Jm96W",
"iclr_2018_Skk3Jm96W",
"rJfePXpMG",
"SkE07mveG",
"SkE07mveG",
"SkE07mveG",
"SJ0Q_6Hlf",
"ryse_yclM",
"ryse_yclM",
"iclr_2018_Skk3Jm96W"
] |
iclr_2018_BkA7gfZAb | Stable Distribution Alignment Using the Dual of the Adversarial Distance | Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time. | workshop-papers | All the reviewers noted that the dual formulation, as presented, only applies to the logistic family of classifiers. The kernelization is of course something that *can* be done, as argued by the authors, but is not in fact approached in the submission, only in the rebuttal. The toy-ish nature of the problems tackled in the submission limits the value of the presentation.
If the authors incorporate their domain adaptation results (SVHN-->MNIST and others) using the kernelized approach, do the stability analysis for those cases, and obtain reasonable results on domain adaptation benchmarks (70% on SVHN-->MNIST is, for instance, on the low side compared to the pixel-transfer-based GAN approaches out there!), then I think it'd be a great paper.
As such, I can only recommend it as an invitation to the workshop track, as the dual formulation is interesting and potentially useful. | train | [
"r1u3MhYxG",
"S1eNChYxG",
"S1sYODRlG",
"SyfSHuTmG",
"HyKfsBd7f",
"B10bZyYMM",
"H1iDp4dfG",
"Sy6J64Ozz",
"rJRn3EOMG",
"SkMB24dMM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper deals with “fixing GANs at the computational level”, in a similar sprit to f-GANs and WGANs. The fix is very specific and restricted. It relies on the logistic regression model as the discriminator, and the dual formulation of logistic regression by Jaakkola and Haussler. \n\nComments: \n1) Experiments are performed by restricting alternatives to also use a linear classifier for the discriminator. It is mentioned that results are expected to be lower than those produced by methods with a multi-layer classifier as the discriminator (e.g. Shen et al., Wasserstein distance guided representation learning for domain adaptation, Ganin et al., Domain-adversarial training of neural networks?). \n2) Considering this is an unsupervised domain adaption problem, how do you set the hyper-parameters lambda and the kernel width? The “reverse validation” method described in Ganin et al., Domain-adversarial training of neural networks, JMLR, 2016 might be helpful. \n\nMinor comments: on the upper-bound of the distance, alpha_i instead of alpha^\\top, and please label the axes in your figures. ",
"This paper studies a dual formulation of an adversarial loss based on an upper-bound of the logistic loss. This allows the authors to turn the standard min max problem of adversarial training into a single minimization problem, which is easier to solve. The method is demonstrated on a toy example and on the task of unsupervised domain adaptation.\n\nStrengths:\n- The derivation of the dual formulation is novel\n- The dual formulation simplifies adversarial training\n- The experiments show the better behavior of the method compared to adversarial training for domain adaptation\n\nWeaknesses:\n- It is unclear that this idea would generalize beyond a logistic regression classifier, which might limit its applicability in practice\n- It would have been nice to see results on other tasks than domain adaptation, such as synthetic image generation, for which GANs are often used\n- It would be interesting to see if the DA results with a kernel classifier are better (comparable to the state of the art)\n- The mathematical derivations have some errors\n\n\nDetailed comments:\n- The upper bound used to derive the formulation applies to a logistic regression classifier. While effective, such a classifier might not be as powerful as multi-layer architectures that are used as discriminators. I would be interested to know if they authors see ways to generalize to better classifiers.\n\n- The second weakness listed above might be related to the first one. Did the authors tried their approach to non-DA tasks, such as generating images, as often done with GANs? Showing such results would be more convincing. However, I wonder if the fact that the method has to rely on a simple classifier does not limit its ability to tackle other tasks.\n\n- The DA results are shown with a linear classifier, for the comparison to the baselines to be fair, which I appreciate. However, to evaluate the effectiveness of the method, it would be interesting to also report results with a kernel-based classifier, so as to see how it compares to the state of the art.\n\n- There are some errors and unclear things in the mathematical derivations:\n* In the equation above Eq. 2, \\alpha should in fact be \\alpha_i, and it is not a vector (no need to transpose it)\n* In Eq. 2, it should be \\alpha_i \\alpha_j instead of \\alpha^T\\alpha\n* In Eq. 3, it is unclear to me where the constraint 0 \\leq \\alpha \\leq 1 comes from. The origin of the last equality constraints on the sums of \\alpha_A and \\alpha_B is also unclear to me.\n* In Eq. 3, it is also not clear to me why the third term has a different constant weight than the first two. This would have an impact on the relationship to the MMD\n\n- The idea of sample reweighting within the MMD was in fact already used for DA, e.g., Huang et al., NIPS 2007, Gong et al., ICML 2013. What is done here is quite different, but I think it would be worth discussing these relationships in the paper.\n\n- The paper is reasonably clear, but could be improved with some more details on the mathematical derivations (e.g., explaining where the constraints on \\alpha come from), and on the experiments (it is not entirely clear how the distributions of accuracies were obtained).\n",
"This paper proposes to re-formulate the GAN saddle point objective (for a logistic regression discriminator) as a minimization problem by dualizing the maximum likelihood objective for regularized logistic regression (where the dual function can be obtained in closed form when the discriminator is linear). They motivate their approach by repeating the previously made claim that the naive gradient approach is non-convergent for generic saddle point problems (Figure 1); while a gradient approach often works well for a minimization formulation.\n\nThe dual problem of regularized logistic regression is an entropy-regularized concave quadratic objective problem where the Hessian is y_i y_j <x_i, x_j>, thus highlighting the pairwise similarities between the points x_i & x_j; here the labels represent whether the point x comes from the samples A from the target distribution or B from the proposal distribution. This paper then compare this objective with the MMD distance between the samples A & B. It points out that the adversarial logistic distance can be viewed as an iteratively reweighted empirical estimator of the MMD distance, an interesting analogy (but also showing the limited power of the adversarial logistic distance for getting good generating distributions, given e.g. that the MMD has been observed in the past to perform poorly for face generation [Dziugaite et al. UAI 2015]). From this analogy, one could expect the method to improves over MMD, but not necessarily significantly in comparison to an approach which would use more powerful discriminators.\n\nThis paper then investigates the behavior of this adversarial logistic distance in the context of aligning distributions for domain adaptation, with experiments on a visual adaptation task. They observe better performance for their approach in comparison to ADDA, improved WGAN and MMD, when restricting the discriminators to be a linear classifier.\n\n== Evaluation \n\nI found this paper quite clear to read and enjoyed reading it. The observations are interesting, despite being on the toyish side. I am not an expert on GANs for domain adaptation, and thus I can not judge of the quality of the experimental comparison for this application, but it seemed reasonable, apart for the restriction to the linear discriminators (which is required by the framework of this paper).\n\nOne concern about the paper (but this is an unfortunate common feature of most GAN papers) is that it ignores the vast knowledge on saddle point optimization coming from the optimization community. The instability of a gradient method on non-strongly convex-concave saddle point problems (like the bilinear form of Figure 1) is a well-known property, and many alternative *stable* gradient based algorithms have been proposed to solve saddle point problems which do not require transforming them to a minimization problem as suggested in this paper. Moreover, the transformation to the minimization form crucially required the closed form computation of the dual function (with w* just defined above equation (2)), and this is limited to linear discriminators, thus ruling out the use of the proposed approach to more powerful discriminators like deep neural nets. Thus the significance appears a bit limited to me.\n\n== Other comments\n\n1) Note that d(A, B'_theta) is *equal* to min_alpha max_w (...) above equation (2) (it is not just an upper bound). 
This is a standard result coming from the fact that the Fenchel dual problem to regularized maximum likelihood is the maximum entropy problem with a quadratic objective as (2). See e.g. Section 2.2 of [Collins et al. JMLR 2008] (this is for the more general multiclass logistic regression problem, but (2) is just the binary special case of equation (4) in the [Collins ... ] reference). And note that the \"w(u)\" defined in this reference is the lambda*w*(alpha) optimal relationship defined in this paper (but without the 1/lambda factor because of just slightly different writing; the point though is that strong duality holds there and thus one really has equality).\n\n\n[Collins et al. JMLR 2008] Michael Collins, Amir Globerson, Terry Koo , Xavier Carreras, Peter L. Bartlett, Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks, , JMLR 2008.\n\n [Dziugaite et al. UAI 2015] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015",
"To show that our approach works with non-linear discriminators, we ran a series of experiments with kernel discriminators. Our dual approach achieved similar but more stable results to those achieved by Deep Adaptation Networks (DAN) [1] which uses kernelized MMD. On the SVHN->MNIST shift our dual method obtains top accuracy of 70%, on par with DAN results reported in [2,3]. We tried both single fixed, multiple fixed and multiple varying kernels updated via gradient descent with different learning rates and kernel bandwidth values. Similar more stable behavior was observed when training our dual analog of Domain Confusion [4] method, as reported in the paper. This further confirms our main hypothesis that our proposed cooperative formulation of the problem via optimal alignment lead to better stability of the resulting distribution alignment method wrt the variation of hyperparameters.\n\n[1] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. \"Learning transferable features with deep adaptation networks.\" ICML 2015\n[2] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada “Asymmetric Tri-training for Unsupervised Domain Adaptation” CoRR 2017\n[3] Philip Haeusser, Thomas Frerix, Alexander Mordvintsev, Daniel Cremers “Associative Domain Adaptation” CoRR 2017\n[4] Eric Tzeng, Judy Hoffman, Trevor Darrell, Kate Saenko “Simultaneous Deep Transfer Across Domains and Tasks” ICCV 2015\n",
"Indeed, there seem to be a relationship between instance reweighting approach, theoretically justified and empirically shown to work in the presence of domain shift before, and iteratively reweighted discrimination from this paper, thank you for pointing us towards this! We addressed this in the related work section (page 2) in the last revision. We want to stress again, that the main proposition of the paper is not to \"use an iteratively reweighted MMD\" - this is only a special case of a more general idea that suggests replacing a \"competing\" objective with a cooperative \"correspondence\" objective - and success of instance reweighting of training samples seems to further confirm applicability of the suggested family of \"correspondence\" objectives. We discussed this in more details in the comment section above titled \"To all reviewers\".",
"3. Huang et al., NIPS 2007:\nThe title is Correcting Sample Selection Bias by Unlabeled Data\n\n4. You explanation answers my question. I was not sure how DISTRIBUTIONS of accuracies were obtained, as opposed to just accuracies.\n",
"1. “<..> Minor comments: on the upper-bound of the distance, alpha_i instead of alpha^\\top, and please label the axes in your figures. “ - fixed (eq. 2-3, pp. 6, 8)\n\n2. “<..> Experiments are performed by restricting alternatives to also use a linear classifier for the discriminator.” - all three reviewers mentioned this, so we considered writing a single reply and put it into the reply section at the top of the page.\n\n3. “<..> Considering this is an unsupervised domain adaptation problem, how do you set the hyper-parameters lambda and the kernel width?” - we first chose them semi-manually by validating on target, and then tested performance of all models with all hyperparameters spanning a grid between maximal and minimal optimal values from the first step; exact values are given in the supplementary.\n",
"1. “<..> It is unclear that this idea would generalize beyond a logistic regression classifier, which might limit its applicability in practice”\n\nAll three reviewers mentioned this, please see the reply section at the top of the page.\n\n2. “<..> there are some errors and unclear things in the mathematical derivations” \n\nThank you for paying such close attention to our derivations and pointing to these issues. [0, 1]-constraints on \\alpha values are required by the upper bound - otherwise it would not hold; equation 3 illustrates computing a quadratic form in block form by putting down terms that correspond to interactions between points from same and different domains separately; the interaction is symmetric, therefore there are two identical cross-domain terms that results in a multiplier of two in front of that term. We added these details (page 4).\n\nThe constraint on alpha sums (sum over A = sum over B) results from optimality conditions on the bias term of the discriminator - and it indeed does not quite follow from our derivations in their current form, you are completely right. We also added that.\n\n3. “<..> - The idea of sample reweighting within the MMD was in fact already used for DA, e.g., Huang et al., NIPS 2007, Gong et al., ICML 2013. What is done here is quite different, but I think it would be worth discussing these relationships in the paper.”\n\nIndeed, Gong et al. were optimizing a sample reweighted MMD but restricted weights of points from one of domains to always equal one (1.0). They had somewhat different reasoning, as they were choosing points for auxiliary tasks at multiple scales, therefore, no iterative reweighting was performed. But, yes, thank you for these references, they indeed used a very similar idea and we referenced this paper in the related work section.\n\nUnfortunately, we were unable to find a paper you referred as “Huang et al., NIPS 2007”, do you think you could remember its actual name? It would be very interesting to read if it was as closely related to our work as Gong et al.\n\n4. “<..> it is not entirely clear how the distributions of accuracies were obtained”\n\nCould you please comment on what exactly was not entirely clear? We chose reasonable values of hyperparameters for each model by hand and them performed a grid search for values “in-between”. Plotted distributions present how accuracies changed for different runs. More details are given in the supplementary.\n",
"1. “<..> ignores the vast knowledge on saddle point optimization coming from the optimization community”\n\nWe want to thank you for pointing this out. We are concerned about the lack of focus on actual optimization methods in the context of GANs too. We added a paragraph on Mirror Descent and Fictitious Play (page 3). If you happen to have any other suggestions on methods to discuss in this context, please let us know.\n\n2. “<..> the point though is that strong duality holds there and thus one really has equality”\n\nCompletely correct, thank you. We put that inequality to emphasise the log-sigmoid upper bound, but considering that it is also tight for optimal choice of alpha anyway, this indeed must be very confusing, we changed that (eq. 2-3, p. 4).",
" “<..> limited to linear discriminators, thus ruling out the use of the proposed approach to more powerful discriminators like deep neural nets” \n “<..> It is unclear that this idea would generalize beyond a logistic regression classifier, which might limit its applicability in practice”\n“<..> It is mentioned that results are expected to be lower than those produced by methods with a multi-layer classifier as the discriminator”\n\nThe reviewers are correct in that our proposed dual formulation in its current form only applies to the logistic family of classifiers, however, this is not limited to linear classifiers, because we can use kernels to obtain a nonlinear classifier. For example, MMD-based methods also use kernels and obtain state of the art results for domain adaptation. \n\nWhile we agree that the exact dual exists only in the logistic discriminator case, when one can use the duality and solve the inner problem in closed form, we want to stress that our paper presents a more general framework for alignment that can be extended to other classes of functions. More specifically, one can rewrite the quadratic form in kernel logistic regression [eq. 3, page 4] as a Frobenius inner product of a kernel matrix Q with a symmetric rank 1 alignment matrix S (outer product of alpha with itself). The kernel matrix specifies distances between points and S chooses pairs that minimize the total distance. This way the problem reduces to “maximizing the maximum agreement between the alignment and similarity matrices” that in turn might be seen as replacing “adversity” in the original problem with “cooperation” in the dual maximization problem. In our paper, S is a rank 1 matrix, but we could choose different alignment matrix parameterizations that would correspond to having a neural network discriminator in the adversarial problem. It is not exactly dual to minimization of any existing adversarial distances, but exploits same underlying principle of “iteratively-reweighted alignment matrix fitting” discussed in this paper. We assert that in order to understand the basic properties of the resulting formulation, an in-depth discussion of the well-studied logistic case is no less important than the discussion involving complicated deep models, which deserves a paper of its own. This paper proposes a more stable “cooperative” problem reformulation rather than a new adversarial objective as many recent papers do.\n\nWe added a section discussing relations to existing methods from this “cooperative point of view” and exciting future possibilities. (page 8)\n"
] | [
5,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkA7gfZAb",
"iclr_2018_BkA7gfZAb",
"iclr_2018_BkA7gfZAb",
"S1eNChYxG",
"B10bZyYMM",
"Sy6J64Ozz",
"r1u3MhYxG",
"S1eNChYxG",
"S1sYODRlG",
"S1sYODRlG"
] |
iclr_2018_H18WqugAb | Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks | Humans can understand and produce new utterances effortlessly, thanks to their systematic compositional skills. Once a person learns the meaning of a new verb "dax," he or she can immediately understand the meaning of "dax twice" or "sing and dax." In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can generalize well when the differences between training and test commands are small, so that they can apply "mix-and-match" strategies to solve the task. However, when generalization requires systematic compositional skills (as in the "dax" example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, supporting the conjecture that lack of systematicity is an important factor explaining why neural networks need very large training sets. | workshop-papers | Reviewers were somewhat lukewarm about this paper, which seeks to present an analysis of the limitations of sequence models when it comes to understanding compositionality. Somewhat synthetic experiments show that such models generalise poorly on patterns not attested during training, even if the information required to interpret such patterns is present in the training data when combined with knowledge of the compositional structure of the language. This conclusion seems as unsurprising to me as it does to some of the reviewers, so I would be inclined to agree with the moderate enthusiasm two out of three reviewers have for the paper, and suggest that it be redirected to the workshop track.
Other criticisms found in the reviews have to do with the lack of any discussion on the topic of how to address these limitations, or what message to take home from these empirical observations. It would be good for the authors to consider how to evaluate their claims against "real" data, to avoid the accusation that the conclusion is trivial from the task setup.
Therefore, while well written, it is not clear that the paper is ready for the main conference. It could potentially generate interesting discussion, so I am happy for it to be invited to the workshop track, or failing that, to suggest that further work on this topic be done before the paper is accepted somewhere. | test | [
"S1m7bXulz",
"By_6glcgf",
"BkpqEH5gf",
"ByhFlFIZM",
"BkQM5uI-f",
"rJVyquUZG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper focuses on the zero-shot learning compositional capabilities of modern sequence-to-sequence RNNs. Through a series of experiments and a newly defined dataset, it exposes the short-comings of current seq2seq RNN architectures. The proposed dataset, called the SCAN dataset, is a selected subset of the CommonAI navigation tasks data set. This subset is chosen such that each command sequence corresponds to exactly one target action sequence, making it possible to apply standard seq2seq methods. Existing methods are then compared based on how accurately they can produce the target action sequence based on the command input sequence.\n\nThe introduction covers relevant literature and nicely describes the motivation for later experiments. Description of the model architecture is largely done in the appendix, this puts the focus of the paper on the experimental section. This choice seems to be appropriate, since standard methods are used. Figure 2 is sufficient to illustrate the model to readers familiar with the literature.\n\nThe experimental part establishes a baseline using standard seq2seq models on the new dataset, by exploring large variations of model architectures and a large part of the hyper-parameter space. This papers experimentation sections sets a positive example by exploring a comparatively large space of standard model architectures on the problem it proposes. This search enables the authors to come to convincing conclusions regarding the shortcomings of current models. The paper explores in particular:\n1.) Model generalization to unknown data similar to the training set.\n2.) Model generalization to data-sequences longer than the training set.\n3.) Generalization to composite commands, where a part of the command is never observed in sequence in the training set.\n4.) A recreation of a similar problem in the machine translation context.\nThese experiments show that modern sequence to sequence models do not solve the systematicity problem, while making clear by application to machine translation, why such a solution would be desirable. The SCAN data-set has the potential to become an interesting test-case for future research in this direction.\n\nThe experimental results shown in this paper are clearly compelling in exposing the weaknesses of current seq2seq RNN models. However, where the paper falls a bit short is in the discussion / outlook in terms of suggestions of how one can go about tackling these shortcomings. ",
"The paper analyzed the composition abilities of Recurrent Neural Networks. The authors analyzed the the generalization for the following scenarios\n\n- the generalization ability of RNNs on random subset of SCAN commands\n- the generalization ability of RNNs on longer SCAN commands\n- The generalization ability of composition over primitive commands.\n\nThe experiments supported the hypothesis that the RNNs are able to \n\n- generalize zero-shot to new commands. \n- difficulty generalizing to longer sequence (compared to training sequences) of commands.\n- the ability of the model generalizing to composition of primitive commands seem to depend heavily on the whether the action is seen during training. The model does not seem to generalize to completely new action and commands (like Jump), however, seems to generalize much better for Turn Left, since it has seen the action during training (even though not the particular commands)\n\nOverall, the paper is well written and easy to follow. The experiments are complete. The results and analysis are informative.\n\nAs for future work, I think an interesting direction would also be to investigate the composition abilities for RNNs with latent (stochastic) variables. For example, analyzing whether the latent stochastic variables may shown to actually help with generalization of composition of primitive commands. \n\n ",
"This paper argues about limitations of RNNs to learn models than exhibit a human-like compositional operation that facilitates generalization to unseen data, ex. zero-shot or one-shot applications. The paper does not present a new method, it only focuses on analyzing learning situations that illustrate their main ideas. To do this, they introduce a new dataset that facilitates the analysis of a Seq2Seq learning case. They conduct a complete experimentation, testing different popular RNN architectures, as well as parameter and hyperparameters values. \n\nThe main idea in the paper is that RNNs applied to Seq2Seq case are learning a representation based only on \"memorizing\" a mixture of constructions that have been observed during training, therefore, they can not show the compositional learning abilities exhibit by humans (that authors refer as systematic compositionality). Authors present a set of experiments designed to support this observation. \n\nWhile the experiments are compelling, as I explain below, I believe there is an underlying assumption that is not considered. Performance on training set by the best model is close to perfect (99.5%), so the model is really learning the task. Authors are then testing the model using test sets that do not follow the same distribution than training data, example, longer sequences. By doing so, they are breaking one of the most fundamental assumptions of inductive machine learning, i.e., the distribution of train and test data should be equal. Accordingly, my main point is the following: the model is indeed learning the task, as measured by performance on training set, so authors are only showing that the solution selected by the RNN does not follow the one that seems to be used by humans. Importantly, this does not entail that using a better regularization a similar RNN model can indeed learn such a representation. In this sense, the paper would really produce a more significant contribution is the authors can include some ideas about the ingredients of a RNN model, a variant of it, or a different type of model, must have to learn the compositional representation suggested by the authors, that I agree present convenient generalization capabilities. \n\n\nAnyway, I believe the paper is interesting and the authors are exposing interesting facts that might be worth to spread in our community, so I rate the paper as slightly over the acceptance threshold.",
"We thank the reviewers for very constructive feedback. We will thoroughly incorporate your suggestions in our revisions. We refer here to reviews by the name posted with the review (“AnonReviewer1”, etc.) rather than OpenReview order. Thus, R1 refers to “AnonReviewer1”.\n\nFirst, we would like to remark that our paper is not just about shortcomings of seq2seq models, but also about their impressive generalization strengths. Experiment 1 shows strong generalization to zero-shot cases, even when the training data covers a small fraction of task space (e.g., 8%). Even in the most challenging Experiment 3, the best models do generalize, to a certain extent, to composed usages of some primitive commands with familiar output actions (\"turn left\"). These are positive results (in line with the empirical achievements of seq2seq models), and in our revisions we will emphasize them more.\n\nIn other cases, there were dramatic generalization failures. We see these cases as challenges, encouraging researchers to design new models that can more successfully address compositional learning. We are already seeing a positive impact from the public release of SCAN: We were contacted by several teams actively working on our challenges, and we are very excited to see the new ideas the SCAN tasks will stimulate.\n\nWe were pleased that the paper was well received by all reviewers, and there was substantial agreement on its strengths (introducing an interesting challenge, detailed experimentation, careful hyperparameter search, clarity, etc.). The most significant critique, raised by R2 and R3, was that they would like more discussion of how the shortcomings of current seq2seq models can be tackled. In response to this, we will substantially expand our discussion of promising ways of addressing these shortcomings. Our current thinking on how to tackle these problems is outlined below.\n\nWe believe that the crucial component that current models are missing is the ability to extract systematic rules from the training data. R2 observes that some of our experiments violate the basic assumption that training and test data should come from the same distribution. We appreciate this point, and we believe it also depends on the degree of abstraction that a model performs on the input data. A model operating in \"rule space\" could extract translation rules such as:\n\ntranslate(x and y) -> translate(x) translate(y)\ntranslate(x twice) -> translate(x) translate(x)\n\nThen, if the meaning of a new command, translate(“jump”), is learned at training time and acts as a variable the rules can be applied to, no further learning is needed at test time. When represented in this more abstract way, the training and test distributions are quite similar, even if they differ in terms of shallower statistics such as word frequency. We conjecture that humans generalize in this way when learning novel compositional systems, and we are currently designing behavioral experiments to verify this hypothesis.\n\nHow can we encourage a general seq2seq model to extract rules from data, rather than shallower generalizations? 
We are considering several possibilities:\n\n1) Learning to learn: exposing a model to a number of different environments regulated by similar rules; an objective function requiring successful generalization to new environments would force models to learn the shared general rules;\n\n2) More structure/stronger priors: models akin to recent neural program induction or related could provide RNNs with access to a set of manually-encoded or (ideally) learned functions; the RNN job would then be to learn to compose these functions as appropriate;\n\n3) Differentiable data structures: extending recent work on Memory Networks, Neural Turing Machines and related formalisms, a seq2seq model could be equipped with quasi-discrete memory structures, enabling separate storage of variables, which in turn might encourage abstract rule learning.\n\nR1 proposal of a model with latent stochastic variables is also interesting, and we will further explore it.\n\nOther ideas might work specifically for the SCAN tasks (e.g., ad hoc \"copying\" mechanisms, or special ways to induce new word embeddings). The research lines described above point instead to more general models, combining the effectiveness of current seq2seq models with more human-like generalization capabilities. In our revisions, we will include a broader discussion of the implications of our results, and these new ideas for addressing the SCAN tasks.\n\nGiven our substantial positive results in some cases, and strong interest from other teams in tackling the SCAN tasks, there is significant opportunity to make progress on compositional learning and advance the state-of-the-art in seq2seq models. Our paper offers a set of concrete tasks for catalyzing this progress. We hope that the discussion phase can resolve some of the questions regarding their significance, and how to best make progress in compositional learning.",
"Thanks for your supportive review. We reply to all reviewers jointly in our first comment to the third reviewer below.",
"Thanks for your constructive feedback. We reply to all reviewers jointly in our first comment to the third reviewer below."
] | [
6,
7,
6,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1
] | [
"iclr_2018_H18WqugAb",
"iclr_2018_H18WqugAb",
"iclr_2018_H18WqugAb",
"S1m7bXulz",
"By_6glcgf",
"BkpqEH5gf"
] |
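Illustrative aside (not part of the original record): the rule-extraction idea sketched in the author response above — e.g. translate(x and y) -> translate(x) translate(y) — can be made concrete with a toy interpreter. The following is a minimal, hypothetical Python sketch, not the SCAN dataset code or any seq2seq model from the paper, showing how such composition rules, once represented explicitly, generalize to commands never seen during training.

```python
# Minimal, hypothetical sketch of rule-based interpretation of SCAN-style
# commands; illustrative only, not the paper's dataset or any trained model.
PRIMITIVES = {"walk": ["WALK"], "run": ["RUN"], "jump": ["JUMP"]}  # "jump" could be a newly learned word

def translate(command):
    """Recursively apply composition rules to a whitespace-tokenized command."""
    tokens = command.split()
    if "and" in tokens:                       # translate(x and y) -> translate(x) translate(y)
        i = tokens.index("and")
        return translate(" ".join(tokens[:i])) + translate(" ".join(tokens[i + 1:]))
    if tokens[-1] == "twice":                 # translate(x twice) -> translate(x) translate(x)
        return 2 * translate(" ".join(tokens[:-1]))
    return PRIMITIVES[tokens[0]]              # base case: a primitive command

# A composed use of "jump" never needs to appear in the training data:
print(translate("jump twice and walk"))      # ['JUMP', 'JUMP', 'WALK']
```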
iclr_2018_HJ3d2Ax0- | Benefits of Depth for Long-Term Memory of Recurrent Networks | The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies. However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited. Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises. In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales. To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank. Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence. We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts. Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all. Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks. We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation. | workshop-papers | This paper attempts a theoretical treatment of the influence of depth in RNNs on their ability to capture dependencies in the data. All reviewers found the theoretical contribution of the paper interesting, and while there were problems raised regarding formalisation, they appear to have been adequately addressed in the revisions to the paper. The main concern in all three reviews surrounds the evaluation, and weakness thereof. The overarching point of contention seems to be that the theory relates to a particular formulation of RNNs (RAC), causing doubts that the results lift to other architectural variants which are used to obtain state-of-the-art results on tasks such as language modelling. It seems that the paper could be significantly improved by the provision of stronger empirical results to support the theory, or a more convincing argument as to why the results should transfer from, say, RAC to LSTMs. The authors point to two papers on the matter in their response, but it is not clear this is a substitute for experimental validation. I find the paper a bit borderline because of this, and recommend redirection to the workshop. | train | [
"B1L-7zDgz",
"SJl6_ov-G",
"BJpAdoezz",
"Hkb2oX6XG",
"BJCcFQTXG",
"S1XOyeaQf",
"rJvf61TXM",
"BJQjt8d-G",
"BysD9IxbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"This paper investigates an effect of time dependencies in a specific type of RNN.\n\nThe idea is important and this paper seems sound. However, I am not sure that the main result (Theorem 1) explains an effect of depth sufficiently.\n\n--Main comment\nAbout the deep network case in Theorem 1, how $L$ affects the bound on ranks? In the current statement, the result seems independent to $L$ when $L \\geq 2$. I think that this paper should quantify the effect of an increase of $L$.\n\n--Sub comment\nNumerical experiments for calculating the separation rank is necessary to provide evidence of the main result. Only a simple example will make this paper more convincing.",
"After reading the authors's rebuttal I increased my score from a 7 to a 6. I do think the paper would benefit from experimental results, but agree with the authors that the theoretical results are non-trivial and interesting on their own merit.\n\n------------------------\nThe paper presents a theoretical analysis of depth in RNNs (technically a variant called RACs) i.e. stacking RNNs on top of one another, so that h_t^l (i.e. hidden state at time t and layer l is a function of h_t^{l-1} and h_{t-1}^{l})\n\nThe work is inspired by previous results for feed forward nets and CNNs. However, what is unique to RNNs is their ability to model long term dependencies across time. \n\nTo analyze this specific property, the authors propose a concept called \"start-end rank\" that essentially models the richness of the dependency between two disjoint subsets of inputs. Specifically, let S = {1, . . . , T/2} and E === {T/2 + 1, . . . , T}. sep_{S,E}(y) models the dependence between these two sets of time points. Specifically sep_{S,E}(y) = K means there exists g_s^k and g_e^k for k=1...K such that y(x) = \\sum_{k} g_s^k(x_S) g_e^k(x_E).\n\nTherefore sep_{S,E}(y) is the rank of a particular matricization of y (with respect to the partition S,E). If sep_{S,E}=1 then it is rank 1 (and would correspond to independence if y(x) was a probability distribution). A higher rank would correspond to more dependence across time. \n\n(Comment: I believe if I understood the above correctly, it would be easier to explain tensors/matricization first and then introduce separation rank, since I think it much makes it clearer to explain. Right now the authors explain separation rank first and then discuss tensors / matricization).\n\nUsing this concept, the authors prove that deep recurrent networks can express functions that have exponentially higher start/end ranks than shallow RNNs.\n\nI overall like the paper's theoretical results, but I have the following complaints:\n\n(1) I have the same question as the other reviewer. Why is Theorem 1 not a function of L? Do the papers that prove similar theorems about ConvNets able to handle general L? What makes this more challenging? I feel if comparing L=2 vs L=3 is hard, the authors should be more up front about that in the introduction/abstract.\n\n(2) I think it would have been stronger if the authors would have provided some empirical results validating their claims. \n\n",
"The paper proposes to use the start-end rank to measure the long-term dependency in RNNs. It shows that deep RNN is signficantly better than shallow one in this metric. \n\nThe theory part seems to be technical enough and interesting, though I haven't checked all the details. The main concern with the paper is that I am not sure whether the RAC studied by the paper is realistic enough for practice. Certain gating in RNN is very useful but I don't know whether one can train any reasonable RNN with all multiplicative gates. The paper will be much stronger if it has some experiments along this line. ",
"In accordance with main comments raised by reviewers 2 and 4 we have uploaded a version of the paper that has a new subsection, enumerated 4.2. Our main result (theorem 1), rigorously proves a lower bound on the Start-End separation rank of depth L=2 recurrent networks. This proved L=2 lower bound also trivially applies to all networks of depth L>2, and thus constitutes a first of its kind exponential separation in memory capacity between deep recurrent networks and shallow ones. In the added subsection, we present a quantitative conjecture by which a tighter, depth dependent, lower bound holds for recurrent networks of depth L>2. We formally motivate this conjecture by the Tensor Networks construction of deep RACs. We emphasize that the originally submitted version included the Tensor Networks construction of deep RACs (Appendices A1-A3), which yields the added conjecture in a straightforward manner, as described in a newly added appendix section A4. Beyond this, the paper kept its original form. \n\nThis addition, which meets central questions raised by the reviewers, outlines further insight that is achieved by our analysis regarding the dependence of the Start-End separation rank on depth, and poses further investigation of this avenue as an open problem. We believe that the presented novel approach for theoretical analysis of long term memory in recurrent networks, along with the solidly proved main results separating L=2 deep networks from L=1 shallow networks, constitutes an important contribution, which is well-supplemented by the formally motivated conjecture in the added section 4.2.",
"We encourage the reviewer to examine our separate official comment regarding the upload of a paper revision, which addresses the dependence on L of the bounds.",
"We thank the reviewer for the time and feedback. Our response follows. \n\nRACs are brought forth in our paper as a theoretical platform to investigate more common RNNs. The depth related effects studied in this paper depend on the recurrent network's structure rather than on specific activations. Empirical support for the specific RAC activations can be found in [1], which we mention in the paper. There, Multiplicative Integration Networks are shown to outperform common RNNs with additive integration. In section 3.1 they investigate RNNs with only multiplicative gates (in our terms - RACs) and find they preform comparably to vanilla RNNs in [2]. Furthermore, reference [1] shows evidence that under multiplicative integration the effect of squashing nonlinearities is diminished as they mostly operate in their linear regime, as opposed to the additive case where they are heavily influential (Fig. 1 (c),(d)). \n\nThus, there is a clear empirical validation addressing the reviewer's concern, that RACs can be trained to perform relevant sequential tasks. Our paper merely uses RACs as surrogates to common RNNs and does not propose to use them in practice - even though, as shown in empirical studies mentioned above, they can be used in practice.\n\nReferences\n___________________________\n[1] Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. On multiplicative integration with recurrent neural networks. In Advances in Neural Information Processing Systems, 2016.\n[2] David Krueger and Roland Memisevic. Regularizing rnns by stabilizing activations. arXiv:1511.08400, 2015.",
"We thank the reviewer for considering our response and for supporting the paper.",
"We thank the reviewer for his feedback and suggestion, which we will take into consideration. We address the reviewer's reservations below. \n\n1) The lower bound presented in the paper for a deep network is tied to the minimal case of L=2, which is the technical reason for the lower bound in theorem 1 lacking a dependence on L. We do not claim it is unachievable to receive specific bounds on L=3 etc. However, we deem a separation between higher L's subsidiary in terms of the simple and unprecedented message of this paper, which brings forth a formal explanation for superiority of the prevalent architectural choice of stacked recurrent networks. Regarding the line of work that deals with depth separation in feed-forward neural networks, certainly there are important papers for which the formal results only separate between L=1 and higher L, such as the ones mentioned in the reply to reviewer 2. A separation between two networks of higher L's is of interest, however our work is the first to raise and formulate the depth efficiency question in terms of recurrent networks, and thus we consider the clear message attained by the presented separation in Theorem 1 important.\n\n2) As we emphasize in the paper, our analysis provides a theoretical framework for a phenomenon that is empirically well-established in the literature. Moreover, our main result in Theorem 1 specifically addresses the issue of enhanced long-term memory in deep recurrent networks, for which a comprehensive empirical study was presented by [1]. We refer to this study in the second paragraph of the introduction and in last paragraph of the conclusion. The work in [1] constitutes an empirical motivator for our work, which is positioned to provide a first of its kind theoretical perspective for the above findings. We found the methodological experimental indications in [1], strengthened by a variety of other empirical works which employ deep recurrent networks for demanding sequential tasks, sufficient. Thus, we did not include further experimental validations, which in our view were not required given the evidence in the literature cited in the paper. However, we would appreciate suggestions by the reviewer for experiments not present in the mentioned literature that are required for strengthening our presentation. Alternatively, we suggest to emphasize the existence of such supporting experiments in our introduction, so that their role in motivating our theoretical results is even clearer. \n\n\nReferences\n----------------\n[1] Michiel Hermans and Benjamin Schrauwen. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pages 190–198, 2013.",
"We thank the reviewer for the time and feedback. Following is our response. \n\nAddressing the reviewer's main comment, indeed Theorem 1 does not include a separation between two deep recurrent networks of different depths - our analysis refers to the enhanced memory capacity of multiple layered recurrent networks versus single layered recurrent networks. Our treatment is analogous to a much wider and more mature line of work that deals with depth separation in feed-forward neural networks. In the feed-forward domain, various important and widely accepted papers give results which only separate between shallow and deep networks in order to show the advantages of depth in these networks (such as [1,2]). Formal theoretical literature which provides any such similar results on recurrent networks is scarce at best, and such questions have not been answered (or formally asked) to date for recurrent networks. Our focus on separating between shallow and deep networks is the natural starting point for this line of research; Theorem 1, which establishes a separation in the ability to integrate data throughout time between shallow and deep recurrent networks, constitutes a first of its kind theoretical assertion of superiority of deep recurrent networks. \n\nWe wish to emphasize that even once the question is raised - \"can the notion of depth enhanced long term-memory in recurrent networks be formalized?\" and a mathematical infrastructure is set-up in the form of the Start-End separation rank with grid tensors rank bounding it, establishing Theorem 1 is highly non-trivial. As can be seen in the supplementary material, the proof involves a considerable \"legwork\" which integrates tools and results from various fields (measure theory, tensorial analysis, combinatorics, graph theory and quantum physics). Accordingly, given the importance and contribution of the result in Theorem 1, we found the suggested task of separating the memory capacity of two arbitrary deep recurrent networks subsidiary in terms of contribution to the message of this paper. We agree that a finer investigation separating two recurrent networks of arbitrary depth is of relevance - it is in fact a part of a follow-up work, indicated by this paper, which we are presently pursuing (described in the last paragraph of the discussion section).\n\nRegarding the sub-comment made by the reviewer, our theoretical results guarantee that for almost any setting of the recurrent network's weights, theorem 1 holds. We have performed several sanity checks, which agreed with our conclusions. Having proved the theorem, we did not feel the need to include such empirical validations. However, we will gladly do so if it helps clarify or convey any message. We would appreciate a clarification regarding what specific convincing experiments the reviewer had in mind.\n\nReferences\n----------------\n[1] Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems, pages 666–674, 2011.\n[2] Guido F. Montúfar, Razvan Pascanu, KyungHyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014."
] | [
5,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJ3d2Ax0-",
"iclr_2018_HJ3d2Ax0-",
"iclr_2018_HJ3d2Ax0-",
"iclr_2018_HJ3d2Ax0-",
"B1L-7zDgz",
"BJpAdoezz",
"SJl6_ov-G",
"SJl6_ov-G",
"B1L-7zDgz"
] |
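Illustrative aside (not part of the original record): the Start-End separation rank explained in the second review above can be checked numerically. For a function y evaluated on a grid of template inputs, the separation rank with respect to the partition (S, E) appears as the rank of the corresponding matricization of the grid tensor. A minimal, hypothetical numpy sketch (not the paper's RAC construction) is shown below.

```python
# Hypothetical numpy illustration (not the paper's RAC construction): the
# Start-End separation rank of y w.r.t. the partition (S, E) shows up as the
# rank of the matricization of y's grid tensor (rows indexed by x_S, columns by x_E).
import numpy as np

n, K = 5, 2                                    # grid points per input, number of summands
rng = np.random.default_rng(0)

# y(x_1,...,x_4) = sum_k g_s^k(x_1, x_2) * g_e^k(x_3, x_4), i.e. S = {1,2}, E = {3,4}
g_s = rng.standard_normal((K, n, n))           # g_s^k evaluated on the grid
g_e = rng.standard_normal((K, n, n))           # g_e^k evaluated on the grid
grid = np.einsum('kab,kcd->abcd', g_s, g_e)    # grid tensor of y

M = grid.reshape(n * n, n * n)                 # matricization w.r.t. (S, E)
print(np.linalg.matrix_rank(M))                # prints 2 here, i.e. sep_{S,E}(y) <= K
```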
iclr_2018_ryG6xZ-RZ | DLVM: A modern compiler infrastructure for deep learning systems | Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domain- specific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. With our prototypical staged DSL embedded in Swift, we argue that the DLVM system enables a form of modular, safe and performant frameworks for deep learning. | workshop-papers | This is a fascinating paper, and representative of the sort of work which is welcome in our field and in our community. It presents a compiler framework for the development of DSLs (and models) for Deep Learning and related methods. Overall, reviewers were supportive of and excited by this line of work, but questioned its suitability for the main conference. In particular, the lack of experimental demonstrations of the system, and the disconnect between domain-specific technical knowledge required to appreciate this work and that of the average ICLR attendee were some of the main causes for concern. It is clear to me that this paper is not suitable for the main conference, not due to its quality, but due to its subject matter. I would be happy, however, to tentatively recommend it for acceptance to the workshop as this topic deserves discussion at the conference, and this would provide the basis for a useful bridge between the compilers community and the deep learning community. | train | [
"SyW2zW3VM",
"Syj85_q4M",
"H1YMhD9Vf",
"BJYvCt8Ef",
"ByG1QoMxz",
"rJs8O8Bgz",
"SJ_WummXM"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the author's response (I'm a little confused why everything is promised in a future update, e.g. clarifications and improvements in the paper, is there no opportunity to update the pdf on openreview? This would make it easier to appreciate these changes).\n\nHowever, I still feel the lack of comparison with other frameworks makes this work feel unfinished, or a position paper that will be of limited interested to an ICLR audience. The author's argue that even without empirical evaluation it will still be of interest, but I think it will be of more specialized interest to a small subset who are writing frameworks. As a researcher, I don't come away with any feeling for 'in this situation you should really consider using this tool.'\n\nFor this reason I'm leaving my evaluation unchanged.",
"If the paper is accepted, we will add a description based on the above blurb to the related work section, describing how our work differs from SysML and TACO.",
"The title of my comment says it all. Given that most ICLR audience do not have a background to make nuanced distinctions among the various deep learning engines available (myself included), do the authors feel it would help to include the blurb about SystemML and TACO into the draft/paper?",
"First, thank you to the reviewers for their time and effort in reviewing this submission. We very much appreciate their attention and their efforts.\n\nAnonReviewer5 wrote: \"This paper is not well-adapted for an ICLR audience, many of which are not experts in compilers or LLVM. For example, the Figure 3, table 1 would be benefit from being shorter with more exposition on what the reader should understand and take away from them.\"\n\nWe agree that most ICLR attendees are not experts in compilers or LLVM. It is for that very reason that we believe this paper would be a valuable addition to ICLR. Many in the ICLR audience make heavy use of deep learning toolkits, despite any shortcomings of those toolkits. We argue that our paper is well situated as a position paper designed to make ICLR attendees more aware of the inherent design shortcomings of existing deep learning toolkits, and present a sound alternative. If accepted, we would abbreviate Figure 3 and Table 1, and provide additional textual exposition on what the reader should take away.\n\n\nAnonReviewer3 wrote: \"Related work is not adequately referenced. Here are two (others should be easy to find): Apache SystemML (https://systemml.apache.org/), TACO (http://tensor-compiler.org/)\"\n\nApache SystemML is a high-level language and framework for writing and executing machine learning problems, especially targeting Apache Spark. TACO is much more similar to Halide than it is to our work. TACO is a C++ library for compiling and optimizing kernels. Neither TACO nor SystemML are closely related to our work. Our work argues that deep learning (and in particular the creation of neural network topologies) is itself a compilers problem, and should be addressed using mature compiler techniques. SystemML does not consider this issue at all. TACO does use compiler optimization, but only at a very low level to generate individual kernels. There is existing work that is related to ours, namely XLA, TVM, and NNVM. In Section 2, we examine those systems and describe in detail how our work differs from those.\n\n\nAnonReviewer2 wrote: \"I find the paper very interesting and the findings can have a significant impact on how we develop deep learning systems in the future. The paper addresses an important problem, is very well written, and is easy to follow. ... The main drawback of the paper is the lack of evaluation. Although the framework is well described, its application and use are only demonstrated with a very small code example.\"\n\nWe concur with this reviewer that our approach is likely to have a substantial impact on the development of deep learning systems. Given this likely impact, we believe that there is significant value in presenting this paper to the ICLR community, despite the fact that we were not yet able to present quantifiable evaluation results.\n",
"Modern-day deep learning engines (e.g., tensorflow, caffe) perform code generation (to generated the backward pass) and a host of other optimizations to run today's deep learning workloads. Unfortunately, the manner in which they go about doing this is ad hoc and does not adopt best practices developed in the compilers and programming languages communities. The obvious consequence is missed opportunities for optimizing deep learning workloads. This paper proposes to fix this by re-designing from ground up the deep learning engine placing particular focus on code generation, compilation (e.g., type checking), optimization (e.g., fusion, matrix chain reordering, common subexpression elimination) etc. Unfortunately, the paper falls short in two significant respects: It does not adequately cite related work and it does not present any experiments to quantify the benefits they claim will be achieved by their new compiler.\n\nPros:\n- The paper proposes a very relevant and timely proposal to design a modern day deep learning compiler framework.\n- Their design includes a number of optimizations that are missing from currently available deep learning engines which can lead to significant benefits.\n\nCons:\n- Related work is not adequately referenced. Here are two (others should be easy to find): Apache SystemML (https://systemml.apache.org/), TACO (http://tensor-compiler.org/)\n- Experiments section is conspicuous by its absence. They provide no micro benchmarks or end-to-end deep learning use cases to quantify the benefits of their compiler vs. some of the currently available ones.",
"Deep learning is a technique that has attracted a lot of attention. Typically, when using a deep learning framework we describe the network using some kind of computation graph, e.g., as in TensorFlow. A drawback is that the execution performance can be limited, e.g., due to run-time interpretation of the computation graph. \n\nThis paper takes a different approach and presents a compiler framework that allows definition of domain-specific languages (DSLs) for deep learning system, defines a number of compilation stages that can take advantage of standard compiler optimizations as well as specialized optimizations for neural networks using an intermediate representation, and also a back-end. Thus a computation graph is compiled directly to binary code, which increases the performance. For example, the compiler infrastructure enables optimization over multiple kernels using kernel fusion. \n\nI find the paper very interesting and the findings can have a significant impact on how we develop deep learning systems in the future. The paper addresses an important problem, is very well written, and is easy to follow. The different optimization stages are well describe and also motive why they improve performance over existing techniques. The intention is to provide the framework as open source in the future. \n\nThe main drawback of the paper is the lack of evaluation. Although the framework is well described, its application and use are only demonstrated with a very small code example. No comparison with existing frameworks is done, and no evaluation of the actual performance is done. \n",
"The success of Deep Learning is, in no small part, due the development of libraries and frameworks which have made building novel models much easier, faster and less error prone and also make taking advantage of modern hardware (such as GPUs) more accessible. This is still a vital area of work, as new types of models and hardware are developed.\n\nThis work argues that prior solutions do not take advantage of the fact that a tensor compiler is, essentially, just a compiler. They introduce DLVM (and NNKit) which comprises LLVM based compiler infrastructure and a DSL allowing the use of Swift to describe a typed tensor graph. Unusually, compared to most frameworks, gradients are calculated using source code transformation, which is argued to allow for easier optimization.\n\nThis paper is not well-adapted for an ICLR audience, many of which are not experts in compilers or LLVM. For example, the Figure 3, table 1 would be benefit from being shorter with more exposition on what the reader should understand and take away from them.\n\nThe primary weakness of this work is the lack of careful comparison with existing framework. The authors mention several philosophical arguments in favor of their approach, but is there a concrete example of an model which is cumbersome to write in an existing framework but easy here? (e.g. recent libraries pytorch, TF eager can express conditional logic much more simply than previous approaches, its easy to communicate why you might use them). Because of this work seems likely to be of limited interest to the ICLR audience, most of which are potentially interested users rather than compiler experts. There is also no benchmarking, which is at odds with the claims the compiler approaches allows easier optimization.\n\nOne aspect that seemed under-addressed and which often a crucial aspect of a good framework, is how general purpose code e.g. for loading data or logging interacts with the accelerated tensor code."
] | [
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"SJ_WummXM",
"H1YMhD9Vf",
"BJYvCt8Ef",
"iclr_2018_ryG6xZ-RZ",
"iclr_2018_ryG6xZ-RZ",
"iclr_2018_ryG6xZ-RZ",
"iclr_2018_ryG6xZ-RZ"
] |
iclr_2018_HkGJUXb0- | Learning Efficient Tensor Representations with Ring Structure Networks | \emph{Tensor train (TT) decomposition} is a powerful representation for high-order tensors, which has been successfully applied to various machine learning tasks in recent years. In this paper, we propose a more generalized tensor decomposition with a ring structure network by employing circular multilinear products over a sequence of lower-order core tensors, which is termed the TR representation. Several learning algorithms, including blockwise ALS with adaptive tensor ranks and SGD with high scalability, are presented. Furthermore, the mathematical properties are investigated, which enables us to perform basic algebra operations in a computationally efficient way by using TR representations. Experimental results on synthetic signals and real-world datasets demonstrate the effectiveness of the TR model and the learning algorithms. In particular, we show that the structure information and high-order correlations within a 2D image can be captured efficiently by employing tensorization and TR representation. | workshop-papers | This paper proposes a new way of learning tensor representations with ring-structured decompositions rather than through Tensor Train methods. The paper investigates the mathematical properties of this decomposition and provides synthetic experiments. There was some debate, with the reviewers, about the novelty and impact of this method, where the overall feeling was that this work was too preliminary to be accepted. The idea, from my understanding, is interesting and would benefit from discussion at the workshop track, but the authors are encouraged to make a stronger case for the novelty of this method in any further work and, in particular, to consider showing empirical improvement on "real" data where TT methods are currently applied. | train | [
"BkQlqDiBG",
"BJSfkUNzf",
"SkhraDqVM",
"ry_11ijeM",
"SJvcooagM",
"r1hXHTCzG",
"BJSzKFRGz",
"BJE1YYAGz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"We appreciate the reviewer’s positive feedbacks on our revised paper. \n\nFor experiments on one image, the compression rate achieved by our method is almost 2 times over TT method. For tensorizing neural networks, although the classification performance of our method is only slightly better than TT, given the same testing error 2.2%, the compression rate of our method is 1300, while TT is 290. As shown in [Novikov, NIPS 2015], the main advantage of TT based tensorizing neural networks is high compressive ability rather than classification performance. Hence, the improvement over TT is significant. \n\nAlthough we employed well-known ALS and SGD techniques, our paper provides new algorithms for tensor ring decomposition. This paper also provides firstly the detailed theoretical analysis of rank of tensor ring and the mathematic computations of tensor ring format, as well as relationship with other popular tensor models. These results are very helpful and important for future applications of tensor ring representation to machine learning problems. ",
"The paper addresses the problem of tensor decomposition which is relevant and interesting. The paper proposes Tensor Ring (TR) decomposition which improves over and bases on the Tensor Train (TT) decomposition method. TT decomposes a tensor in to a sequences of latent tensors where the first and last tensors are a 2D matrices. \n\nThe proposed TR method generalizes TT in that the first and last tensors are also 3rd-order tensors instead of 2nd-order. I think such generalization is interesting but the innovation seems to be very limited. \n\nThe paper develops three different kinds of solvers for TR decomposition, i.e., SVD, ALS and SGD. All of these are well known methods. \n\nFinally, the paper provides experimental results on synthetic data (3 oscillated functions) and image data (few sampled images). I think the paper could be greatly improved by providing more experiments and ablations to validate the benefits of the proposed methods.\n\nPlease refer to below for more comments and questions.\n\n-- The rating has been updated.\n\nPros:\n1. The topic is interesting.\n2. The generalization over TT makes sense.\n\nCons:\n1. The writing of the paper could be improved and more clear: the conclusions on inner product and F-norm can be integrated into \"Theorem 5\". And those \"theorems\" in section 4 are just some properties from previous definitions; they are not theorems. \n2. The property of TR decomposition is that the tensors can be shifted (circular invariance). This is an interesting property and it seems to be the major strength of TR over TT. I think the paper could be significantly improved by providing more applications of this property in both theory and experiments.\n3. As the number of latent tensors increase, the ALS method becomes much worse approximation of the original optimization. Any insights or results on the optimization performance vs. the number of latent tensors?\n4. Also, the paper mentions Eq. 5 (ALS) is optimized by solving d subproblems alternatively. I think this only contains a single round of optimization. Should ALS be applied repeated (each round solves d problems) until convergence?\n5. What is the memory consumption for different solvers?\n6. SGD also needs to update at least d times for all d latent tensors. Why is the complexity O(r^3) independent of the parameter d?\n7. The ALS is so slow (if looking at the results in section 5.1), which becomes not practical. The experimental part could be improved by providing more results and description about a guidance on how to choose from different solvers.\n8. What does \"iteration\" mean in experimental results such as table 2? Different algorithms have different cost for \"each iteration\" so comparing that seems not fair. The results could make more sense by providing total time consumptions and time cost per iteration. also applies to table 4.\n9. Why is the \\epsion in table 3 not consistent? Why not choose \\epsion = 9e-4 and \\epsilon=2e-15 for tensorization?\n10. Also, table 3 could be greatly improved by providing more ablations such as results for (n=16, d=8), (n=4, d=4), etc. That could help readers to better understand the effect of TR.\n11. Section 5.3 could be improved by providing a curve (compression vs. error) instead of just providing a table of sampled operating points.\n12. The paper mentions the application of image representation but only experiment on 32x32 images. How does the proposed method handle large images? Otherwise, it does not seem to be a practical application.\n13. 
Figure 5: Are the RSE measures computed over the whole CIFAR-10 dataset or the displayed images?\n\nMinor:\n- Typo: Page 4 Line 7 \"Note that this algorithm use the similar strategy\": use -> uses",
"Thanks for your thorough rebuttal. I think the quality of the current version is greatly improved. However, considering the technical contribution and the very limited improvement over TT by the proposed method (shown in the new results). I still have a negative feeling about this paper. The rating is upgraded but still in the negative direction.",
"This paper proposes a tensor train decomposition with a ring structure for function approximation and data compression. Most of the techniques used are well-known in the tensor community (outside of machine learning). The main contribution of the paper is the introduce such techniques to the ML community and presents experimental results for support.\n\nThe paper is rather preliminary in its examination. For example, it is claimed that the proposed decomposition provides \"enhanced representation ability\", but this is not justified rigorously either via more comprehensive experimentation or via a theoretical justification. Furthermore, the paper lacks in novelty aspect, as it is uses mostly well-known techniques. ",
"This paper presents a tensor decomposition method called tensor ring (TR) decomposition. The proposed decomposition approximates each tensor element via a trace operation over the sequential multilinear products of lower order core tensors. This is in contrast with another popular approach based on tensor train (TT) decomposition which requires several constraints on the core tensors (such as the rank of the first and last core tensor to be 1).\n\nTo learn TR representations, the paper presents a non-iterative TR-SVD algorithm that is similar to TT-SVD algorithm. To find the optimal lower TR-ranks, a block-wise ALS algorithms is presented, and an SGD algorithm is also presented to make the model scalable.\n\nThe proposed method is compared against the TT method on some synthetic high order tensors and on an image completion task, and shown to yield better results.\n\nThis is an interesting work. TT decompositions have gained popularity in the tensor factorization literature recently and the paper tries to address some of their key limitations. This seems to be a good direction. The experimental results are somewhat limited but the overall framework looks appealing.",
"We would like to thank the reviewer for the constructive and insightful comments. \n\nThe proposed TR generalizes TT not only in that the first and last tensors are 3rd-order instead of 2nd-order, but also a tensor contraction (link) between the first tensor and last tensor is added. This additional operation makes TR having essentially different computation principle and mathematical properties. As a result, TR has several advantages over TT such as enhanced representation ability, smaller ranks than TT, circular permutation invariance. \n\nSince there is a loop connection in TR, the developments of TR-SVD and TR-BALS are not trivial as compared to TT. In particular, BALS enables us to automatically determine the rank between the first and last cores, which is not possible in the TT model. Although ALS and SGD are well-known methods, our novelty is how we can apply these standard techniques to solve TR decomposition. \n\nThe most notable experiment is to represent an image by using our proposed tensorization and tensor decomposition (sec. 5.2). By converting an image to 4th, 8th, and 16th order tensors, TT/TR can represent it by using much less parameters than gold standard SVD on the original matrix. In addition, TR needs 0.56 times parameters of TT (see Table 4 in revision). This is the first time to show significant advantages of artificially converting a 2D matrix (real image) to a high-order tensor. \n\nAs suggested by AnonReviewer5, we added a new experiment on two datasets in our revision (Sec. 5.4). TTs/TRs are applied to approximate a dense weight matrix of fully connected layers in neural networks. The results show that TR can always achieve better training and testing error than TT by using different ranks. TR-layer can achieve much better compression rate than TT-layer under the same level of test error. In particular, the compression factor of parameters by using TR is up to 1300 times. \n\nResponse to Cons:\n1.\tWe improved the clarity of the paper as AnonReviewer5 suggested. \n2.\tIn the revised manuscript, we added an experiment to demonstrate the benefit of this property (see Sec. 5.1 and Table 2). \n3.\t In Table 3, as the number of latent tensors increases, the original tensor data has totally different size, leading to a slow convergence rate of ALS. In addition, ALS is prone to get stuck in a local minimum. Thus, for large-scale tensors, TR-SGD would be more promising. Here, we mainly show TR-SGD can achieve similar approximation to TR-ALS by using partially observed tensor data. \n4.\tALS should be applied repeatedly till convergence. We mentioned this in our revision.\n5.\tWe provide memory costs for different solvers in our revision (Sec. 3.3). \n6.\tThe complexity of TR-SGD should be O(d^2r^3) for each data point. \n7.\tIn Sec. 5.1, TT-ALS is so slow due to many iterations for convergence, while TT-SVD is a non-iterative method. We provided discussions and descriptions about how to choose an appropriate solver in our revision (Sec. 3.3). \n8.\t“Iteration” in Tables 3 and 5 should be changed to “Epoch”, which indicates how many times the whole data tensor is used for optimization. The main point is that TR-SGD can achieve similar error by using 1% of tensor elements. For each epoch, the cost of TR-ALS is O(Ndr^4+dr^6), while the cost of TR-SGD is O(Nd^2r^3). \n9.\tIn Table 3, \\epsilon is the real achieved error, we choose \\epsilon = 1e-1, 1e-2, 1e-3, 1e-15 for all methods, but it is impossible to obtain exactly the same error due to discrete value of ranks. 
\n10.\t In Table 4, all results are obtained by using the same image data (256x256), and thus (n=16, d=8) and (n=4, d=4) are impossible, because 16^8 and 4^4 are not equal to 256^2.\n11.\t We added more experiments shown in Figure 5 in our revision as suggested by AnonReviewer5.\n12.\t The size of each sample is not a crucial factor, because we consider the whole dataset as one tensor. We also performed experiments on a larger image (256 x 256) in Sec. 5.2. The tensor in Sec. 5.3 is of size 1000 x 32 x 32 (1million entries). Although each image is small, a tensor of the whole dataset is large. In Appendix, the COIL-100 dataset is a tensor of size 32 x 32 x 3 x 7200 (22 million entries) and the KTH dataset is of size 20 x 20 x 32 x 600 (7 million entries). These tensors are large enough for practical applications. For example, we may use TR to represent a weight matrix of size 640,000 in neural networks (newly added Sec. 5.4). \n13.\tIn Figure 6, RSE is computed over the whole CIFAR-10 dataset rather than the displayed images. \n\nThank you for pointing out the typo in Page 4 Line 7.\n",
"We would like to thank the reviewer for the fruitful comments. \n\nTensor ring (TR) decomposition is a newly proposed tensor model in the tensor community. Although TR at a glance seems to simply have one more connection to a tensor train (TT) in terms of geometric\nstructure, its computation principle and mathematical properties are essentially different. As compared to all well-known tensor models, e.g., CP, Tucker, TT and HT, the tensor ring model is the only one\nwhich has a loop connection of latent components.\n\nThe TR model has several significant advantages over TTs. i) The TR-ranks are much smaller than TT-ranks (ref. Theorem 2), given the same approximation error. ii) For TTs, any permutation of tensor dimensions will yield inconsistent results, and thus the performance of TT models is sensitive to the order of dimensions, while TR models have circular permutation invariance. iii) Due to $r_1=r_d =1$ in TTs, the rank of middle core (r_k) usually need to be very large, while the ranks of TR cores in principle can be equally distributed.\n\nFor algorithms, since there is a loop connection in TRs, the developments of TR-SVD and BALS are not trivial as compared to TT-SVD. In particular, BALS enables us to automatically determine the rank between the first and last cores, which is not possible in the TT model. We would like to emphasize that our newly developed SGD algorithm is scalable and efficient by randomly sampling one tensor entry per update, while such an algorithm is not studied for the TT model. Note that TR-SGD can achieve similar results to ALS algorithms by using only 1% tensor entries with each entry used only once (see Table 3 in our revision).\n\nWe proved that the convenient computation principles of the TT-format are mostly retained for the TR-format with slightly different operations on cores (see Sec. 4), which adds further novelty.\n\nIn experiments, the most impressive results are the compactness of the TR expression. TT can approximate one 2D image (matrix) with $\\epsilon= 0.1$ by using 0.53 times parameters of SVD, while TR only needs 0.39 times parameters (see Table 4 in revision). Even when $\\epsilon=1e-15$, TR still needs 0.56 times parameters of SVD while TT needs the equivalent number to SVD. These results are achieved by our proposed tensorization strategy, which is the first time to be studied in the tensor community.\n\nWe firstly studied that the physical meaning of TR-cores corresponds to different scales of images by using our tensorization method, which is shown by adding noise to cores (see Fig. 1).\n\nFor \"enhanced representation ability\", we proved that TT can be considered as a special case of TR when we set $r_1=r_d=1$ (see A.3 in Appendix). Thus, TR is more generalized and flexible than TT. Secondly, TR-ranks are much smaller than TT-ranks (from Theorem 2). Thirdly, TR is proved to be a sum of $r_1$ TT formats with common cores $G_k, 2<=k<=d-1$ (see A.3 in Appendix). \n\nThe experiment results (Table 4 in revision) showed that TR needs only 0.56 times parameters of TT when $\\epsilon=1e-14$. TT always needs more parameters than TR for all settings. For the CIFAR dataset, Table 5 shows that TT needs 1.5 times parameters of TR. The results on the Coil-100 and KTH video datasets also showed similar phenomena (see Tables 6 and 7 in Appendix).\n \nIn our revision, we added many additional experiments. In Sec. 5.4, we applied TR representation to approximate the dense weight matrix of fully-connected layer in neural networks. 
By deriving the learning algorithm over each small core, all computations can be performed by using core tensors instead of full tensor, yielding improved computation efficiency. In addition, we compared the performance of TT-layer and TR-layer in Fig. 7. The tensorizing neural networks are tested on MNIST and SVHN datasets. In Sec. 5.3, we added extensive experimental comparisons (compression rate vs. approximation error) as shown in Fig. 5. In Sec. 5.1, we added more experiments for investigating the benefit of circular shift invariance of TR, as shown in Table 2. \n",
"Thank you very much for your positive evaluation.\n\nIn our revision, we added many additional experiments. In Sec. 5.4, we applied TR representation to approximate the dense weight matrix of fully-connected layer in neural networks. By deriving the learning algorithm over each small core, all computations can be performed by using core tensors instead of full tensor, yielding improved computation efficiency. In addition, we compared the performance of TT-layer and TR-layer in Fig. 7. The tensorizing neural networks are tested on MNIST and SVHN datasets. In Sec. 5.3, we added extensive experimental comparisons (compression rate vs. approximation error) as shown in Fig. 5. In Sec. 5.1, we added more experiments for investigating the benefit of circular shift invariance of TR, as shown in Table 2. \n"
] | [
-1,
5,
-1,
5,
6,
-1,
-1,
-1
] | [
-1,
4,
-1,
4,
3,
-1,
-1,
-1
] | [
"SkhraDqVM",
"iclr_2018_HkGJUXb0-",
"r1hXHTCzG",
"iclr_2018_HkGJUXb0-",
"iclr_2018_HkGJUXb0-",
"BJSfkUNzf",
"ry_11ijeM",
"SJvcooagM"
] |
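Illustrative aside (not part of the original record): the TR format discussed in the abstract and reviews above evaluates each tensor element as a trace over a circular product of core slices. The following hypothetical numpy sketch reconstructs a full tensor from given TR cores; it is only the evaluation of the representation, not any of the paper's learning algorithms.

```python
# Hypothetical numpy sketch: evaluate a tensor ring (TR) representation.
# T(i1,...,id) = Trace( G1[:, i1, :] @ G2[:, i2, :] @ ... @ Gd[:, id, :] ),
# where core Gk has shape (r_k, n_k, r_{k+1}) and r_{d+1} = r_1 (the "ring").
import numpy as np
from itertools import product

def tr_to_full(cores):
    """Contract a list of 3rd-order TR cores into the full tensor."""
    dims = [G.shape[1] for G in cores]
    full = np.empty(dims)
    for idx in product(*[range(n) for n in dims]):
        M = np.eye(cores[0].shape[0])          # r_1 x r_1 identity
        for k, G in enumerate(cores):
            M = M @ G[:, idx[k], :]            # sequential multilinear product
        full[idx] = np.trace(M)                # circular link between last and first core
    return full

# Example: a 4x5x6 tensor with TR-ranks (2, 3, 2)
rng = np.random.default_rng(0)
cores = [rng.standard_normal((2, 4, 3)),
         rng.standard_normal((3, 5, 2)),
         rng.standard_normal((2, 6, 2))]
print(tr_to_full(cores).shape)                 # (4, 5, 6)
```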
iclr_2018_BJypUGZ0Z | Accelerating Neural Architecture Search using Performance Prediction | Methods for neural network hyperparameter optimization and meta-modeling are computationally expensive due to the need to train a large number of model configurations. In this paper, we show that standard frequentist regression models can predict the final performance of partially trained model configurations using features based on network architectures, hyperparameters, and time series validation performance data. We empirically show that our performance prediction models are much more effective than prominent Bayesian counterparts, are simpler to implement, and are faster to train. Our models can predict final performance in both visual classification and language modeling domains, are effective for predicting performance of drastically varying model architectures, and can even generalize between model classes. Using these prediction models, we also propose an early stopping method for hyperparameter optimization and meta-modeling, which obtains a speedup of a factor up to 6x in both hyperparameter optimization and meta-modeling. Finally, we empirically show that our early stopping method can be seamlessly incorporated into both reinforcement learning-based architecture selection algorithms and bandit based search methods. Through extensive experimentation, we empirically show our performance prediction models and early stopping algorithm are state-of-the-art in terms of prediction accuracy and speedup achieved while still identifying the optimal model configurations. | workshop-papers | The paper proposes to use simple regression models for predicting the accuracy of a neural network based on its initial training curve, architecture, and hyper-parameters; this can be used for speeding up architecture search. While this is an interesting direction and the presented experiments look quite encouraging, the paper would benefit from more evaluation, as suggested by reviewers, especially within state-of-the-art architecture search frameworks and/or large datasets. | test | [
"SJwuTOPNz",
"ryMvgqdgM",
"B12MLEclM",
"SJzxff6xM",
"HJeLUNhmz",
"rJEPHE3mG",
"BkaVm42Xz",
"Sy4RWE37f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thanks for the careful response; it helped clarify several of my questions and also fixed the previously inflated speedup result of 7x at the end of the paper. Extending Figure 6 was also helpful.\n\nIt is nice to see the new results for BLR. I am confused, though: which features does this use? Why are the results of BLR and OLS are not identical for Table 1 and Figure 2? Since this only depends on means and not uncertainty predictions, they ought to be the same if they used the same features. I am also surprised that Gaussian processes do not work, but that BLR does. BLR is equivalent to a Gaussian process with a particular kernel, so the issue must be the choice of kernel. With very different types of features, a RBF kernel may need an automatic relevant determination (ARD) kernel. \n\nI largely agree with the authors' take on the runtime for prediction. I'd like to mention that even 8.8 GPU days overhead would not matter for the Zoph & Le experiments, which required 800 GPUs for two weeks, making an overhead of 8.8 GPU days less than 0.1% percent overhead. But what the authors write in the updated version is perfectly fine, and it is of course very nice to have extremely cheap predictions. This might also enable using these predictions in an inner loop of a more complex method. So, this is clearly a plus of the method.\n\nThe authors reran all experiments with an automatic choice between a linear and a kernel SVR, and I'm surprised that the results got a lot worse for ResNets/Cuda-ConvNet; compare the numbers for TS+AP+HP in Table 2: R2 values of now 86.05 vs. previously 91.8 or 95.69. Actually, the columns for ResNets & Cuda-ConvNet are now identical, so there must be (at least) a copy-paste error in this table. That, combined with the worsened results, does not leave me very confident about the updated experiments (which, of course, had to be done under time pressure during the rebuttal, increasing the risk of errors). \n\nOverall, I have some doubts remaining concerning the inconsistency between OLS and BLR, and the bugs in Table 2. Nevertheless, I would hope that the authors can fix these for the camera ready version; therefore, I still rate this paper just above the acceptance threshold, since I like the method's simplicity and the strong results. If accepted, I encourage the authors to fix Table 2, OLS vs. BLR, and give GPs another shot with an ARD kernel. (If rejected, I'd propose the same for an eventual resubmission.)",
"\nThis paper explores the use of simple models for predicting the final\nvalidation performance of a neural network, from intermediate values\nduring training. It uses support vector regression to show that a\nrelatively small number of samples of hyperparameters, architectures,\nand validation time series can lead to reasonable predictions of\neventual performance. The paper performs a modest evaluation of such\nsimple models and shows surprisingly good r-squared values. The\nresulting simple prediction framework is then used for early stopping,\nin particular within the Hyperband hyperparameter search algorithm.\n\nThere's a lot that I like about this paper, in particular the ablation\nstudy to examine which pieces matter, and the evaluation of a couple\nof simple models. Ultimately, however, I felt like the paper was\nsomewhat unsatisfying as it left open a large number of obvious\nquestions and comparisons:\n\n- The use of the time series is the main novelty. In the AP, HP and\n AP+HP cases of Table 2, it is essentially the same predictive setup\n of SMAC, BO, and other approaches that are trying to model the map\n from these choices to out-of-sample performance. Doesn't the good\n performance without TS on, e.g., ResNets in Table 2 imply that the\n Deep ResNets subfigure in Figure 3 should start out at 80+?\n\n- In light of the time series aspect being the main contribution, a\n really obvious question is: what does it learn about the time\n series? The linear models do very well, which means it should be\n possible to look at the magnitude of the weights. Are there any\n surprising long-range dependencies? The fact that LastSeenValue\n doesn't do as well as a linear model on TS alone would seem to\n indicate that there are higher order autoregressive coefficients.\n That's surprising and the kind of thing that a scientific\n investigation here should try to uncover; it's a shame to just put\n up a table of numbers and not offer any analysis of why this works.\n\n- In Table 1 the linear SVM uniformly outperforms the RBF SVM, so why\n use the RBF version?\n\n- Given that the paper seeks to use uncertainty in estimates and the\n entire regression setup could be trivially made Bayesian with no\n significant computational cost over a kernelized SVM or OLS,\n especially if you're doing LOOCV to estimate uncertainty in the\n frequentist models. Why not include Bayesian linear regression and\n Gaussian process regression as baselines?\n\n- Since the model gets information from the AP and HP before doing any\n iterations, why not go on and use that to help select candidates?\n\n- I don't understand how speedup is being computed in Figure 4.\n\n- I'd like a more explicit accounting of whether 0.00006 seconds vs\n 0.024 seconds is something we should care about in this kind of\n work, when the steps can take minutes or hours on a GPU.\n\n- How useful is r-squared as a measure of performance in this setting?\n My experience has been that most of the search space has very poor\n performance and the objective is to find the small regions that work\n well.\n\nMinor things:\n\n- y' (prime) gets overloaded in Section 3.1 as a derivative and then\n in Section 4 as a partial learning curve.\n\n- \"... is more computationally and ...\"\n\n- \"... our results for performing final ...\"\n",
"This paper shows a simple method for predicting the performance that neural networks will achieve with a given architecture, hyperparameters, and based on an initial part of the learning curve.\nThe method assumes that it is possible to first execute 100 evaluations up to the total number of epochs.\nFrom these 100 evaluations (with different hyperparameters / architectures), the final performance y_T is collected. Then, based on an arbitrary prefix of epochs y_{1:t}, a model can be learned to predict y_T.\nThere are T different models, one for each prefix y_{1:t} of length t. The type of model used is counterintuitive for me; why use a SVR model? Especially since uncertainty estimates are required, a Gaussian process would be the obvious choice. \n\nThe predictions in Section 3 appear to be very good, and it is nice to see the ablation study.\n\nSection 4 fails to mention that its use of performance prediction for early stopping follows exactly that of Domhan et al (2015) and that this is not a contribution of this paper; this feels a bit disingenious and should be fixed.\nThe section should also emphasize that the models discussed in this paper are only applicable for early stopping in cases where the function evaluation budget N is much larger than 100.\nThe emphasis on the computational demand of 1-3 minutes for LCE seems like a red herring: MetaQNN trained 2700 networks in 100 GPU days, i.e., about 1 network per GPU hour. It trained 20 epochs for the studied case of CIFAR, so 1-3 minutes per epoch on the CPU can be implemented with zero overhead while the network is training on the GPU. Therefore, the following sentence seems sensational without substance: \"Therefore, on a full meta-modeling experiment involving thousands of neural network configurations, our method could be faster by several orders of magnitude as compared to LCE based on current implementations.\"\n\nThe experiment on fast Hyperband is very nice at first glance, but the longer I think about it the more questions I have. During the rebuttal I would ask the authors to extend f-Hyperband all the way to the right in Figure 6 (left) and particularly in Figure 6 (right). Especially in Figure 6 (right), the original Hyperband algorithm ends up higher than f-Hyperband. The question this leaves open is whether f-Hyperband would reach the same performance when continued or not. \nI would also request the paper not to casually mention the 7x speedup that can be found in the appendix, without quantifying this. This is only possible for a large number of 40 Hyperband iterations, and in the interesting cases of the first few iterations speedups are very small. Also, do the simulated speedup results in the appendix account for potentially stopping a new best configuration, or do they simply count how much computational time is saved, without looking at performance? The latter would of course be extremely misleading and should be fixed. I am looking forward to a clarification in the rebuttal period. \nFor relating properly to the literatue, the experiment for speeding up Hyperband should also mention previous methods for speeding up Hyperband by a model (I only know one by the authors' reference Klein et al (2017)).\n\nOverall, this paper appears very interesting. The proposed technique has some limitations, but in some settings it seems very useful. I am looking forward to the reply to my questions above; my final score will depend on these.\n\nTypos / Details: \n- The range of the coefficient of determination is from 0 to 1. 
Table 1 probably reports 100 * R^2? Please fix the description.\n- I did not see Table 1 referenced in the text.\n- Page 3: \"more computationally and\" -> \"more computationally efficient and\"\n- Page 3: \"for performing final\" -> \"for predicting final\"\n\n\nPoints in favor of the paper:\n- Simple method\n- Good prediction results\n- Useful possible applications identified\n\nPoints against the paper:\n- Methodological advances are limited / unmotivated choice of model\n- Limited applicability to settings where >> 100 configurations can be run fully\n- Possibly inflated results reported for Hyperband experiment",
"This paper proposes the use of an ensemble of regression SVM models to predict the performance curve of deep neural networks. This can be used to determine which model should be trained (further). The authors compare their method, named Sequential Regression Models (SRM) in the paper, to previously proposed methods such as BNN, LCE and LastSeenValue and claim that their method has higher accuracy and less time complexity than the others. They also use SRM in combination with a neural network meta-modeling method and a hyperparameter optimization one and show that it can decrease the running time in these approaches to find the optimized parameters.\n\nPros: The paper is proposing a simple yet effective method to predict accuracy. Using SVM for regression in order to do accuracy curve prediction was for me an obvious approach, I was surprised to see that no one has attempted this before. Using features sur as time-series (TS), Architecture Parameters (AP) and Hyperparameters (HP) is appropriate, and the study of the effect of these features on the performance has some value. Joining SRM with MetaQNN is interesting as the method is a computation hog that can benefit from such refinement. The overall structure of the paper is appropriate. The literature review seems to cover and categorize well the field.\n\nCons: I found the paper difficult to read. In particular, the SRM method, which is the core of the paper, is not described properly, I am not able to make sense of the description provided in Sec. 3.1. The paper is not talking about the weaknesses of the method at all. The practicability of the method can be controversial, the number of attempts require to build the (meta-)training set of runs can be huge and lead to something that would be much more costful that letting the runs going on for more iterations. \n\nQuestions:\n1. The approach of sequential regression SVM is not explained properly. Nothing was given about the combination weights of the method. How is the ensemble of (1-T) training models trained to predict the f(T)?\n2. SRM needs to gather training samples which are 100 accuracy curves for T-1 epochs. This is the big challenge of SRM because training different variations of a deep neural networks to T-1 epochs can be a very time consuming process. Therefore, SRM has huge preparing training dataset time complexity that is not mentioned in the paper. The other methods use only the first epochs of considered deep neural network to guess about its curve shape for epoch T. These methods are time consuming in prediction time. The authors compare only the prediction time of SRM with them which is really fast. By the way still, SRM is interesting method if it can be trained once and then be used for different datasets without retraining. Authors should show these results for SRM. \n3. Discussing about the robustness of SRM for different depth is interesting and I suggest to prepare more results to show the robustness of SRM to violation of different hyperparameters. \n4. There is no report of results on huge datasets like big Imagenet which takes a lot of time for deep training and we need automatic advance stopping algorithms to tune the hyper parameters of our model on it.\n5. In Table 2 and Figure 3 the results are reported with percentage of using the learning curve. To be more informative they should be reported by number of epochs, in addition or not to percentage.\n6. 
In section 4, the authors talk about estimating the model uncertainty at the stopping point and propose a way to estimate it. However, we cannot find any experimental results related to the effectiveness of the proposed method and its underlying assumptions.\n\nThere are also some typos. In section 3.3, part Ablation Study on Feature Sets, line 5, the sentence should be “AP are more important than HP”.\n",
"Thanks to all of our reviewers for their thoughtful comments. We have incorporated many if not all of their suggestions into our updated text. We have added much additional analysis of the method into the appendix and hopefully clarified and improved the text. We summarize our additional analysis below.\n\nList of additional analyses performed:\n\na. Because there was no clear winner between SVR kernels (linear versus RBF), we included the kernel into the hyperparameter search space such that each model in the SRM chooses the best kernel dynamically. We have updated all results, both in performance prediction and early stopping experiments, to reflect the new SVR SRM.\n\nb. We have added results for Bayesian Linear Regression (BLR). We include performance prediction results for BLR in Table 1 and Figure 3. We also include results using a BLR SRM (using natural uncertainty estimates instead of ensemble estimates) and find that in some cases it outperforms the SVR SRM (see Figures 4 and 6). We believe this further strengthens our main point that simple models can provide accurate neural network prediction performance.\n\nc. Appendix Section D: We have included results where we use a SVR model trained on only architecture features and hyperparameters (no time-series) as an acquisition function used to choose configurations to evaluate within f-Hyperband (Similar to Klien et al. 2017). We did not find any significant improvement from adding this acquisition function.\n\nd. Appendix Section E: We include analysis of the Gaussian error assumption used to estimate uncertainty in frequentist SRMs. We empirically found that the assumption holds well by comparing the held out error distributions to training error distributions.\n\ne. Appendix Section F: We expound upon the ablative analysis presented in Table 2 to give more intuition for which features are useful in predicting future performance through analyzing the weights of linear performance prediction models. \nWe added Figure 13 to Appendix Section G, which is complementary to Figure 12. In this experiment we show the potential speedup if one uses pretrained SRMs with f-Hyperband.\n\nf. Appendix Section H: We add more results on the robustness of SRMs to out-of-distribution configurations (we originally just included one such experiment in the main text).\n\f\n",
"Thank you for your thoughtful review! Please see below for responses to your questions.\n\n* “Doesn't the good performance without TS on, e.g., ResNets in Table 2 imply that the Deep ResNets subfigure in Figure 3 should start out at 80+?”\n\n- We investigated the difference between Table 2 and Figure 3 and we found that this difference was a result of the different hyperparameter ranges used to optimize the SVRs in the two experiments that we ran to compute results for Table 2 and Figure 3. To ensure that the results are consistent across all the experiment in the paper, we have now updated all results in the paper with SVR models that have the kernel as an additional hyperparameter searched over (with linear and RBF kernel as options). We have updated the results of Table 2 and Figure 3 with these new models, which are now consistent. \n\n* Additional Analysis on time-series features. \n- We apologize for not including this analysis in the original submission. We used the linear nu-SVR model to compare the weights of all features, which are normalized before training the SVR. We found the following main insights across datasets:\n\na. The time-series (TS) features on average have higher weights than HP and AP features (which is confirmed by our ablation studies in Table 2 as well). \nb. The original validation accuracies (y_t) have higher weights on average than the first-order differences (y_t’) and second-order differences (y_t’’). \nc. In general, later epochs in y_t have higher weights. In other words, the latest performance of the model available for prediction is much more important for performance prediction than initial performance, when the model has just started training. This makes intuitive sense. \nd. Early values of (y_t’) also have high weights, which indicates learning quickly in the beginning of training is a predictor of better final performance in our datasets. \ne. Among AP and HP features, the total number of parameters and depth are very important features for the CNNs, and they are assigned weights comparable or higher than late epoch accuracies (y_t). However, they have much lower weight in the LSTM experiment. The number of filters in each convolutional layer also has a high positive weight for CNNs. In general, architectural features are much more important for CNNs as compared to LSTMs. Hyperparameters like initial learning rate and step size were generally not as important as the architectural features, which is corroborated by the ablation study in Table 1. \n\nWe have included these details in Appendix Section F and Figure 10. \n\n\n* “In Table 1 the linear SVM uniformly outperforms the RBF SVM, so why use the RBF version?”\n\n- To ensure consistency across experiments, we have rerun all our results such that SVR now has its kernel as a hyperparameter that is also searched over. \n\n* “how is speedup computed in Figure 4.”\n\n- We compute speedup as (# iterations used without early stopping) / (# of iterations used with early stopping). In Figure 4, we compare the total number of iterations used in a full MetaQNN experiment to a simulation where we early stop models based on the prediction of an SRM (as detailed in Section 4.1. We have included these details in the caption of Figure 4 in the updated version. 
\n\n* “whether 0.00006 seconds vs 0.024 seconds is something we should care about in this kind of work”\n\n- We do not wish to claim that the difference between 0.00006 seconds vs 0.024 seconds is significant in this context, and we have deemphasised the comparison to prior work on speed in the updated text. However, the difference between the time for one inference required by our method (0.00006 seconds) and LCE (1 minute) can result in difference in overhead. Since it is necessary to continue training a model on GPU while early stopping prediction is being performed, an overhead equal to the time required to evaluate the early prediction model will be added. If we assume that it takes 1 minute to evaluate LCE, the overhead for the Zoph and Le (2017) experiment---which trains 12800 models---would be (12800*1 min) or 8.8 GPUdays. The equivalent time for our method would be (12800*0.00006 s) or 8*10^-6 GPUdays. \n\n\n* “r-squared as a measure of performance”\n\n- R-square allows us to evaluate the accuracy of a performance prediction model across the search space, which is important because we do not want the performance prediction model to overestimate the performance of poorly-performing architectures nor to underestimate the performance of well-performing architectures. Having a predictor that works well in all parts of the search space is useful for sequential model based algorithms. For example, in Q-Learning, if you have two poor models, it is useful for the agent to still know which one was relatively better. Finally, Figure 2 qualitatively shows that our models work well across performance ranges.\n\n* Typos / Details\n- We have fixed these typos/errors. We appreciate your careful reading!\n",
"Thank you for your thoughtful review!\n\n* Why use SVR instead of a Gaussian Process? \n\n- One goal of this work was to make early stopping practical to use for arbitrary, albeit large-scale, architecture searches. This is why we constrained ourselves to simple regression models that have many quick and standardized implementations available. In order to train a model on as few as 100 data points, we also needed models that have low sample complexity. At your suggestion, we ran some experiments with Gaussian Processes and Bayesian Linear Regression (BLR). We found Gaussian Processes with a standard RBF covariance kernel performed very poorly. However, similar to OLS, Bayesian Linear Regression (BLR) performed comparably to SVR in performance prediction (see Figure 3 in the updated text). We also found that a BLR SRM actually achieved a faster speedup on MetaQNN than a SVR SRM; however, the BLR SRM had suboptimal performance when used with f-Hyperband on the SVHN dataset (see Figure 4 and 6 in the updated text). In summary, our observation that simple regression models trained with features based on time-series performance, architecture parameters, and training parameters accurately predict the final performance on networks holds well, irrespective of the regression model used. \n\n* Performance Prediction for Early Stopping in Domhan et al (2015)\n\n- We did not wish to claim in the text that the use of early stopping for performance prediction is a contribution of our paper. To clarify, we have now included a note in the updated text that our early stopping formulation follows that of Domhan et al (2015). \n\n* Computational cost comparison with LCE\n\n- Thanks for your detailed comment on this. You are correct in that early stopping can be implemented on a CPU while the GPU continues to train the architecture. However, since it is necessary to continue training a model on GPU while early stopping prediction is being performed on the CPU, an overhead equal to the time required to evaluate the early prediction model once will be added. If we assume that it takes 1 minute to evaluate LCE, the overhead for the Zoph and Le (2017) experiment---which trains 12800 models---would be (12800*1 minute) or 8.8 GPUdays. The equivalent time for our method would be (12800*0.00006 seconds) or 8*10^-6 GPUdays. That said, we agree with you that comparing computational cost of different empirical methods is tricky and we have updated the text to de-emphasize the comparison with prior work on speed. The most important metric to compare methods should be prediction accuracy.\n\n* Clarification Hyperband Experiment\nResponses to questions on the hyperband experiment. \n\n1. Extending f-Hyperband all the way to the right:\n- In the updated text, we have extended f-Hyperband in both subfigures of Figure 6 as you suggest. When you give f-Hyperband the same number of raw iterations as vanilla Hyperband (i.e. extending Figure 6), f-Hyperband in fact outperforms vanilla hyperband consistently (with more than the standard error between seeds) on both experiments with both settings for kappa. \n\n2. Questions about speedup \n- Our apologies for not explaining the claim on speedup in detail. We indeed did not consider difference in performance when claiming 7x speedup. We have now completed experiments where we use pretrained SRMs, and then calculate the speedup and see the difference in performance. 
We found that the most aggressive settings of early stopping were detrimental to the performance, and we were only able to safely obtain a 4x speedup on CIFAR-10 and a 3x speedup on SVHN, while not compromising on the ability to find the optimal configuration. We have added the results of this experiment to the appendix and updated our claim. Thank you again for pointing this out!\n\n3. Prior work on speeding up hyperband\n- We have included a reference to the Klein et al. (2017) experiment with hyperband in Section 4.2. This is the only prior work that we are aware of as well. We did not include a direct comparison to Klein et al. in Figure 6, since f-Hyperband relies on early stopping for speedup, while Klein et al. use BNN as an acquisition function for choosing the next models to evaluate. However, we also included a new experiment in the Appendix (Section D) which uses SVR trained on only architecture features and hyperparameters as an acquisition function in a similar manner to Klein et al. (2017). We found that this did not help much over f-Hyperband.\n \n* Typos / Details\n- We have fixed these typos/errors. We appreciate your careful reading!\n",
"Thank you for your thoughtful review and suggestions! We agree with you that predicting final performance of neural networks using a regression model trained with features based on time-series accuracies, architecture parameters, and training parameters is a surprisingly simple and effective idea. We hope that this work is published to advance the literature in this previously under-explored area, and to establish proper, simple baselines in future work in this field. We apologize if the exposition was not clear and we have included detailed explanation of our method below and also in the updated text. \n\nIn response to your specific questions:\n\n1. “How is the ensemble of (1-T) training models trained to predict the f(T)?” \n\n- We train T-1 separate regression models, where each sequential model uses one more point of the validation curve (i.e. the k’th model would use validation measurements from the first k epochs). We do not ensemble these models for prediction, so there are no combination weights. For early stopping, if we have trained a model to k epochs, then we use the k’th regression model to compute a performance estimate.\n\n2.1 Comparing time complexity of training and inference of SRMs and other methods \n\n- While SRMs do require training 100 configurations to build a prediction model, this does not add any extra computational overhead in the large-scale architecture searches---which train several thousand models per experiment, each for a large number of epochs. For example, Zoph and Le (2017) trained 12,800 models. Moreover, as experiments in Figure 4 show, even after taking the time to train a 100 models into account, SRMs are significantly faster than previous methods (e.g., LCE) that do not require a meta-training set. Similarly, our method obtains speedup on Hyperband, because, again, there is no overhead from incorporating our method into any search method. In sum, for appropriate applications like hyperparameter search and neural network architecture search, the computational expense of our method should not be a hindrance especially since most searches involve training models on a GPU, and we train our SRMs on CPU. \n\n2.2 Using SRMs on different datasets without retraining \n\n- Since we used quite different architecture types (ResNets, LSTMs, basic CNNs) and hyperparameter sets (e.g. stepwise exponential learning rate decay) in individual experiments to demonstrate the versatility of our method, the learning curves across datasets were too dissimilar for transfer learning to work well. However, this is certainly an important area of future research with useful applications. \n\n\n3. “More results to show the robustness of SRM to violation of different hyperparameters.” \n\n- According to your suggestion, we ran several additional experiments in this vein, where we trained an SRM with models with hyperparameter values below/above the median and tested their performance on the remainder of the models. SRMs generally showed consistent performance across such splits. For example, an SRM trained using LSTMs with # layers below the median obtained an r-squared of 0.967 on the remainder and the performance for the other split was 0.986. We have included results of several such experiments in Appendix Section H. \n\n4. No results on huge datasets like big Imagenet\n\n- Unfortunately we do not have enough resources to experiment with this method on big Imagenet. 
We hope that our work inspires such investigations, especially in the industry setting, where our method can be valuable. \n\n5. “Results In Table 2 and Figure 3 should be reported by number of epochs, in addition or not to percentage.”\n\n- We report the results in terms of percentage for easy visual comparison across datasets within the figure. In the camera-ready version, we will replicate this figure in the Appendix using number of epochs. The total number of epochs for each experiment are found in Section 3.2.\n\n6. “Experimental results on estimating the model uncertainty that are related to the effectiveness of proposed method and considered assumptions.”\n\n- We conducted more analysis on our considered assumptions (Appendix Section E). We test the assumption on Gaussian-distributed errors using examples of the held out set error distributions compared with the Gaussian computed from training set errors. This assumption holds reasonably well (Appendix Figure 8). Figure 9 shows plots of the mean log likelihood of the held out errors being drawn from the Gaussian parameterized by the mean and variance of the training errors. These plots show that the log likelihood is very close to the baseline (mean log likelihood of samples drawn from the same Gaussian), which also shows that the assumption holds well. Finally, the effectiveness of the proposed method is also illustrated by results in Figure 4, where our algorithm successfully recovers the optimal model in most cases. \n\n"
] | [
-1,
6,
6,
4,
-1,
-1,
-1,
-1
] | [
-1,
4,
5,
3,
-1,
-1,
-1,
-1
] | [
"BkaVm42Xz",
"iclr_2018_BJypUGZ0Z",
"iclr_2018_BJypUGZ0Z",
"iclr_2018_BJypUGZ0Z",
"iclr_2018_BJypUGZ0Z",
"ryMvgqdgM",
"B12MLEclM",
"SJzxff6xM"
] |
iclr_2018_SkOb1Fl0Z | A Flexible Approach to Automated RNN Architecture Generation | The process of designing neural architectures requires expert knowledge and extensive trial and error. While automated architecture search may simplify these requirements, the recurrent neural network (RNN) architectures generated by existing methods are limited in both flexibility and components. We propose a domain-specific language (DSL) for use in automated architecture search which can produce novel RNNs of arbitrary depth and width. The DSL is flexible enough to define standard architectures such as the Gated Recurrent Unit and Long Short Term Memory and allows the introduction of non-standard RNN components such as trigonometric curves and layer normalization. Using two different candidate generation techniques, random search with a ranking function and reinforcement learning, we explore the novel architectures produced by the RNN DSL for language modeling and machine translation domains. The resulting architectures do not follow human intuition yet perform well on their targeted tasks, suggesting the space of usable RNN architectures is far larger than previously assumed. | workshop-papers | The paper presents a domain-specific language for RNN architecture search, which can be used in combination with a learned ranking function or RL-based search. While the approach is interesting and novel, the paper would benefit from an improved evaluation, as pointed out by reviewers. For example, the paper currently evaluates coreDSL+ranking for language modelling and extendedDSL+RL for machine translation. The authors should use the same evaluation protocol on all tasks, and also compare with the state-of-the-art MT approaches. | test | [
"SkQLSjmSG",
"H11b-JnVf",
"rkLDKb5Nf",
"SJgAX9tVf",
"r1wiKz5ef",
"SJ4ObrLEG",
"S1exhDQJf",
"BkuT3b9ef",
"rJOkNuzVf",
"Ska-dZ6mG",
"SkOgOWpmz",
"HyOUwZTQM"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Agree that is not straight forward, but would be in my view important so that the community understands what do we gain with a new method. One could for example run the different methods (for different initial conditions) and quantify how often they end up in better solutions (or 'radically' new solutions).",
"Did you have a specific quantitative comparison in mind? We do compare with the architectures found by e.g. Zoph and Le (2017) in Table 1.\nWe agree that it would be great to have a metric \"radical novelty of architecture\" or similar but it seems like at this point the qualitative differences of architectures with novel operators and the flexibility of our DSL along with good quantitative results are the key metrics that we can report.\nWe also updated the paper to report our hardware infrastructure and the specific computation times to make our work better comparable to existing methods.",
"Thanks for clarifying this point. I understand that the manuscript contains a section briefly highlighting the diferences, but to properly evaluate what does the community gain with this method. Therefore having a more quantitative comparison would be important.",
"From your previous comment, it was not clear to us what parts you specifically wanted to have contrasted with previous work - we do have a section that highlights what we think are our major improvements and distinctions over previous work. If you could please clarify what previous work you want us to discuss in more detail, that would be greatly appreciated and we are happy to extend the paper in that direction.\nTo reiterate, we see Zoph and Le (2017) as the most similar approach to ours and improve over that with two major contributions: 1. we introduce a domain-specific-language (DSL) that allows for the generation of much more flexible and thus more radically novel architectures (see Section 5, paragraph 1 for details) and 2. we expand on commonly used operators with so far widely unexplored operators like sine curves, division and others.\n",
"The authors introduce a new method to generate RNNs architectures. The authors propose a domain-specific language two types of generators (random and RL-based) together with a ranking function and evaluator. The results are promising, and this research introduces a framework that might enable the community to find new interesting models. However, it's not clear how these framework compare with previous ones (see below). Also, the clarity of the text could be improved.\n\nPros:\n1. An interesting automatic method for generating RNNs is introduced by the authors (but is not entirely clear how does ir compare with previous different approaches)\n2. The approach is tested in a number of tasks: Language modelling (PTB and wikipedia-2) and machine translation (\n3. In these work the authors tested a wide range of different RNNs\n3. It is interesting that some of the best performing architectures (e.g. LSTM, residual nets) are found during the automatic search\n\n\nCons:\n1. It would be nice if the method didn’t rely on defining a specific set of functions and operators upon which the proposed method works.\n2. The text has some typos: for example: “that were used to optimize the generator each batch”\n3. In section 5, the authors briefly discuss other techniques using RL and neuroevolution, but they never contrast these approaches with theirs. Overall, it would be nice if the authors had made a more direct comparison with other methods for generating RNNs.\n4. The description of the ranking function is not clear. What kind of networks were used? This appears to introduce a ranking-network-specific bias in the search process.\n\nMinor comments:\n1. The authors study the use of subtractive operators. Recently a new model has considered the use of subtractive gates in LSTMs as a more biologically plausible implementation (Cortical microcircuits as gated-recurrent neural networks, NIPS 2017).\n2. Figure 4 missing label on x-axis\n3. End of Related Work period is missing.\n3. The authors state that some of the networks generated do not follow human intuition, but this doesn’t appear to discussed. What exactly do the authors mean?\n4. Not clear what happens in Figure 4 in epoch 19k or so, why such an abrupt change?\n5. Initial conditions are key for such systems, could the init itself be included in this framework?",
"Thanks for your reply and clarifications.\n\nOne of the key issues that the authors did not address in their reply is that of the comparison with previous work, this is of importance to properly assess the potential impact of this work. Therefore, I have decided to revise my rating slightly down. It is therefore, unlikely that this is going to be accepted. However, I encourage the authors in finishing this work with our comments in mind.",
"This work tries to cast the search of good RNN Cell architectures as a black-box optimization problem where examples are represented as an operator tree and are either 1. Sampled randomly and scored based on a learnt function OR 2. Generated by a RL agent.\nWhile the overall approach appears to generalize previous work, I see a few serious flaws in this work:\n\nLimited scope\nAs far as I can tell this work only tries to come up with a design for a single RNN cell, and then claims that the optimality of the design will carry over to stacking of such modules, not to mention more complicated network designs. \nEven the design of a single cell is heavily biased by human intuition (section 4.1) It would have been more convincing to see that the system learn heuristics such as “don’t stack two matrix mults” rather than have these hard coded.\n\nNo generalization guarantees:\nNo attempt is made to optimize hyperparameters of the candidate architectures. This leads one to wonder if the winning architecture has won only because of the specific parameters that were used in the evaluation.\nIndeed, the experiments in the paper show that a cell which was successful on one task isn’t necessary successful on a different one, which questions the competence of the scoring function / RL agent.\n\nOn the experimental side:\nNo comparison is made between the two optimization strategies, which leaves the reader wondering which one is better.\nControl for number of network variables is missing when comparing candidate architectures. \n",
"This paper investigates meta-learning strategy for automated architecture search in the context of RNN. To constraint the architecture search space, authors propose a DSL that specifies the RNN recurrent operations. This DSL allows to explore RNN architectures using either random search or a reinforcement-learning strategy. Candidate architectures are ranked using a TreeLSTM that tries to predict the architecture performances. The top-k architectures are then evaluated by fully training them on a given task.\n\nAuthors evaluate their approach on PTB/Wikitext 2 language modeling and Multi30k/IWSLT'16 machine translation. In both experiments, authors show that their approach obtains competitive results and can sometime outperforms RNN cells such as GRU/LSTM. In the PTB experiment, their architecture however underperforms other LSTM variant in the literatures.\n\n\n- Quality/Clarity\nThe paper is overall well written and pleasant to read.\n\nFew details can be clarified. In particular how did you initialize the weight and bias for both the LSTM/GRU baselines and the found architectures? Is there other works leveraging RNN that report results on the Multi30k/IWSLT datasets?\n\nYou state in paragraph 3.2 that human experts can inject the previous best known architecture when training the ranking networks. Did you use this in the experiments? If yes, what was the impact of this online learning strategy on the final results? \n\n\n- Originality\nThe idea of using DSL + ranking for architecture search seems novel.\n\n\n- Significance\nAutomated architecture search is a promising way to design new networks. However, it is not clear why the proposed approach is not able to outperforms other LSTM-based architectures on the PTB task. Could the problem arise from the DSL that constraint too much the search space ? It would be nice to have other tasks that are commonly used as benchmark for RNN to see where this approach stand.\n\nIn addition, authors propose both a DSL, a random and RL generator and a ranking function. It would be nice to disentangle the contributions of the different components. In particular, did the authors compare the random search vs the RL based generator or the performances of the RL-based generator when the ranking network is not used?\n\nAlthough authors do show that they outperform NAScell in one setting, it would be nice to have an extended evaluation (using character level PTB for instance).",
"This work is indeed nice, but I still think it is not very useful for ML practitioners, for the reasons I mentioned: limited scope, no generalization across hyper-parameters (which I believe we agree on).\nI also don't think this method contributes to theoretical understanding of RNNs. ",
"Thanks for your response. While we agree that our work helps generalize previous work in this area we do also agree that it doesn’t resolve all the issues that you’ve note.\n\nIn terms of limited scope, our work does show that a single RNN cell trained on a two layer setup is able to extend to a three layer setup, as seen with our best BC3 cell results being reported on a three layer setup. Our baseline experimental setup involved a two layer cell setup for this very reason, to ensure that discovered RNN cells could be stacked. We believe we show this is indeed true.\n\nThe heuristics introduced for that section were introduced to limit the search space. Whilst it is likely that the architecture generator would learn to avoid models which our heuristics filtered we decided that the computation expended for learning those relatively simple concepts was better spent on the architecture search process itself.\n\nWhilst we kept the number of parameters equal for the language modeling experiment, keeping the number of parameters equal for the translation experiment was more complicated. The RNN cells discovered residual connections and hence prevented easy scaling up or down of the overall hidden size without an additional projection layer.\n\nIn terms of generalization guarantees, like other machine learning models the architecture generator is only informed by the training data it receives. In our instance this is from a single task with limited sized models due to the computational constraints but this could be extended to larger models and across more varied tasks. We agree that the competence of the scoring function / RL agent would be dependent on the training data it receives.",
"Thank you for your review.\n\nIn regards to initialization of the weights and bias within our models, we used the default that was set within PyTorch. For both weights and bias this was uniform sampling from +-(1/sqrt(HIDDEN)).\n\nWe are not aware of results that utilize only an RNN on the Multi30k/IWSLT datasets, likely as the LSTM or GRU are considered the standard baselines for such work, consistently outperforming RNNs.\n\nWe did not inject human known architectures into our search as we were interested in understanding whether the given architecture search process could generate architectures of comparable accuracy. Extending existing architectures would be an interesting extension and could provide an even more computationally efficient starting point for the ranking function’s search.\n\nFor the LM task, the LSTM has been finely tuned over quite some time for these tasks. The work of Melis et al and Merity et al go into quite an amount of detail about the hyperparameter search they perform, with the former leveraging large scale automated hyperparamter optimization on Google’s infrastructure and the latter featuring substantial manual tuning building on the work of many others.\n\nAnother important consideration that we believe may have been problematic for beating the LSTM’s performance is that the baseline experiments we utilized featured smaller models for faster training time. Whilst necessary for our setup it is possible that larger models with fewer iterations would have been a better choice.\n",
"Thank you for your review.\n\nWe agree that a more flexible set of operators is highly desirable, especially for finding radically novel architectures. Here, we tried to find a trade-off between flexible operators and a reasonably sized search space. In the future, expanding that search space even further will be an interesting avenue to find even more diverse architectures.\n\nThe recent work regarding biologically plausible models that utilize subtractive gates is fascinating - thanks for the reference!\n\nFor the ranking function, all the nodes except for those that are positional dependent are ChildSum TreeLSTM nodes, whilst those nodes requiring positional information are N-ary TreeLSTM nodes (more detail in 3.2). The ranking function hyper parameters and details are described in greater detail in Appendix B2.\n\n\nIn regards to Figure 4, epoch 19k, we are not entirely certain what occurred there. This was a continuous run and there were no explicit changes during that section. In the text, we briefly mention the hypothesis that the generator first learns to build robust architectures and is only then capable of inserting more varied operators without compromising the RNN's overall stability.\n\nFor the initialization, it is the default that was found within PyTorch. As an example, the initializations for the RNNs were all equivalent to PyTorch’s Linear, which performs uniform initialization between +-(1/sqrt(HIDDEN)).\nhttps://github.com/pytorch/pytorch/blob/b06276994056ccde50a6550f2c9a49ab9458df8f/torch/nn/modules/linear.py#L48"
] | [
-1,
-1,
-1,
-1,
6,
-1,
4,
5,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
-1,
4,
4,
-1,
-1,
-1,
-1
] | [
"H11b-JnVf",
"rkLDKb5Nf",
"SJgAX9tVf",
"SJ4ObrLEG",
"iclr_2018_SkOb1Fl0Z",
"HyOUwZTQM",
"iclr_2018_SkOb1Fl0Z",
"iclr_2018_SkOb1Fl0Z",
"Ska-dZ6mG",
"S1exhDQJf",
"BkuT3b9ef",
"r1wiKz5ef"
] |
iclr_2018_SySaJ0xCZ | Simple and efficient architecture search for Convolutional Neural Networks | Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%. | workshop-papers | The paper proposes a method for architecture search using network morphisms, which allows for faster search without retraining candidate models. The results on CIFAR are worse than the state of the art, but reasonably competitive, and achieved using limited computation resources. It would have been interesting to see how the method would perform on large datasets (ImageNet) and/or other tasks and search spaces. I would encourage the authors to extend the paper with further experimental evaluation. | val | [
"BkGYrcIlG",
"HJ8nXbKeM",
"Hk4ciAteG",
"ByYaDP9XG",
"ByYtFtLQf",
"SJJLYtL7f",
"HkFZKt8mM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes a neural architecture search method that achieves close to state-of-the-art accuracy on CIFAR10 and takes much less computational resources. The high-level idea is similar to the evolution method of [Real et al. 2017], but the mutation preserves net2net properties, which means the mutated network does not need to retrain from scratch.\n\nCompared to other papers on neural architecture search, the required computational resource is impressively small: close to state-of-the-art result in one day on a single GPU. However, it is not clear to me what contribute to the massive improvement of speed. Is it due to the network morphing that preserve equality? Is it due to a good initial network structure? Is it due to the well designed mutation operations? Is it due to the simple hill climbing procedure (basically evolution that only preserve the elite)? Is it due to a well crafted search space that is potentially easier?\n\nThe experiments in this paper does not provide enough evidence to tease apart the possible causes of this dramatic reduction on computational resources. And the comparisons to other papers seems not fair since they all operate on different search space. \n\nIn summary, getting net2net to work for architecture search is interesting. And I love the results. These are very impressive numbers for neural architecture search. However, I am not convinced that the improve is resulted from a better algorithm. I would suggest that the paper carefully evaluates each component of the algorithm and understand why the proposed method takes far less computational resources.",
"This paper presents a method to search neural network architectures at the same time of training. It does not require training from scratch for each architecture, thus dramatically saves the training time. The paper can be understood with no problem. Moderate novelty, network morphism is not novel, applying it to architecture search is novel.\n\nPros:\n1. The required time for architecture searching is significantly reduced.\n2. With the same number or less of parameters, this method is able to outperform previous methods, with much less time.\n\nHowever, the method described is restricted in the following aspects.\n\n1. The accuracy of the training set is guaranteed to ascend because network morphism is smooth and number of params is always increasing, this also makes the search greedy , which could be suboptimal. In addition, the algorithm in this paper selects the best performing network at each step, which also hampers the discover of the optimal model.\n\n2. Strong human prior, network morphism IV is more general than skip connection, for example, a two column structure belongs to type IV. However, in the implementation, it is restricted to skip connection by addition. This choice could be motivated from the success of residual networks. This limits the method from discovering meaningful structures. For example, it is difficult to discover residual network denovo. This is a common problem of architecture searching methods compared to handcrafted structures.\n\n3. The comparison with Zoph & Le is not fair because their controller is a meta-network and the training happens only once. For example, the RNNCell discovered can be fixed and used in other tasks, and the RNN controller for CNN architecture search could potentially be applied to other tasks too (though not reported).\n",
"This paper proposes a variant of neural architecture search. It uses established work on network morphisms as a basis for defining a search space. Experiments search for effective CNN architectures for the CIFAR image classification task.\n\nPositives:\n\n(1) The approach is straightforward to implement and trains networks in a reasonable amount of time.\n\n(2) An advantage over prior work, this approach integrates architectural evolution with the training procedure. Networks are incrementally grown; child networks are initialized with learned parameters from their parents. This eliminates the need to restart training when making an architectural change, and drastically speeds the search.\n\nNegatives:\n\n(1) The state-of-the-art CNN architectures are not mysterious or difficult to find, despite the paper's characterization of them being so. Indeed, ResNet and DenseNet designs are both guided by extremely simple principles: stack a series of convolutional layers, pool occasionally, and use some form of skip-connection throughout. The need for architectural search is unclear.\n\n(2) The proposed search space is boring. As described in Section 4, the possibly evolutionary changes are limited to deepening the network, widening the network, and adding a skip connection. But these are precisely the design aspects that have been well-explored by human trial and error and for which good rules of thumb are already available.\n\n(3) As a consequence of (1) and (2), the result is essentially rigged. Since only depth, width, and skip connections are considered, the end network must end up looking like a ResNet or DenseNet, but with some connections pruned. There is no way to discover a network outside of the principled design space articulated in point (1) above. Indeed, the discovered network diagrams (Figures 4 and 5) fall in this space.\n\n(4) Performance is worse than the best hand-designed baselines. One would hope that, even if the search space is limited, the discovered networks might be more efficient or higher performing in comparison to the human designs which fall within that same space. However, the results in Tables 3 and 4 show this not to be the case. The best human designs outperform the evolved networks. Moreover, the evolved networks are woefully inefficient in terms of parameter count.\n\nTogether, these negatives imply the proposed approach is not yet at the point of being useful in practice. I think further work is required (perhaps expanding the search space) to resolve the current limitations of automated architecture search.\n\nMisc:\n\nTables 3 and 4 would be easier to parse if resources were simply reported in terms of total GPU hours.",
"Dear readers,\n\nwe updated the experimental section of our paper.\n\n",
"Thank you for the comment. \nRegarding your concern what actually yields the massive speed improvement: We will update the paper in just a few days, where you will find an additional experiment: We run our algorithm without the network morphism constraint but rather only ‘inherit’ unchanged weights (this is in essence what Real et al. in Large-Scale Evolution of Image Classifiers did). You will then find in section 5.1.1 (baseline experiments on CIFAR10) that performance deteriorates if we either 1) turn model selection off or 2) turn SGDR off or 3) turn the network morphism constraint off. In conclusion, all three aspects of the method are essential and contribute to the speed up.\n",
"Thank you for the comment. \nRegarding 1. and 2.: Yes, we agree with that. We will work on this.\nRegarding 3.: Even though we searched for the whole architecture, our method could in principle also be used to learn cells/blocks, which could then be reused for other problems.\n",
"Thank you for the comment. \nRegarding your negatives:\n(1)\tWe agree with the simple design principles pointed out. However, it took quit a long time till people came up with them. Beside that, these principles depend on the input and target domain. There are certainly other, not well understood, domains, were such principles do not exist, and ideally (at some point in the future) an architectures search algorithm can help in these situations. We agree that there may not be need for architecture search to improve CIFAR image classification (because it is already well understood and basically ‘solved’), but our paper presents a general method applicable to arbitrary domains. The reason why we chose CIFAR is a) a simple data set to start with, b) benchmarking and c) computational constraints. \n(2)\tWe would like to note that we give just a few examples of network morphisms – they can certainly be extended by other, not so boring, ones. \n(4)\tYes, we do not reach hand-designed baselines. However note that even training some of these networks (e.g., shake shake network) takes longer than our architecture search algorithm. Also hyperparameters and the training procedure are often highly optimized for state of the art hand crafted architectures, which is not the case for our networks.\n"
] | [
6,
5,
4,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SySaJ0xCZ",
"iclr_2018_SySaJ0xCZ",
"iclr_2018_SySaJ0xCZ",
"iclr_2018_SySaJ0xCZ",
"BkGYrcIlG",
"HJ8nXbKeM",
"Hk4ciAteG"
] |
iclr_2018_BkCV_W-AZ | Regret Minimization for Partially Observable Deep Reinforcement Learning | Deep reinforcement learning algorithms that estimate state and state-action value functions have been shown to be effective in a variety of challenging domains, including learning control strategies from raw image pixels. However, algorithms that estimate state and state-action value functions typically assume a fully observed state and must compensate for partial or non-Markovian observations by using finite-length frame-history observations or recurrent networks. In this work, we propose a new deep reinforcement learning algorithm based on counterfactual regret minimization that iteratively updates an approximation to a cumulative clipped advantage function and is robust to partially observed state. We demonstrate that on several partially observed reinforcement learning tasks, this new class of algorithms can substantially outperform strong baseline methods: on Pong with single-frame observations, and on the challenging Doom (ViZDoom) and Minecraft (Malmö) first-person navigation benchmarks. | workshop-papers | The reviewers agree this is a really interesting paper, with an interesting idea (in particular
the use of regret clipping might provide a benefit over typical policy gradient methods). However, there are two major concerns: 1) clarity / exposition and more importantly 2) lack of a strong empirical motivation for the new approach (why do standard methods work just as well on these partially observable domains?). | train | [
"ByyLE_tgG",
"S17ehb1WM",
"SkYyvPyWf",
"SkNPS_pXz",
"rJIfV_TXz",
"H1a1BOTQM",
"r1fM8U3Xz",
"Hk29NGimM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"This paper presents Advantage-based Regret Minimization, somewhat similar to advantage actor-critic with REINFORCE.\nThe main focus of the paper seems to be the motivation/justification of this algorithm with connection to the regret minimization literature (and without Markov assumptions).\nThe claim that ARM is more robust to partially observable domains is supported by experiments where it outperforms DQN.\n\nThere are several things to like about this paper:\n- The authors do a good job of reviewing/referencing several papers in the field of \"regret minimization\" that would probably be of interest to the ICLR community + provide non-obvious connections / summaries of these perspectives.\n- The issue of partial observability is good to bring up, rather than simply relying on the MDP framework that is often taken as a given in \"deep reinforcement learning\".\n- The experimental results show that ARM outperforms DQN on a suite of deep RL tasks.\n\nHowever, there are also some negatives:\n- Reviewing so much of the CFR-literature in a short paper means that it ends up feeling a little rushed and confused.\n- The ultimate algorithm *seems* like it is really quite similar to other policy gradient methods such as A3C, TRPO etc. At a high enough level, these algorithms can be written the same way... there are undoubtedly some key differences in how they behave, but it's not spelled out to the reader and I think the connections can be missed.\n- The experiment/motivation I found most compelling was 4.1 (since it clearly matches the issue of partial observability) but we only see results compared to DQN... it feels like you don't put a compelling case for the non-Markovian benefits of ARM vs other policy gradient methods. Yes A3C and TRPO seem like they perform very poorly compared to ARM... but I'm left wondering how/why?\n\nI feel like this paper is in a difficult position of trying to cover a lot of material/experiments in too short a paper.\nA lot of the cited literature was also new to me, so it could be that I'm missing something about why this is so interesting.\nHowever, I came away from this paper quite uncertain about the real benefits/differences of ARM versus other similar policy gradient methods... I also didn't feel the experimental evaluations drove a clear message except \"ARM did better than all other methods on these experiments\"... I'd want to understand how/why and whether we should expect this universally.\nThe focus on \"regret minimization perspectives\" didn't really get me too excited...\n\nOverall I would vote against acceptance for this version.\n",
"This paper introduces the concepts of counterfactual regret minimization in the field of Deep RL. Specifically, the authors introduce an algorithm called ARM which can deal with partial observability better. The results is interesting and novel. This paper should be accepted.\n\nThe presentation of the paper can be improved a bit. Much of the notation introduced in section 3.1 is not used later on. There seems to be a bit of a disconnect before and after section 3.3. The algorithm in deep RL could be explained a bit better.\n\nThere are some papers that could be connected. Notably the distributional RL work that was recently published could be very interesting to compare against in partially observed environments.\n\nIt could also be interesting if the authors were to run the proposed algorithm on environments where long-term memory is required to achieve the goals.\n\nThe argument the authors made against recurrent value functions is that recurrent value could be hard to train. An experiment illustrating this effect could be illuminating.\n\nCan the proposed approach help when we have recurrent value functions? Since recurrence does not guarantee that all information needed is captured.\n\n\nFinally some miscellaneous points:\n\nOne interesting reference: Memory-based control with recurrent neural\nnetworks by Heess et al.\n\nPotential typos: in the 4th bullet point in section 3.1, should it be \\rho^{\\pi}(h, s')?",
"Quality and clarity:\n\nThe paper provides a game-theoretic inspired variant of policy-gradient algorithm based on the idea of counter-factual regret minimization. The paper claims that the approach can deal with the partial observable domain better than the standard methods. However the results only show that the algorithm converges, in some cases, faster than the previous work reaching asymptotically to a same or worse performance. Whereas one would expect that the algorithm achieve a better asymptotic performance in compare to methods which are designed for fully observable domains and thus performs sub-optimally in the POMDPs. \n\nThe paper dives into the literature of counter-factual regret minimization without providing much intuition on why this type of ideas should provide improvement in the case of partial observable domain. To me it is not clear at all why this idea should help in the partial observable domains beside the argument that this method is designed in the game-theoretic settings which makes no Markov assumption . The way that I interpret this algorithm is that by adding A+ to the return the algorithm introduces some bias for actions which are likely to be optimal so it is in some sense implements the optimism in the face of uncertainty principle. This may explains why this algorithm converges faster than the baseline as it produces better exploration strategy. To me it is not clear that the boost comes from the fact that the algorithm deals with partial observability more efficiently.\n\n\nOriginality and Significance:\n\nThe proposed algorithm seems original. However, as it is acknowledged by the authors this type of optimistic policy gradient algorithms have been previously used in RL (though maybe not with the game theoretic justification). I believe the algorithm introduced in this paper, if it is presented well, can be an interesting addition to the literature of Deep RL, e.g., in terms of improving the rate of convergence. However, the current version of paper does not provide conclusive evidence for that as in most of the domains the algorithm only converge marginally faster than the standard ones. Given the fact that algorithms like dueling DQN and DDPG are for the best asymptotic results and not for the best convergence rate, this improvement can be due to the choice of hyper parameter such as step size or epsilon decay scheduling. More experiments over a range of hyper parameter is needed before one can conclude that this algorithm improves the rate of convergence.\n ",
"Thanks for your comments! We have addressed the clarity issues raised by the reviewers and added additional experiments to address reviewer concerns, which we detailed below.\n\n- The presentation of the paper can be improved a bit\n\nWe admit Section 3 can be cleaned up -- in the final we will address this.\n\n- There are some papers that could be connected. Notably the distributional RL work that was recently published could be very interesting to compare against in partially observed environments. ... One interesting reference: Memory-based control with recurrent neural networks by Heess et al.\n\nThanks! In the final we will elaborate on additional connections with the literature.\n\n- The argument the authors made against recurrent value functions is that recurrent value could be hard to train. An experiment illustrating this effect could be illuminating.\n\nWe added Section 6.3.2 with an experiment to illustrate this effect.\n\nWe compared feedforward and recurrent convolutional policies and value functions learned using A2C on the maze-like ViZDoom MyWayHome scenario. Adding recurrence does seem to have a small positive effect, but it is less than the effect due to e.g. choice of algorithm.\n\n- Can the proposed approach help when we have recurrent value functions? Since recurrence does not guarantee that all information needed is captured.\n\nAs ARM only involves different value function estimators it should be able to handle recurrent value functions. In practice we currently run ARM in batch mode only. An online version of ARM would handle recurrence much more naturally; this work is in progress.\n\n- Potential typos: in the 4th bullet point in section 3.1, should it be \\rho^{\\pi}(h, s')?\n\nThanks, this was a typo, it should be: s' -> h'",
"Thanks for your comments! We have addressed the clarity issues raised by the reviewers and added additional experiments to address reviewer concerns, which we detailed below.\n\n- The paper dives into the literature of counter-factual regret minimization without providing much intuition on why this type of ideas should provide improvement in the case of partial observable domain. To me it is not clear at all why this idea should help in the partial observable domains beside the argument that this method is designed in the game-theoretic settings which makes no Markov assumption .\n\nWe have added Section 3.6 with this information.\n\nIn Section 3.6 we make an informal argument based on the regret bounds of CFR/CFR+ vs the convergence rates of other methods. For CFR/CFR+, the regret bound is proportional to the size of the observation space. On the other hand, for the policy gradient method, the regret bound has no direct dependence on the size of the observation space. This suggests that there could be a \"threshold\" level of the observation space size, such that a smaller observation space size (i.e. more partially observable) leads to better relative performance of CFR/CFR+ (and hence ARM), whereas a larger observation space size (i.e. more fully observable) leads to worse relative performance of CFR/CFR+ (and hence ARM).\n\nThis argument suggests that ARM could outperform other methods when there is a high degree of partial observability in a domain (e.g. Minecraft). Conversely, when a domain is nearly fully observable (e.g. Atari) it is possible for ARM to converge slower than other methods -- these match our empirical results on Atari in the Appendix, section 6.3.1.\n\n- adding A+ to the return the algorithm introduces some bias for actions which are likely to be optimal so it is in some sense implements the optimism in the face of uncertainty principle.\n\nThanks! This is a good point which we added to our modified Section 3.4.\n\n- this type of optimistic policy gradient algorithms have been previously used in RL (though maybe not with the game theoretic justification)\n\nIn our modified section 3.5 we address this point in more detail.\n\nOne way to think about existing policy gradient methods is that they approximately minimize the KL-divergence between the policy and a target Boltzmann distribution induced either by the Q-function or by the advantage function. (For simplicity we ignore considerations of entropy regularization.) As a result of the KL-divergence/Boltzmann interpretation, the policy gradient update is linearly proportional to the advantage.\n\nBy analogy, if one were to minimize the KL-divergence between the policy and a different kind of target distribution, the resulting gradient updates would also be different. In particular, ARM proposes a distribution based on regret-matching which is proportional to the positively clipped part of the cumulative clipped advantage (i.e. the \"A+\" function). If we were to implement a policy gradient-like version of ARM, then the resulting gradient update would be proportional to the _logarithm_ of the cumulative clipped advantage, and is inherently different than the existing policy gradient update. 
One consequence is that logarithmic dependence on the advantage means that ARM may be less sensitive to value function overestimation that may result in large positive advantages.\n\n- algorithms like dueling DQN and DDPG are for the best asymptotic results and not for the best convergence rate\n\nRegret in the context of RL can be thought of as \"area over the learning curve (and under the optimal expected return)\". In other words, regret is a measure of sample efficiency. Faster minimization of regret leads to faster convergence rate but generally does not change the overall asymptotic performance that can be attained. Empirically we observe that ARM does achieve higher performance within a finite number of steps on some tasks compared to others (the Minecraft one in particular).\n\nWe are happy to compare with any other methods, but to our knowledge dueling double DQN is a strong baseline for the empirical convergence rate in addition to the asymptotic performance (we didn't compare with DDPG because we only evaluated discrete action space domains).\n\n- More experiments over a range of hyper parameter is needed before one can conclude that this algorithm improves the rate of convergence\n\nSome general comments about our hyperparameter tuning:\n(a) by fixing the network architecture, we found that the learning rate and other optimizer hyperparams were mostly architecture-dependent (except for TRPO);\n(b) the number of steps to use for n-step returns made the biggest difference and was independent of algorithm.",
"Thanks for your comments! We have addressed the clarity issues raised by the reviewers and added additional experiments to address reviewer concerns, which we detailed below.\n\n- Reviewing so much of the CFR-literature in a short paper means that it ends up feeling a little rushed and confused.\n\nWe admit Section 3 can be cleaned up -- in the final we will address this.\n\n- The ultimate algorithm *seems* like it is really quite similar to other policy gradient methods such as A3C, TRPO etc.\n\nIn our modified section 3.5 we address this point in more detail.\n\nOne way to think about existing policy gradient methods is that they approximately minimize the KL-divergence between the policy and a target Boltzmann distribution induced either by the Q-function or by the advantage function. (For simplicity we ignore considerations of entropy regularization.) As a result of the KL-divergence/Boltzmann interpretation, the policy gradient update is linearly proportional to the advantage.\n\nBy analogy, if one were to minimize the KL-divergence between the policy and a different kind of target distribution, the resulting gradient updates would also be different. In particular, ARM proposes a distribution based on regret-matching which is proportional to the positively clipped part of the cumulative clipped advantage (i.e. the \"A+\" function). If we were to implement a policy gradient-like version of ARM, then the resulting gradient update would be proportional to the _logarithm_ of the cumulative clipped advantage, and is inherently different than the existing policy gradient update. One consequence is that logarithmic dependence on the advantage means that ARM may be less sensitive to value function overestimation that may result in large positive advantages.\n\n- it feels like you don't put a compelling case for the non-Markovian benefits of ARM vs other policy gradient methods ... I'd want to understand how/why and whether we should expect this universally\n\nWe added section 3.6 to address this point in more detail.\n\nIn Section 3.6 we make an informal argument based on the regret bounds of CFR/CFR+ vs the convergence rates of other methods. For CFR/CFR+, the regret bound is proportional to the size of the observation space. On the other hand, for the policy gradient method, the regret bound has no direct dependence on the size of the observation space. This suggests that there could be a \"threshold\" level of the observation space size, such that a smaller observation space size (i.e. more partially observable) leads to better relative performance of CFR/CFR+ (and hence ARM), whereas a larger observation space size (i.e. more fully observable) leads to worse relative performance of CFR/CFR+ (and hence ARM).\n\nThis argument suggests that ARM could outperform other methods when there is a high degree of partial observability in a domain (e.g. Minecraft). Conversely, when a domain is nearly fully observable (e.g. Atari) it is possible for ARM to converge slower than other methods -- these match our empirical results on Atari in the Appendix, section 6.3.1.\n\nFinally, there is also a lot of work showing that Q-learning based methods (i.e. DQN) can be much more sample efficient than policy gradient methods (e.g. A3C and TRPO). Estimating something that looks like a Q-function often leads to faster convergence. A deeper reason is that policy gradient methods are better at avoiding Markov assumptions by reducing dependence on value function estimation, but at the cost of sample efficiency.",
"Thanks for your comment. In our work we only considered model-free deep RL methods, so I'm not super familiar with the SM-UCRL work by Azizzadenesheli et al which involves POMDP model estimation. My initial impression though is that estimation of the POMDP model parameters using spectral methods is an interesting idea for model-based RL in general.",
"Hi authors, \n\nThe title of your paper reminds me a theory paper on regret bound of POMDPs. \n\"Reinforcement learning of POMDPs using spectral methods\" which does regret minimization of POMDPS.\nI skimmed your paper a bit, and I think it would good to discuss this paper."
] | [
4,
7,
5,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkCV_W-AZ",
"iclr_2018_BkCV_W-AZ",
"iclr_2018_BkCV_W-AZ",
"S17ehb1WM",
"SkYyvPyWf",
"ByyLE_tgG",
"Hk29NGimM",
"iclr_2018_BkCV_W-AZ"
] |
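The ARM responses above contrast the Boltzmann (softmax-of-advantage) target behind standard policy gradients with a regret-matching target proportional to the positively clipped cumulative advantage ("A+"). A minimal numpy sketch of the two target distributions is given below; the function names and toy numbers are illustrative choices, not the authors' implementation.

```python
# Contrast of the two target distributions discussed in the ARM thread above.
import numpy as np

def boltzmann_target(advantages, temperature=1.0):
    """Softmax target implicitly used by standard policy-gradient-style methods."""
    z = (advantages - advantages.max()) / temperature   # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def regret_matching_target(cumulative_clipped_advantage):
    """Regret-matching target: proportional to the positive part of the
    cumulative clipped advantage ("A+"); uniform when nothing is positive."""
    plus = np.maximum(cumulative_clipped_advantage, 0.0)
    total = plus.sum()
    if total <= 0.0:
        return np.full_like(plus, 1.0 / plus.size)
    return plus / total

adv = np.array([0.5, -0.2, 1.5, -1.0])   # toy advantage estimates for 4 actions
print(boltzmann_target(adv))             # every action keeps some probability
print(regret_matching_target(adv))       # mass only on actions with positive A+
```

Because the regret-matching target is proportional to A+ rather than to an exponential of the advantage, a policy-gradient-style update toward it depends on the logarithm of the cumulative clipped advantage, which is the distinction the authors draw from existing policy gradient updates.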
iclr_2018_rkEfPeZRb | Variance-based Gradient Compression for Efficient Distributed Deep Learning | Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradient for efficient communication, but they either suffer a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve very high compression ratio while maintaining the result model accuracy. We also analyze the efficiency using computation and communication cost models and provide the evidence that this method enables distributed deep learning for many scenarios with commodity environments. | workshop-papers | The reviewers find the gradient compression approach novel and interesting, but they find the empirical evaluation not fully satisfactory. Some aspects of the paper have improved with the feedback from the reviewers, but because of the domain of the paper, experimental evaluation is very important. I recommend improving the experiments by incorporating the reviewers' comments. | train | [
"B1O_32YeM",
"rkZd9y9xz",
"ByqfOWqlM",
"SkGeDz6Xf",
"B1aOTEKXG",
"Byiwed0fG",
"B1rp1uAGz",
"H1mSy_RMM",
"Hk1W1dRzf",
"BJkmTPAzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper proposes a variance-based gradient compression method to reduce the communication overhead of distributed deep learning. Experiments on real datasets are used for evaluation. \n\nThe idea to adopt approximated variances of gradients to reduce communication cost seems to be interesting. However, there also exist several major issues in the paper.\n\nFirstly, the authors propose to combine two components to reduce communication cost, one being variance-based gradient compression and the other being quantization and parameter encoding. But the contributions of these two components are not separately analyzed or empirically verified. \n\nSecondly, the experimental results are unconvincing. The accuracy of Momentum SGD for ‘Strom, \\tau=0.01’ on CIFAR-10 is only 10.6%. Obviously, the learning procedure is not convergent. It is highly possible that the authors do not choose a good hyper-parameter. Furthermore, the proposed method (not the hybrid) is not necessarily better than Strom except for the case of Momentum SGD on CIFAR-10. Please note that the case of Momentum SGD on CIFAR-10 may have a problematic experimental setting for Strom. In addition, it is weird that the experiment on ImageNet does not adopt the same setting as that on CIFAR-10 to evaluate both Adam and Momentum SGD. \n",
"The paper proposes a novel way of compressing gradient updates for distributed SGD, in order to speed up overall execution. While the technique is novel as far as I know (eq. (1) in particular), many details in the paper are poorly explained (I am unable to understand) and experimental results do not demonstrate that the problem targeted is actually alleviated.\n\nMore detailed remarks:\n1: Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins...\n4.1: Lemma 4.1 seems like you want B > 1, or clarify definition of V_B.\n4.2: This section is not fully comprehensible to me.\n- It seems you are confusingly overloading the term gradient and words derived (also in other parts or the paper). What is \"maximum value of gradients in a matrix\"? Make sure to use something else, when talking about individual elements of a vector (which is constructed as an average of gradients), etc.\n- Rounding: do you use deterministic or random rounding? Do you then again store the inaccuracy?\n- I don't understand definition of d. It seems you subtract logarithm of a gradient from a scalar.\n- In total, I really don't know what is the object that actually gets communicated, and consequently when you remark that this can be combined with QSGD and the more below it, I don't understand it. This section has to be thoroughly explained, perhaps with some illustrative examples.\n4.3: allgatherv remark: does that mean that this approach would not scale well to higher number of workers?\n4.4: Remarks about quantization and mantissa manipulation are not clear to me again, or what is the point in doing so. Possible because the problems above.\n5: I think this section is not too useful unless you can accompany it with actual efficient implementation and contrast the practical performance. \n6: Given that I don't understand how you compress the information being communicated, it is hard to believe the utility of the method. The objective was to speed up training time because communication is bottleneck. If you provide 12,000x compression, is it any more practically useful than providing 120x compression? What would be the difference in runtime? Such questions are never discussed. Further, if in the implementation you discuss masking mantissa, I have serious concern about whether the compression protocol is feasible to implement efficiently, without writing some extremely low-level code. I think the soundness of work addressing this particular problem is damaged if not implemented properly (compared to other kinds of works in current ML related research). Therefore I highly recommend including proper time comparison with a baseline in the future.\nFurther, I don't understand 2 things about the Tables. a) how do you combine the proposed method with Momentum in SGD? This is not discussed as far as I can see. b) What is \"QSGD, 2bit\" If I remember QSGD protocol correctly, there's no natural mapping of 2bit to its parameters.",
"The authors propose a new gradient compression method for efficient distributed training of neural networks. The authors propose a novel way of measuring ambiguity based on the variance of the gradients. In the experiment, the proposed method shows no or slight degradation of accuracy with big savings in communication cost. The proposed method can easily be combined with other existing method, i.e., Storm (2015), based on the absolute value of the gradient and shows further efficiency. \n\nThe paper is well written: clear and easy to understand. The proposed method is simple yet powerful. Particularly, I found it interesting to re-evaluate the variance with (virtually) increasing larger batch size. The performance shown in the experiments is also impressive. \n\nI found it would have also been interesting and helpful to define and show a new metric that incorporates both accuracy and compression rate into a single metric, e.g., how much accuracy is lost (or gained) per compression rate relatively to the baseline of no compression. With this metric, the comparison would be easier and more intuitive. \n",
"First of all, thank you for reading our response and giving us an additional comment. To support our arguments, we estimate actual speedup by variance-based compression using micro-benchmarks of communication over slow interconnection.\nFirst, we measured computation and communication time for training ResNet50 on 16 nodes using Infiniband. It took 302.72 ms for computation and 69.95 ms for communication for each iteration in average. We note that ResNet50 contains about 102 MB of parameters.\nNext, we measured communication time of allreduce without compression and that of allgatherv with compression. We used 16 t2.micro instances of AWS Its point-to-point bandwidth was about 100MB/s. We used OSU micro-benchmarks (http://mvapich.cse.ohio-state.edu/benchmarks/) for the measurements. Summary of the result is the following:\n--\nallreduce\ncompression | Avg elapsed time (ms)\n1 | 9,572.95\n--\nallgatherv\ncompression | Avg elapsed time (ms)\n10 | 3,440.70\n100 | 314.17\n1,000 | 30.09\n10,000 | 4.26\n--\nWith this result, we can see that communication takes longer time compared to usual computation time even with 100x compression. Thus, we can say that even with only 16 nodes, compression ratio over a hundred is desirable to achieve high scalability. In use cases with more nodes, communication will take longer and thousands of times of compression will help. We hope this addresses your concern.",
"After seeing your response, and reviews of other reviewers, my opinion is still that this is an interesting work, but more needs to be done to publish it.\n\nIn particular, you propose something that you show is an interesting thing to do, but you do not demonstrate that this is actually a useful thing to do. This is very important difference for the specific problem you try to address. Comments such as \"yes, we believe it can be practically useful\" are in my opinion deeply insufficient, and the belief should be explicitly captured in experimental results. This is what I would suggest to focus on in a revision.",
"We found that we used mistakenly smaller \\zeta for our algorithm than the value specified in our paper, and thus we reran experiments and updated experimental results.\nWe also found inconsistency of our setting for QSGD with its original paper, and we corrected the experimental results.",
"Thank you for your review. We are glad to hear that you found our algorithm interesting.\n\n> But the contributions of these two components are not separately analyzed or empirically verified. \nThank you for your comment. The main contribution is intended to be the variance-based gradient compression, with the quantization provided as a way to fit both values of gradient elements and its index in 32-bit while not rounding many elements to zero. We amended our paper with the following:\nSec 4.2’’’To allow for comparison with other compression methods, we propose a basic quantization process. …’’’\n\n> The accuracy of Momentum SGD for ‘Strom, \\tau=0.01’ on CIFAR-10 is only 10.6%. Obviously, the learning procedure is not convergent. It is highly possible that the authors do not choose a good hyper-parameter.\nThank you for your comment. We amended our paper with the following:\nSec.6.1’’’We note that we observed unstable behaviors with other thresholds around 0.01.’’’\nAppendix D’’’The code is available in examples of Chainer on GitHub.’’’\n\n> Furthermore, the proposed method (not the hybrid) is not necessarily better than Strom except for the case of Momentum SGD on CIFAR-10. Please note that the case of Momentum SGD on CIFAR-10 may have a problematic experimental setting for Strom.\nThank you for your comment. We amended our paper with the following:\nSec. 6.1’’’We also would like to mention the difficulty of hyperparameter tuning in Strom's method. … On the other hand, our algorithm is free from such problem. Moreover, when we know good threshold for Strom's algorithm, we can just combine ours to get further compression.’’’\n\n> In addition, it is weird that the experiment on ImageNet does not adopt the same setting as that on CIFAR-10 to evaluate both Adam and Momentum SGD.\nThank you for your comment. We amended our paper with the following:\nSec 6.2 ‘’’We also evaluated algorithms with replacing MomentumSGD and its learning rate scheduling to Adam with its default hyperparameter.’’’",
"> 5: I think this section is not too useful unless you can accompany it with actual efficient implementation and contrast the practical performance. \nYes, we would like to be able to do this comparison. We amended the paper to include this at the beginning of Section 5 -- Performance Analysis:\n‘’’Because common deep learning libraries do not currently support access to gradients of each sample, it is difficult to contrast practical performance of an efficient implementation in the commonly used software environment.In light of this, we estimate speedup of each iteration by gradient compression with a performance model of communication and computation.’’’\n\n> If you provide 12,000x compression, is it any more practically useful than providing 120x compression?\nYes, we believe it can be practically useful, depending on the underlying computation infrastructure. With existing compression methods, computation with a large number of nodes essentially requires high bandwidth connections like InfiniBand. Much higher levels of compression make it possible to consider large numbers of nodes even with commodity-level bandwidth connections. We amended our paper with the following:\nSec 6.1’’’The hybrid algorithm's compression ratio is several orders higher than existing compression methods with a low reduction in accuracy. This indicates the algorithm can make computation with a large number of nodes feasible on commodity level infrastructure that would have previously required high-end interconnections.’’’\nSec 6.2’’’In this example as well as the previous CIFAR10 example, Variance-based Gradient Compression shows a significantly higher compression ratio, with comparable accuracy. While in this case, Strom's method's accuracy was comparable with no compression, given the significant accuracy degradation with Strom's method on CIFAR10, it appears Variance-based Gradient Compression provides a more robust solution.’’’\n\n> Further, if in the implementation you discuss masking mantissa, I have serious concern about whether the compression protocol is feasible to implement efficiently, without writing some extremely low-level code. \nYes, low-level code would be required to use our method. It is also true for other existing methods.\n\n> Therefore I highly recommend including proper time comparison with a baseline in the future. Once parameter variance is provided within one of the standard calculation libraries of primitives for neural deep neural networks, this time comparison can be done.\n\n> a) how do you combine the proposed method with Momentum in SGD? This is not discussed as far as I can see. \nWe amended our paper with the following:\nSec. 4.1 ‘’’... In the combination with optimization methods like Momentum SGD, gradient elements not sent are assumed to be equal to zero.’’’\n\n> b) What is \"QSGD, 2bit\" If I remember QSGD protocol correctly, there's no natural mapping of 2bit to its parameters.\nThank you for your comment. We misunderstood meaning ‘bit” used in experiment section of original QSGD paper. We asked the authors at NIPS, and we reran experiments. We amended our paper as follows:\nSec 6.1’’’We used two's complement in implementation of QSGD and \"bit\" represents the number of bits used to represent each element of gradients. \"d\" represents a bucket size.’’’",
"Thanks for the review. We're glad to hear that you found our technique to be novel. We've amended our paper in light of your review. We hope this helps explain the details and demonstrate how our technique alleviates the problem of transmitting gradients between nodes.\n\nSection 1\n> Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins...\nThanks for the comment. We have amended our paper with the following:\n‘’’For example, it takes over a week to train ResNet-50 on the ImageNet dataset if using a single GPU. … For example, when using 1000BASE-T Ethernet, communication takes at least ten times longer than forward and backward computation for ResNet-50, making multiple nodes impractical. High performance interconnections such as InfiniBand and Omni-Path are an order of magnitude more expensive than commodity interconnections, which limits research and development of deep learning using large-scale datasets to a small number of researchers.’’’\n\n> 4.1: Lemma 4.1 seems like you want B > 1, or clarify definition of V_B.\nCorrect. We have amended our paper with the following:\n‘’’Lemma 4.1\n A sufficient condition that a vector -g is a descent direction is\n \\|g - \\nabla f(x)\\|_2^2 < \\|g\\|_2^2.\nWe are interested in the case of g = \\nabla f_B(x), the gradient vector of the loss function over B.\nBy the weak law of large numbers, when B > 1, the left-hand side with g = \\nabla f_B(x) can be estimated as follows.’’’\nNote, \\nabla_B f(x) in lemma 4.1 of our first paper was replaced with a symbol g.\n\n> 4.2: This section is not fully comprehensible to me.\n> - It seems you are confusingly overloading the term gradient and words derived (also in other parts or the paper). What is \"maximum value of gradients in a matrix\"? Make sure to use something else, when talking about individual elements of a vector (which is constructed as an average of gradients), etc.\nYou’re right. We amended the paper to replace gradient with ‘gradient element’ when we refer elements of gradient vectors.\n‘’’Our quantization except for the sign bit is as follows. For a weight matrix W_k (or a weight tensor in CNN), there is a group of gradient elements corresponding to the matrix. Let M_k be the maximum absolute value in the group.’’’\n\n> - Rounding: do you use deterministic or random rounding? Do you then again store the inaccuracy?\nGood questions. We amended our paper as follows:\nSec 4.2’’’... We do not adopt stochastic rounding like QSGD nor accumulate rounding error g_i - g'_i for the next batch because this simple rounding does not harm accuracy empirically.’’’\n\n> - I don't understand definition of d. It seems you subtract logarithm of a gradient from a scalar.\nd is a difference of two scalars.\n> In total, I really don't know what is the object that actually gets communicated, and consequently when you remark that this can be combined with QSGD and the more below it, I don't understand it. This section has to be thoroughly explained, perhaps with some illustrative examples.\nWe hope our clarification between ‘gradient’ and ‘gradient element’ made the definition of d clearer. We amended our paper as follows:\nSec 4.2‘’’... After deciding which gradient elements to send, each worker sends pairs of a value of a gradient element and its parameter index …’’’\nSec 4.2’’’... 
Because the variance-based sparsification method described in subsection 4.1 is orthogonal to the quantization shown above, we can reduce communication cost further using sparsity promoting quantization methods such as QSGD instead.’’’\n\n> 4.3: allgatherv remark: does that mean that this approach would not scale well to higher number of workers?\nIt does scale well. We amended our paper with the following:\nSec 4.3’’’Thanks to the high compression ratio possible with this algorithm in combination with other compression methods, even large numbers of workers can be supported.’’’\n\n> 4.4: Remarks about quantization and mantissa manipulation are not clear to me again, or what is the point in doing so. Possible because the problems above.\nTo make a point of the mantissa operations clear, we amended our paper with the following:\n‘’’The quantization of parameters described in subsection 4.2 can also be efficiently implemented with the standard binary floating point representation using only binary operations and integer arithmetic as follows.’’’",
"Thank you for your review and helpful suggestion.\nWe tried to make a new single metric, however, we are not sure how to combine accuracy and compression ratio as they are not directly comparable.\nTo make a comparison between methods more intuitive, we added scatter plots of accuracy and compression ratio in Appendix C."
] | [
6,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkEfPeZRb",
"iclr_2018_rkEfPeZRb",
"iclr_2018_rkEfPeZRb",
"B1aOTEKXG",
"H1mSy_RMM",
"iclr_2018_rkEfPeZRb",
"B1O_32YeM",
"Hk1W1dRzf",
"rkZd9y9xz",
"ByqfOWqlM"
] |
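The micro-benchmark comment in the thread above reports, for ResNet-50 on 16 nodes over roughly 100 MB/s links, a per-iteration compute time of 302.72 ms and allreduce/allgatherv times at several compression ratios. The short sketch below just redoes that arithmetic to show the implied per-iteration speedup; it assumes communication is not overlapped with computation, which is a simplification rather than a claim from the paper.

```python
# Back-of-the-envelope speedup estimate from the numbers quoted in the thread.
compute_ms = 302.72              # forward/backward time per iteration (measured)
comm_ms = {                      # measured communication time per iteration (ms)
    1: 9572.95,                  # no compression (allreduce)
    10: 3440.70,                 # allgatherv with 10x compression
    100: 314.17,
    1000: 30.09,
    10000: 4.26,
}

baseline = compute_ms + comm_ms[1]
for ratio, c in comm_ms.items():
    total = compute_ms + c       # assumes no compute/communication overlap
    print(f"{ratio:>6}x compression: {total:8.1f} ms/iter, ~{baseline / total:4.1f}x speedup")
```

Under these measurements, compression ratios in the hundreds already make communication a small fraction of iteration time, which is consistent with the authors' argument that ratios above about a hundred are desirable for good scaling even at 16 nodes.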
iclr_2018_SyVOjfbRb | LSH-SAMPLING BREAKS THE COMPUTATIONAL CHICKEN-AND-EGG LOOP IN ADAPTIVE STOCHASTIC GRADIENT ESTIMATION | Stochastic Gradient Descent or SGD is the most popular optimization algorithm for large-scale problems. SGD estimates the gradient by uniform sampling with sample size one. There have been several other works that suggest faster epoch wise convergence by using weighted non-uniform sampling for better gradient estimates. Unfortunately, the per-iteration cost of maintaining this adaptive distribution for gradient estimation is more than calculating the full gradient. As a result, the false impression of faster convergence in iterations leads to slower convergence in time, which we call a chicken-and-egg loop. In this paper, we break this barrier by providing the first demonstration of a sampling scheme, which leads to superior gradient estimation, while keeping the sampling cost per iteration similar to that of the uniform sampling. Such an algorithm is possible due to the sampling view of Locality Sensitive Hashing (LSH), which came to light recently. As a consequence of superior and fast estimation, we reduce the running time of all existing gradient descent algorithms. We demonstrate the benefits of our proposal on both SGD and AdaGrad. | workshop-papers | The reviewers think that the theoretical contribution is not significant on its own. The reviewers find the empirical aspect of the paper interesting, but more analysis of the empirical behavior is required, especially for large datasets. Even for small datasets with input augmentation (e.g. random crops in CIFAR-10) the pre-processing can become prohibitive. I recommend improving the manuscript for a re-submission to another venue and an ICLR workshop presentation. | train | [
"SJpaRgDNf",
"HyGpJF44z",
"HkmgURdlf",
"rkcg14qlz",
"SJl0YdfWM",
"ryd-Vb4Vf",
"SJBH-bE4z",
"SyEJNkVVf",
"BJBcbfAXG",
"HyQqaeRmM",
"HJZNvP2fG",
"S1Kj_sofM",
"ByTTs9ifM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for the discussions! \n\nWe will restress the subtleties and differenced of indexing, sub-linear similarity search, and the new line of sub-linear adaptive sampling and unbiased estimation in any future versions of the paper. \n\nLet us know if you think anything else will be helpful. ",
"Response to the following comments/questions:\n\n> >The new thing is sampling in sub-linear time that requires indexing, and simply random projections won't help. \n>>Any non-trivial similarity based adaptive sampling (using random projection or otherwise) is a linear cost without indexing (hash tables). Its the power of data structure combined with properties of random projections. The power of data structure is often missed with LSH and dimensionality reduction is thought to be the prime reason. \n>> Point out any earlier literature exploiting LSh for sub-linear adaptive sampling given a query? \n\n1. This \"indexing\" is simply bucketing by hash/randomized function. AND it is not new! It is the first very basic step of turning the randomized similarity functions into approx NN schemes. This component of LSH was also used for similarity applications for decades. For example, it is routinely used to generate graphs from massive metric data (kNN is too expensive). In this context think of a node as a \"query\" if you wish and you connect it to sampled nodes from LSH buckets. This creates a graph where closer points are much more likely to have an edge connected. \n\n2. In any case, this is not even claimed to be a contribution of this submission. Only that the presentation of this paper seems to attribute basic known methods to very recent work. \n\n\nThis is not the only issue to correct. You can make a nice paper, but the submission is not yet there.\n",
"Authors propose sampling stochastic gradients from a monotonic function proportional to gradient magnitudes by using LSH. I found the paper relatively creative and generally well-founded and well-argued.\n\nNice clear example with least squares linear regression, though a little hard to tell how generalizable the given ideas are to other loss functions/function classes, given the authors seem to be taking heavy advantage of the inner product. \n\nExperiments: appreciated the wall clock timings.\n\nSGD comparison: “fixed learning rate.” Didn't see how the initial (well constant here) step size was tuned? Why not use the more standard 1/t decay?\n\nFig 1: Suspicious CIFAR100 that test objective is so much better than train objective? Legend backwards?\n\nWhy were so many of the chosen datasets have so few training examples?\n\nPaper is mostly very clearly written, though a bit too redundant and some sentences are oddly ungrammatical as if a word is missing - just needs a careful read-through. \n",
" The main idea in the paper is fairly simple:\n\n The paper considers SGD over an objective of the form of a sum over examples of a quadratic loss.\nThe basic form of SGD selects an example uniformly. Instead, one can use any probability distribution over examples and apply inverse probability weighting to retain unbiasedness of the gradient.\n\n A good method (that builds on classic pps sampling) is to select examples with higher normed gradients with higher probability [Alain et al 2015].\n\n With quadratic loss, the gradient increases with the inner product of the parameter vector (concatenated with -1) and the example vector x_i (concatenated with the label y_i).\n\n For the current parameter vector \\theta, we would like to sample examples so that the probability of sampling larger inner products is larger.\n\n The paper uses LSH structures, computed over the set of examples,\n to quickly sample examples with large inner products with the current parameter vector \\theta. Essentially, two vectors are hashed to the same bucket with probability that increases with their cosine similarity.\n So we select examples in the same LSH bucket as \\theta (for rubstness, we use multiple LSH mappings).\n\n\nstrengths: simple idea that can work well in the context of sampling examples for SGD\n\nweaknesses: \n\n The novelty in the paper is limited. The use of LSH for sampling is a common technique to sample more similar vectors with higher probability. There are theorems, but they are trivial, straightforward applications of importance sampling. \n\n The paper is not well written. The presentation is much more complex that need be. References to classic weighted sampling are \n\n The application is limited to certain loss functions for which we can compute LSH structures. This excludes NN models and even the addition of regularization to the quadratic loss can affect the effectiveness.\n",
"The main contribution of this work is just a combination of LSH schemes and SGD updates. Since hashing schemes essentially reduce the dimension, LSH brings computational benefits to the SGD operation. The targeted issue is fundamentally important, and the proposed approach (exploiting LSH schemes) seems to be sound. Specifically, LSH schemes fit into the SGD schemes since they hash two vectors to the same bucket with probability in proportional to their distance (here, inner product or Cosine similarity).\n\nStrengths: a sound approach; a simple and straightforward idea that is shown to work well in evaluations.\n\nWeaknesses: \n1. The phrase of \"computational chicken-and-egg loop\" in the title and also in the main body is misleading and not accurate. The so-called \"chicken-and-egg” issue concerns the causality dilemma: two causally related things, which comes the first. In the paper, the authors concerned \"more accurate gradients\" and \"faster convergence\"; their causality is very clear (the first leads to the second), and there is no causality dilemma. Even from a computational perspective, \"SDG schemes aim for computational efficiency\" and \"stochastic makes the convergence slow down\" are not a causality dilemma. The reason behind is that the latter is the cost of the first one, just the old saying that \"there is no such thing as a free lunch\". Therefore, this disordered logic makes the title very misleading, and all the corresponding descriptions in the main body are obscured by \"twisted\" and unnatural logics. \n \n2. The depth is so limited. Besides a good observation that LSH fits well into SDG, there are no more in-depth results provided. The theorems (Theorems 1~3) are trivial, with loose relations with LSH.\n\t \n3. The LSH schemes are not correctly referred to. Since the similarity metric is inner-product, the authors are expected to refer to Cosine similarity and inner-product based LSHs, which were published recently in NIPS. It is not in depth to assume \"any known LSH scheme\" in Alg. 2. Accordingly again, Theorems 1~3 are unrelated with this specific kind of similarity metric (Cosine similarity).\n\n4. As the authors tried hard to stick to the unnecessary (a bit bragging) phrase \"computational chicken-and-egg loop\", the organization and presentation of the whole manuscript are poor.\n\n5. Occasionally, there are typos, and it is not good to use words in formulas. Please proof-read carefully.\n",
"LSH, that is, sampling schemes were more similar entities are more likely to be sampled together, are known for decades. E.g., based on random projections or on consistent samples. \n\n What you are doing is using LSH sampling schemes for exactly what they are... weighted sampling by similarity. \n>> Point out any earlier literature exploiting LSh for sub-linear adaptive sampling given a query? Unbiased estimation with LSH in sublinear time is not known before. \n\nThe new thing is sampling in sub-linear time that requires indexing, and simply random projections won't help. \n Random projections are good for estimation (not sub-linear in the number of examples) unless combined with quantizations and indexing. We can stress this part more if needed. It is easy to miss. It requires data structures. \n\nAny non-trivial similarity based adaptive sampling (using random projection or otherwise) is a linear cost without indexing (hash tables). Its the power of data structure combined with properties of random projections. The power of data structure is often missed with LSH and dimensionality reduction is thought to be the prime reason. \n\n\n\n I believe that if this is written well, explaining what is the contribution (this observation and experiments), careful evaluation, providing clarity to readers without much background, point on the limitations, present the simple idea for what it is, take credit only for what you contribute, write it in a way that provide value to readers, then it can make a very nice paper. \n>> We are happy to make any suggested changes, as we can clearly see that LSH is so widely popular that the important points can be easily lost. \nWe hope you see we are not claiming for more than what we are contributing. ",
"Ok, lets put things straight.\n\nLSH, that is, sampling schemes were more similar entities are more likely to be sampled together, are known for decades. E.g., based on random projections or on consistent samples. \n\n The \"big deal\" about the theory of LSH (2 decades) was a very general method of using these \"weak\" sampling schemes to construct approximate NN structures.\n\n What you are doing is using LSH sampling schemes for exactly what they are... weighted sampling by similarity. \n\n BTW, I am pretty confident that you do not want a NN here (largest gradient norm) even if you could get it for free. In particular, this can very badly bias the expectation of the gradient and you will lose theoretical convergence properties. My hunch is that it would also be very bad in practice. \n\n\n The novelty in your paper, which could make for a **very nice** application, is the simple observation that LSH can be applied to select examples that have larger gradients in the context of GD with quadratic loss (because then you have the LSH function (cosine similarity between the parameter of the model and the vectors).\n \n I believe that if this is written well, explaining what is the contribution (this observation and experiments), careful evaluation, providing clarity to readers without much background, point on the limitations, present the simple idea for what it is, take credit only for what you contribute, write it in a way that provide value to readers, then it can make a very nice paper. \n\n ",
"I forgot to mention that near neighbor queries are significantly slower than sampling. \n\nIn our experiments sampling requires only one memory lookup and random number generation\n\nOn the contrary, near-neighbor query (per update) require in theory to probe n^\\rho (grows with data) lookup, followed by bucket aggregation, followed by filtering using distance computations (again of the order n^\\rho). \n\nAlthough \\rho < 1 (sublinear) but sill compared to SGD (one random sample) this process (or any near neighbor query) is unlikely to lead to a faster in running time algorithm. \n\nThis is the reason; any neighbor based sampling approach is unlikely to beat SGD in running time. While ours can! (only one lookup, no costly candidate filtering)\n\nWe hope you see the critical subtlety with this new view of LSH. ",
"I do not believe I missed the unbiasedness. Note that \"importance weights\" (inverse probability weighting) is a 7 decades old technique to obtain unbiased estimators from unequal probability samples. When the probabilities are better \"correlated\" with the weights (similarity) the variance is better. \n\nThe unnormalised sampling (based on weights without knowing the sum) is also decades old. Say order sampling.\n>> Yes, however, any non-trivial (interesting) sampling is O(N) as simply computing any weight requires O(N) cost per iteration. LSH is the only way to get is constant amortized cost\n\nI believe that current submission novelty is really only in noting the potential SGD application. \n>> Isn't is a neat observation? We are really excited about this striking possibility. What in the world gives constant time adaptive sampling? Any form of adaptiveness is O(N), except a wierd mathematical form of 1 - (1-p^K)^L (unheard of) which admits contant amortized cost sampling and at the same time is adaptive. \n\n To put history in perspective. LSH schemes are essentially sampling scheme. There are many older techniques that simply perform similarity-based sampling and did not call it LSH. \n>> Computing similarities itself is O(N) to start as there are N data points.\n\n The beautiful theory of LSH from the last two decades was about relating the sampling schemes to approximate NN structures. \n>> LSH as sampling just came in 2016 not last decade. Until that time LSH was thought to be a fast subroutine for NN search and its potential as a sampler and unbiased estimator were not heard of. The beauty is that sampling can be amortized constant time, which was first shown in early 2016. We are not aware of any literature that uses LSH as samplers before that. \n\n\n A very convincing demonstration of the potential of that, with comparison to other methods, and proper presentation, could make a very nice paper. I am looking forward to see the next version.\n>> Thanks for the encouragement. Is there anything you have in mind, and we will compare it. We know that beating SGD on running time (with same resources) is hard, so it looks rather easy for us. \n\nWe hope you will support our paper. We are happy to do any additional comparisons you have in mind. \n\n\n",
">> Again you are missing the importance style weights. And the sampling is correlated and not normalized, so it is something never seen before\n\nI do not believe I missed the unbiasedness. Note that \"importance weights\" (inverse probability weighting) is a 7 decades old technique to obtain unbiased estimators from unequal probability samples. When the probabilities are better \"correlated\" with the weights (similarity) the variance is better. \n\nThe unnormalised sampling (based on weights without knowing the sum) is also decades old. Say order sampling.\n\n To put history in perspective. LSH schemes are essentially sampling scheme. There are many older techniques that simply perform similarity-based sampling and did not call it LSH. The beautiful theory of LSH from the last two decades was about relating the sampling schemes to approximate NN structures. What the very recent work does is using LSH sampling schemes, again, for sampling... That recent thread is very nice as it notes this in the context of some new applications, with some very nice analysis. \n\nI believe that current submission novelty is really only in noting the potential SGD application. A very convincing demonstration of the potential of that, with comparison to other methods, and proper presentation, could make a very nice paper. I am looking forward to see the next version.\n\n",
"Thanks for the encouraging comment. \nWe are happy to get your support and hope you will clarify the misconception of other reviewers in the subsequent discussions. \n\nTo avoid any bells and whistles, we show plain SGD as well as adagrad which adaptively chooses the step size based on the previous gradients estimates. We did not tune anything to nullify the effect of any tuning and ensure an apples-to-apples comparison. Better gradient estimate leads to improvements despite SGD or adagrad.\n\nInner product naturally goes for linear regression as well as logistic (exp^{inner product}). A natural next step is to look at popular loss function as well as existing LSH to see if there are other sweet spots. \n \nOther than CIFAR, we chose high dimensional regression datasets (not classification) from UCI. https://archive.ics.uci.edu/ml/datasets.html unfortunately, all high dimensional regressions datasets are small. Let us know if you have any suggestions on that. \n",
"Since hashing schemes essentially reduce the dimension, LSH brings computational benefits to the SGD operation\n>> NO .... Not at all. It has nothing to do with dimensionality reduction at all. It is about efficient sampling using hash tables. (Also see response to AnonReviewer1) \n\nWe are afraid that the reviewer is mistaken as to what the method it, despite this being mentioned at several placed very explicitly. We still try our best to respond to concerns. \n\n1) SGD reduces the costly iteration (O(1) per iteration) but increases the number of iterations. Any known adaptive scheme to reduce the number of iterations leads to very costly O(N) per iteration. We refer this inherent tradeoff as chicken and egg loop. If this is a big issue, we can easily change it? \n\n2) See response to AnonReviewer1. Missing the subtlety of the algorithm is easy. Simplicity that beats a fundamental barrier is rare and most exciting. \n\n3) The theorems are valid for any LSH irrespective of the choice of similarity, similar to why importance sampling is unbiased for any proposal. So we don't really see what the issue is. \n\n4) see 1\n\n5) We will proofread the paper. Thanks for pointing out. \n\nWe hope that our comments will change the opinion of the reviewer. We are happy to have any more suggestions. \nThanks for the time in providing feedback. ",
" The novelty in the paper is limited. The use of LSH for sampling is a common technique to sample more similar vectors with higher probability. There are theorems, but they are trivial, straightforward applications of importance sampling. \n>> LSH as sampling was first used very recently (early 2016).\nNote the importance weighting factor in the algorithm of 1 - (1-p^K)^L. It is about the unbiased estimation of gradients rather than a simple heuristic. \n\nWe challenge the reviewer to show one paper which shows the use of LSH as sampling for unbiased estimation of the gradient in SGD. \n\nSimplicity is not bad, especially when it beats a fundamental barrier. \n\n**************\n\nThe paper uses LSH structures, computed over the set of examples,\n to quickly sample examples with large inner products with the current parameter vector \\theta. Essentially, two vectors are hashed to the same bucket with probability that increases with their cosine similarity.\n So we select examples in the same LSH bucket as \\theta (for robustness, we use multiple LSH mappings).\n>> Not really, the process is about unbiased estimation (mentioned in the paper at several places). Again you are missing the importance style weights. And the sampling is correlated and not normalized, so it is something never seen before. Due to the simplicity of our proposal, it might be easy to overlook the subtlety of the methods. \n\nWe reiterate, this not yet another heuristic here. For the first time, we see some hope of beating SGD in running time using better estimator, and this does not happen often. \n\nWe hope these comments will lead to a healthy discussion and correction of any misconceptions on either side :) \n\nThanks for taking time in trying to improve our paper. \n\n\n\n"
] | [
-1,
-1,
8,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyGpJF44z",
"ryd-Vb4Vf",
"iclr_2018_SyVOjfbRb",
"iclr_2018_SyVOjfbRb",
"iclr_2018_SyVOjfbRb",
"SJBH-bE4z",
"SyEJNkVVf",
"HyQqaeRmM",
"HyQqaeRmM",
"ByTTs9ifM",
"HkmgURdlf",
"SJl0YdfWM",
"rkcg14qlz"
] |
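Much of the LSH-sampling discussion above turns on the retrieval probability 1 - (1 - p^K)^L for K hash bits per table and L tables, and on the inverse-probability weight that keeps the gradient estimate unbiased. The sketch below computes those two quantities for signed random projections (cosine LSH); the choices K=6, L=10 and the toy vectors are illustrative, not the paper's settings.

```python
# Retrieval probability and importance weight for LSH-based gradient sampling.
import numpy as np

def simhash_collision_prob(theta, x):
    """Per-bit collision probability under signed random projections:
    p = 1 - angle(theta, x) / pi, which grows with cosine similarity."""
    cos = np.dot(theta, x) / (np.linalg.norm(theta) * np.linalg.norm(x) + 1e-12)
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def retrieval_prob(p, K, L):
    """Probability that an example shares a bucket with the query in at least
    one of L tables, each keyed by K concatenated bits: 1 - (1 - p**K)**L."""
    return 1.0 - (1.0 - p ** K) ** L

rng = np.random.default_rng(0)
theta = rng.standard_normal(50)                  # toy "query" (current parameters)
x_close = theta + 0.1 * rng.standard_normal(50)  # example well aligned with theta
x_far = rng.standard_normal(50)                  # roughly orthogonal example

for name, x in [("close", x_close), ("far", x_far)]:
    p = simhash_collision_prob(theta, x)
    q = retrieval_prob(p, K=6, L=10)
    # Dividing a sampled gradient by q is the importance correction credited in
    # the thread with keeping the overall gradient estimate unbiased.
    print(f"{name}: per-bit p = {p:.3f}, retrieval prob = {q:.3f}, weight = {1.0 / q:.1f}")
```

Examples better aligned with the current parameter vector are retrieved with higher probability and get smaller weights, which is the similarity-biased yet unbiased sampling the authors describe.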
iclr_2018_BJjquybCW | The loss surface and expressivity of deep convolutional neural networks | We analyze the expressiveness and loss surface of practical deep convolutional neural networks (CNNs) with shared weights and max pooling layers. We show that such CNNs produce linearly independent features at a “wide” layer which has more neurons than the number of training samples. This condition holds e.g. for the VGG network. Furthermore, we provide for such wide CNNs necessary and sufficient conditions for global minima with zero training error. For the case where the wide layer is followed by a fully connected layer we show that almost every critical point of the empirical loss is a global minimum with zero training error. Our analysis suggests that both depth and width are very important in deep learning. While depth brings more representational power and allows the network to learn high level features, width smoothes the optimization landscape of the loss function in the sense that a sufficiently wide network has a well-behaved loss surface with almost no bad local minima. | workshop-papers | Dear authors, While I appreciate the result that a convolutional layer can have full rank output, thus allowing a dataset to be classified perfectly under mild conditions, the fact that all reviewers expressed concern about the statement is an indication that the presentation still needs quite a bit of work. Thus, I recommend it as an ICLR workshop paper. | train | [
"BkIW6fYxz",
"rkvS6-9gG",
"S136E0hZf",
"HJ0nIZkfM",
"rJoC5PTQM",
"ByYsQbYGz",
"SyA2UxD-z",
"SJsVLlv-f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper presents several theoretical results on the loss functions of CNNs and fully-connected neural networks. I summarize the results as follows:\n\n(1) Under certain assumptions, if the network contains a \"wide“ hidden layer, such that the layer width is larger than the number of training examples, then (with random weights) this layer almost surely extracts linearly independent features for the training examples.\n\n(2) If the wide layer is at the top of all hidden layers, then the neural network can perfectly fit the training data.\n\n(3) Under similar assumptions and within a restricted parameter set S_k, all critical points are the global minimum. These solutions achieve zero squared-loss.\n\nI would consider result (1) as the main result of this paper, because (2) is a direct consequence of (1). Intuitively, (1) is an easy result. Under the assumptions of Theorem 3.5, it is clear that any tiny random perturbation on the weights will make the output linearly independent. The result will be more interesting if the authors can show that the smallest eigenvalue of the output matrix is relatively large, or at least not exponentially small.\n\nResult (3) has severe limitations, because: (a) there can be infinitely many critical point not in S_k that are spurious local minima; (b) Even though these spurious local minima have zero Lebesgue measure, the union of their basins of attraction can have substantial Lebesgue measure; (c) inside S_k, Theorem 4.4 doesn't exclude the solutions with exponentially small gradients, but whose loss function values are bounded away above zero. If an optimization algorithm falls onto these solutions, it will be hard to escape.\n\nOverall, the paper presents several incremental improvement over existing theories. However, the novelty and the technical contribution are not sufficient for securing an acceptance.\n\n",
"This paper analyzes the expressiveness and loss surface of deep CNN. I think the paper is clearly written, and has some interesting insights.",
"This paper analyzes the loss function and properties of CNNs with one \"wide\" layer, i.e., a layer with number of neurons greater than the train sample size. Under this and some additional technique conditions, the paper shows that this layer can extract linearly independent features and all critical points are local minimums. I like the presentation and writing of this paper. However, I find it uneasy to fully evaluate the merit of this paper, mainly because the \"wide\"-layer assumption seems somewhat artificial and makes the corresponding results somewhat expected. The mathematical intuition is that the severe overfitting induced by the wide layer essentially lifts the loss surface to be extremely flat so training to zero/small error becomes easy. This is not surprising. It would be interesting to make the results more quantitive, e.g., to quantify the tradeoff between having local minimums and having nonzero training error. ",
"This paper presents an analysis of convolutional neural networks from the perspective of how the rank of the features is affected by the kinds of layers found in the most popular networks. Their analysis leads to the formulation of a certain theorem about the global minima with respect to parameters in the latter portion of the network.\n\nThe authors ask important questions, but I am not sure that they obtain important answers. On the plus side, I'm glad that people are trying to further our understanding our neural networks, and I think that their investigation is worthy of being published.\n\nThey present a collection of assumptions, lemmas, and theorems. They have no choice but to have assumptions, because they want to abstract away the \"data\" part of the analysis while still being able to use certain properties about the rank of the features at certain layers.\n\nMost of my doubts about this paper come from the feeling that equivalent results could be obtained with a more elegant argument about perturbation theory, instead of something like the proof of Lemma A1. That being said, it's easy to voice such concerns, and I'm willing to believe that there might not exist a simple way to derive the same results with an approach more along the line of \"whatever your data, pick whatever small epsilon, and you can always have the desired properties by perturbing your data by that small epsilon in a random direction\". Have the authors tried this ?\n\nI'm not sure if the authors were the first to present this approach of analyzing the effects of convolutions from a \"patch perspective\", but I think this is a clever approach. It simplifies the statement of some of their results. I also like the idea of factoring the argument along the concept of some critical \"wide layer\".\n\nGood review of the literature.\n\nI wished the paper was easier to read. Some of the concepts could have been illustrated to give the reader some way to visualize the intuitive notions. For example, maybe it would have been interesting to plot the rank of features a every layer for LeNet+MNIST ?\n\nAt the end of the day, if a friend asked me to summarize the paper, I would tell them :\n\n\"Features are basically full rank. Then they use a square loss and end up with an over-parametrized system, so they can achieve loss zero (i.e. global minimum) with a multitude of parameters values.\"\n\n\nNitpicking :\n\n\"This paper is one of the first ones, which studies CNNs.\"\nThis sentence is strange to read, but I can understand what the authors mean.\n\n\"This is true even if the bottom layers (from input to the wide layer) and chosen randomly with probability one.\"\nThere's a certain meaning to \"with probability one\" when it comes to measure theory. The authors are using it correctly in the rest of the paper, but in this sentence I think they simply mean that something holds if \"all\" the bottom layers have random features.",
"We thank reviewer 4 for the detailed comments.\n\n\"They present a collection of assumptions, lemmas, and theorems. They have no choice but to have assumptions, because they want to abstract away the \"data\" part of the analysis while still being able to use certain properties about the rank of the features at certain layers.\"\n\nYes, the reviewer is right, we did not want to make assumptions on the distribution of the training data\nas these assumptions are very difficult to check. Instead our assumptions can all be easily checked for a given training set and CNN architecture.\n\n\"Most of my doubts about this paper come from the feeling that equivalent results could be obtained with a more elegant argument about perturbation theory, instead of something like the proof of Lemma A1. That being said, it's easy to voice such concerns, and I'm willing to believe that there might not exist a simple way to derive the same results with an approach more along the line of \"whatever your data, pick whatever small epsilon, and you can always have the desired properties by perturbing your data by that small epsilon in a random direction\". Have the authors tried this ?\"\n\nWe don't know but we can prove Lemma A1 for any given dataset (fulfilling the stated assumptions). However, we use a perturbation argument to show that our assumptions on the training data are always fulfilled for an arbitrarily small perturbation of the data (similar to what the reviewer suggests).\n\n\"I'm not sure if the authors were the first to present this approach of analyzing the effects of convolutions from a \"patch perspective\", but I think this is a clever approach. It simplifies the statement of some of their results. I also like the idea of factoring the argument along the concept of some critical \"wide layer\".\n\nGood review of the literature.\"\n\nUp to the best of our knowledge we have not seen that this patch argument has been used before. It is a very convenient tool to analyze even much more general CNN architectures than the ones currently used.\n\n\"I wished the paper was easier to read. Some of the concepts could have been illustrated to give the reader some way to visualize the intuitive notions. For example, maybe it would have been interesting to plot the rank of features a every layer for LeNet+MNIST ?\"\n\nWe would be very grateful for pointers where we could improve the readability of the paper. We have added a plot for the architecture of Figure 1, where we vary the number of filters T_1 and plot the rank of the feature at the first convolutional layer. As shown by Theorem 3.5 we get full rank for T_1>=89 which implies n_1>=N for the first convolutional layer. In this case the rank of F_1 is 60000 and training error is zero and the loss is minimized almost up to single precision. We think that this illustrates nicely the result of Theorem 3.5\n\n\" \"This paper is one of the first ones, which studies CNNs.\"\nThis sentence is strange to read, but I can understand what the authors mean.\"\n\nWe agree: please check the new uploaded version, where we have changed it to:\nThis paper is one of the first ones, which theoretically analyzes deep CNNs\n\n\"\"This is true even if the bottom layers (from input to the wide layer) and chosen randomly with probability one.\"\nThere's a certain meaning to \"with probability one\" when it comes to measure theory. 
The authors are using it correctly in the rest of the paper, but in this sentence I think they simply mean that something holds if \"all\" the bottom layers have random features.\"\n\nWe agree that this can be misunderstood. What we prove is that it holds for almost any weight configuration for the layers from input to the wide layer with respect to the Lebesgue measure (up to a set of measure zero). As in practice the weights are often initialized using e.g. a Gaussian distribution, we wanted to highlight that our result holds with probability 1. In order to clarify this we have added a footnote to \"are chosen randomly\" (\"with respect to any probability measure which has a density with respect to the Lebesgue measure\"). Thus it holds for any probability measure on the weight space which has a density function. We have changed the uploaded manuscript in that way.\n\n\n",
"\"I like the presentation and writing of this paper. However, I find it uneasy to fully evaluate the merit of this paper, mainly because the \"wide\"-layer assumption seems somewhat artificial and makes the corresponding results somewhat expected.\"\n\nPlease note Table 1, where we have listed several state-of-the-art CNN networks, which have such a wide layer (more hidden units than the number of training points) in the case of ImageNet. These are VGG, Inception V3 and Inception V4. Thus we don't see why this wide layer assumption is \"artificial\" if CNNs which had large practical success fulfill this condition.\n\n\"The mathematical intuition is that the severe overfitting induced by the wide layer essentially lifts the loss surface to be extremely flat so training to zero/small error becomes easy. This is not surprising.\"\n\nWe think that our finding that practical CNNs such as VGG/Inception produce linearly independent features at the wide layer for ImageNet for almost any weight configuration up to the wide layer is an interesting finding which fosters the understanding of these CNNs. While the fact that whether the result is surprising or not is rather a matter of personal taste, what we find more relevant and important is if this result can help to advance the theoretical understanding of practical networks using rigorous math, which it does.\n\n\"It would be interesting to make the results more quantitive, e.g., to quantify the tradeoff between having local minimums and having nonzero training error.\"\n\nSuch results are currently only available for coarse approximations of neural networks where it is not clear how and if they apply to neural networks used in practice. Meanwhile, our results hold exactly for the architectures used in practice.",
"Thanks a lot for your reviews. We are happy to answer any additional questions you might have regarding our work.",
"We do not agree with the assessment of novelty and contribution of reviewer 3. Up to our knowledge only the paper of Cohen and Shashua (ICML 2016) analyzes general CNN architectures. As CNN architectures are obviously very important in practice, we think that a better theoretical understanding is urgently needed. Our paper contains two main results. First we show that CNNs used in practice produce linearly independent features (for ImageNet with VGG or Inception architecture) with probability 1 (Theorem 3.5) at the wide layer (first layer in VGG and Inception). We think that this is a very helpful result to understand how and why current CNNs work also with respect to the recent debate around generalization properties of state of the art networks (Zhang et al, 2017). Second, we give necessary and sufficient conditions for global optima under squared loss (Theorem 4.4) and show that all critical points in S_k are globally optimal under the conditions of Theorem 4.5. We think that this is a significant contribution to the theoretical understanding of CNN architectures. In particular, we would like to emphasize that all our results are applicable to the real problem of interest without any simplifying assumptions.\n\nWe agree in general with the reviewer that it might be nice to have even stronger results e.g. convergence of gradient descent/SGD to the global optimum. But given that the current state of the art in this regard is limited to one hidden layer together with additional distributional assumptions and does not cover deep CNNs used in practice (multiple filters, overlapping patches, deep architecture), we think that the reviewer demands too much. Even papers which consider just deep linear models have been appreciated in the community and get very good reviews at ICLR 2018.\n\nSpecific answers:\n\"Intuitively, (1) is an easy result. Under the assumptions of Theorem 3.5, it is clear that any tiny random perturbation on the weights will make the output linearly independent.\"\n\nThere are a lot of mathematical results which are intuitive but that does not mean that they are easy to prove.\n\n\"The result will be more interesting if the authors can show that the smallest eigenvalue of the output matrix is relatively large, or at least not exponentially small.\"\n\nWe agree that this result would be interesting, but one has to start somewhere (see general comment above).\n\n\"Result (3) has severe limitations, because: (a) there can be infinitely many critical point not in S_k that are spurious local minima; (b) Even though these spurious local minima have zero Lebesgue measure, the union of their basins of attraction can have substantial Lebesgue measure; (c) inside S_k, Theorem 4.4 doesn't exclude the solutions with exponentially small gradients, but whose loss function values are bounded away above zero. If an optimization algorithm falls onto these solutions, it will be hard to escape.\"\n\n(a) Yes, but then these critical points not in S_k (the complement of S_k has measure zero) must have either low rank weight matrices in the layers above the wide layer or the features are not linearly independent at the wide layer. We don't see any reason in the properties of the loss which would enforce low rank in the weight matrices of a CNN. 
Moreover, it seems unlikely that a critical point with a low rank matrix is a suboptimal local minimum as this would imply that all possible full rank perturbations have larger/equal objective (we don't care if the complement of S_k potentially contains additional global minima). Even for simpler models like two layer linear networks, it has been shown by (Baldi and Hornik, 1989) that all the critical points with low rank weight matrices have to be saddle points and thus cannot be suboptimal local minima. See also other parallel submissions at ICLR 2018 for similar results and indications for deep linear models (e.g. Theorem 2.1, 2.2 in https://openreview.net/pdf?id=BJk7Gf-CZ, and Theorem 5 in https://openreview.net/pdf?id=ByxLBMZCb).\nMoreover, a similar argument applies to the case where one has critical point such that the features are not linearly independent at the wide layer. As any neighborhood of such a critical point contains points which have linearly independent features at the wide layer (and thus it is easy to achieve zero loss), it is again unlikely that this critical point is a suboptimal local minimum.\nIn summary, if there are any critical points in the complement of S_k, then it is very unlikely that these are suboptimal local minima but they are rather also global minima, saddle points or local maxima.\n\n(b/c) We agree that these are certainly interesting questions but the same comment applies as above. Moreover, we see no reason why critical points with low rank weight matrices should be attractors.\n"
] | [
4,
7,
5,
6,
-1,
-1,
-1,
-1
] | [
4,
2,
2,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJjquybCW",
"iclr_2018_BJjquybCW",
"iclr_2018_BJjquybCW",
"iclr_2018_BJjquybCW",
"HJ0nIZkfM",
"S136E0hZf",
"rkvS6-9gG",
"BkIW6fYxz"
] |
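The reviews and responses above argue informally that a wide layer producing linearly independent features makes zero squared loss attainable by the layers above, which is the step behind the global-optimality claims being debated. A minimal worked sketch of that step is given below in generic notation; the symbols F_k, W, Y and the least-norm formula are illustrative assumptions that only loosely follow the paper's statement.

```latex
% Sketch only: linearly independent wide-layer features imply that zero
% squared loss is attainable. Notation is generic, not the paper's theorem.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $F_k \in \mathbb{R}^{N \times n_k}$ stack the wide-layer features of the $N$
training points, with $n_k \ge N$ and $\operatorname{rank}(F_k) = N$, and let
$Y \in \mathbb{R}^{N \times m}$ be the targets. For a linear map
$W \in \mathbb{R}^{n_k \times m}$ applied above the wide layer, the squared loss
\[
L(W) = \tfrac{1}{2}\,\lVert F_k W - Y \rVert_F^2
\]
attains its global minimum value $0$, e.g.\ at
\[
W^\star = F_k^{\top}\bigl(F_k F_k^{\top}\bigr)^{-1} Y ,
\]
because $F_k F_k^{\top}$ is invertible whenever $\operatorname{rank}(F_k) = N$,
so $F_k W^\star = Y$ exactly.
\end{document}
```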
iclr_2018_SyfiiMZA- | Jointly Learning to Construct and Control Agents using Deep Reinforcement Learning | The physical design of a robot and the policy that controls its motion are inherently coupled. However, existing approaches largely ignore this coupling, instead choosing to alternate between separate design and control phases, which requires expert intuition throughout and risks convergence to suboptimal designs. In this work, we propose a method that jointly optimizes over the physical design of a robot and the corresponding control policy in a model-free fashion, without any need for expert supervision. Given an arbitrary robot morphology, our method maintains a distribution over the design parameters and uses reinforcement learning to train a neural network controller. Throughout training, we refine the robot distribution to maximize the expected reward. This results in an assignment to the robot parameters and neural network policy that are jointly optimal. We evaluate our approach in the context of legged locomotion, and demonstrate that it discovers novel robot designs and walking gaits for several different morphologies, achieving performance comparable to or better than that of hand-crafted designs. | workshop-papers | The chief contribution of this paper is to show that a single set of policy parameters can be optimized in an alternating fashion while the design parameters of the body are also optimized with policy gradients and sampled. The fact that this simple approach seems to work is interesting and worthy of note. However, the paper is otherwise quite limited - other methods are not considered or compared, incomplete experimental results are given, and important limitations of the method are not addressed. As it is an interesting but preliminary work, the workshop track would be appropriate. | train | [
"SksyD3Dgz",
"ByrfSMcgz",
"rJweW2Sbf",
"Sydj1V_Mz",
"Hk8rkN_Mz",
"HkyWy4ufz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This is a well written paper, very nice work.\nIt makes progress on the problem of co-optimization of the physical parameters of a design\nand its control system. While it is not the first to explore this kind of direction,\nthe method is efficient for what it does; it shows that at least for some systems, \nthe physical parameters can be optimized without optimizing the controller for each \nindividual configuration. Instead, they require that the same controller works over an evolving\ndistribution of the agents. This is a simple-but-solid insight that makes it possible\nto make real progress on a difficult problem.\n\nPros: simple idea with impact; the problem being tackled is a difficult one\nCons: not many; real systems have constraints between physical dimensions and the forces/torques they can exert\n Some additional related work to consider citing. The resulting solutions are not necessarily natural configurations, \n given the use of torques instead of musculotendon-modeling. But the current system is a great start.\n\nThe introduction could also promote that over an evolutionary time-frame, the body and\ncontrol system (reflexes, muscle capabilities, etc.) presumably co-evolved.\n\nThe following papers all optimize over both the motion control and the physical configuration of the agents.\nThey all use derivative free optimization, and thus do not require detailed supervision or precise models\nof the dynamics.\n\n- Geijtenbeek, T., van de Panne, M., & van der Stappen, A. F. (2013). Flexible muscle-based locomotion\n for bipedal creatures. ACM Transactions on Graphics (TOG), 32(6), 206.\n (muscle routing parameters, including insertion and attachment points) are optimized along with the control).\n\n- Sims, K. (1994, July). Evolving virtual creatures. In Proceedings of the 21st annual conference on\n Computer graphics and interactive techniques (pp. 15-22). ACM.\n (a combination of morphology, and control are co-optimized)\n\n- Agrawal, S., Shen, S., & van de Panne, M. (2014). Diverse Motions and Character Shapes for Simulated\n Skills. IEEE transactions on visualization and computer graphics, 20(10), 1345-1355.\n (diversity in control and diversity in body morphology are explored for fixed tasks)\n\nre: heavier feet requiring stronger ankles\nThis commment is worth revisiting. Stronger ankles are more generally correlated with \na heavier body rather than heavy feet, given that a key role of the ankle is to be able\nto provide a \"push\" to the body at the end of a stride, and perhaps less for \"lifting the foot\".\n\nI am surprised that the optimization does not converge to more degenerate solutions\ngiven that the capability to generate forces and torques is independent of the actual\nlink masses, whereas in nature, larger muscles (and therefore larger masses) would correlate\nwith the ability to generate larger forces and torques. The work of Sims takes these kinds of \nconstraints loosely into account (see end of sec 3.3).\n\nIt would be interesting to compare to a baseline where the control systems are allowed to adapt to the individual design parameters.\n\nI suspect that the reward function that penalizes torques in a uniform fashion across all joints would\nfavor body configurations that more evenly distribute the motion effort across all joints, in an effort\nto avoid large torques. \n\nAre the four mixture components over the robot parameters updated independently of each other\nwhen the parameter-exploring policy gradients updates are applied? 
It would be interesting\nto know a bit more about how the mean and variances of these modes behave over time during\nthe optimization, i.e., do multiple modes end up converging to the same mean? What does the\nevolution of the variances look like for the various modes?\n",
"I'm glad to see the concept of jointly learning to control and evolve pop up again!\n\nUnfortunately, this paper has a number of weak points that - I believe - make it unfit for publication in its current state.\nMain weak points:\n- No comparisons to other methods (e.g. switch between policy optimization for the controller and CMA-ES for the mechanical parameters). The basic result of the paper is that allowing PPO to optimize more parameters, achieves better results...\n- One can argue that this is not true joint optimization Mechanical and control parameters are still treated differently. This begs the question: How should one define mechanical \"variables\" in order for them to behave similarly to other optimization variables (assuming that mechanical and control parameters influence the performance in a similar way)?\n\nAdditional relevant papers (slightly different approach):\nhttp://www.pnas.org/content/108/4/1234.full#sec-1\nhttp://ai2-s2-pdfs.s3.amazonaws.com/ad27/0104325010f54d1765fdced3af925ecbfeda.pdf\n\nMinor issues:\nFigure 1: please add labels/captions\nFigure 2: please label the axes\n",
"The paper presents a model-free strategy for jointly optimizing robot design and a neural network-based controller. While it is well-written and covers quite a lot of related work, I have a few comments with regards to the algorithm and experiments.\n\n- The algorithm boils down to an alternating policy gradient optimization of design and policy parameters, with policy parameters shared between all designs. This requires the policy to have to generalize across the current design distribution. How well the policy generalizes is then in turn fed back into the design parameter distribution, favoring those designs it could improve on the quickest. However, these designs are not guaranteed to be optimal in the long run, with further specialization. The results for the Walker2d might be hinting at this. A comparison between a completely shared policy vs. a specialized policy per design, possibly aided by a meta-learning technique to speed up the specialization, would greatly benefit the paper and motivate the use of a shared policy more quantitatively. If the condition of a common state/action space (morphology) is relaxed, then the assumption of smoothness in design space is definitely not guaranteed.\n- Related to that, it would be interesting to see a visualization of the design space distribution. Is the GMM actually multimodal within a single run (which implies the policy is able to generalize across significantly different designs)? \n- There are a separate number of optimization steps for the design and policy parameters within each iteration of the training loop, however the numbers used for the experiments are not listed. It would be interesting to see what the influence of the ratio of these steps is, as well as to know how many design iterations were taken in order to get to those in Fig. 4. This is especially relevant if this technique is to be used with real physical systems. One could argue that, although not directly used for optimization or planning, the physics simulator acts a cheap dynamics model to test new designs.\n- I wonder how robust and/or general the optimized designs are with respect to the task settings. Do small changes in the task or reward structure (i.e. friction or reward coefficients) result in wildly different designs? In practice, good robot designs are also robust and flexible and it would be great to get an idea how locally optimal the found designs are.\n\nIn summary, while the paper presents a simple but possibly effective and very general co-optimization procedure, the experiments and discussion don't definitively illustrate this.\n\nMinor remarks:\n- Sec. 1, final paragraph: \"To do the best of our knowledge\"\n- Sec. 2, 3rd paragraph: \"contract constraints\"\n- Sec. 4.1: a convolutional neural network consisting only of fully-connected layers can hardly be called convolutional\n- Fig. 3: ±20% difference to the baseline for the Walker2d is borderline of what I would call comparable, but it seems like some of these runs have not converged yet so that difference might still decrease.\n- Fig. 3: Please add x & y labels.",
"We thank the reviewer for the valuable feedback, and respond to specific concerns below.. \n\n\nRE: Local optima / shared policy: The reviewer is right in that the optimization may (and indeed likely does) converge to a local optima. But that is the fundamental challenge of what our method is trying to achieve: a joint search over design and policy space will have to involve, in all but the simplest cases, optimizing a complex, non-linear, and non-convex objective, at which point it is hard to guarantee convergence to a global optimum. (Indeed, even optimizing the control policy with a fixed design would not have such a guarantee). A major contribution of our work is in developing an optimization strategy that is able to find good solutions, if not globally optimal ones, with reasonable consistency, and we believe it constitutes an important step towards developing more efficient and successful optimization techniques for design+control problems.\n\nThe shared policy actually ends up being critical to this effort. Firstly, we don't have any other option since it would simply be computationally infeasible to train a separate policy for every candidate design (which is also why we are unable to compare to such an approach---although we do compare to a policy learned with a fixed hand-crafted design as the baseline). However, we partially mitigate this by providing design parameters as input to the controller, allowing it to adapt its policy based on the specific design instance it is controlling. At the same time, having a common controller ensures that it is able to transfer knowledge of successful gaits and successful strategies between similar designs, and does not have to start training from scratch. This again is key in allowing optimization to succeed.\n\nWe will update the paper to clarify this and expand the discussion of the motivation behind our design choices to provide the reader with greater intuition regarding the underlying optimization problem. We also agree that meta-learning would be an interesting approach to pursue in future work as a means to improve the efficiency of specialization.\n\nRE: Design space distribution: We find that the optimization process actually maintains a fairly high-variance multi-modal distribution over design choices till about a third of the way into training, before beginning to commit to a specific design. In most of the first 100M iterations, multiple components remain active, and the marginal variance of each physical parameter also remains high (indeed, for some parameters like foot length and radius, the variance actually increases first before beginning to converge). This exploration of the design space is in fact critical to successful optimization: we had initially attempted to use only a single Gaussian (i.e., just one component), which lead to greedy convergence to poor local optima. We will update the paper to discuss this phenomenon, as well as visualize the evolution of the design parameter distribution.\n\nRE: Influence of optimization steps: We experimented with different alternation ratios between policy and design update iterations, which led to different speeds and qualities of convergence. We found that alternating too quickly results in the policy network not adapting fast enough to the changes in design parameters. If we alternated too slowly, we found that the algorithm takes a long time to converge or converges to poor local optima. We will include this discussion in the paper. 
All results in Figure 4 are reported after 300 million timesteps, which is roughly 5000 design iterations. \n\nRE: Robustness and generalizability: Based on the reviewer's comments, we conducted experiments on the hopper in which we fine-tuned the controller in environments with varying levels of friction, while keeping the learned design fixed. We found that the learned design was reasonably robust, and showed similar variability in performance compared to doing the same for the hand-crafted hopper---and the learned design outperformed the hand-crafted one across the full range of friction values (although, for very low friction values, both designs essentially were unable to learn a successful gait).\n\nNote that our framework can incorporate the goal of generalization by simply sampling from a diverse set of environment values during training. But at the same time, in some applications it may be useful to seek out solutions that are specifically adapted to a relatively narrower set of environment parameters, gaining better performance within this set at the cost of more general performance.",
"We are gratified by the reviewer's comments on the contributions of the paper, and thank you for the valuable feedback. Please find our responses below.\n\nRE: Relevant papers: Thank you for suggesting these papers. These are indeed relevant, and we will discuss them in the updated version.\n\nRE: Non-degenerate solutions: Without any constraints on the design space, we found that our method may converge to degenerate solutions. For example, without placing a lower bound on the length of each limb, the method exploits imperfections in the physics engine to learn a design and control strategy that achieve high reward, but are not realizable. We dealt with this by placing loose upper and lower bounds on the design parameters. We believe that similar application / manufacturing-specific constraints and costs---such as the correlation between actuator power and mass---can be easily incorporated into our framework.\n\nRE: Controller adaptation baseline: Unfortunately, it would be too computationally expensive to train/adapt separate controllers for individual designs when searching over a large enough design space. However, as we'll clarify in the paper, we're doing this already to some extent by providing the design parameters of a specific sampled instance as input to the controller, which can then learn to adapt its policy to that specific instance based on this input.\n\nRE: Distribution of applied torques: The reviewer is correct that applied torques are evenly distributed across all joints. This is likely because there is a squared penalty for applying torques at every joint. While this is a desirable property for real robots---reducing the stress on any particular part---it would be interesting to see what actuation would develop under other penalties, such as an L1 penalty.\n\nRE: Updates to mixture components: Our algorithm actually maintains a high-entropy distribution over the components till about a third of the way into training, before beginning to commit to a specific design (i.e., a single component, and eventually low variance within that component). Looking at the marginal variance of each parameter, we find that it also remains high early on in training---in fact, for some parameters like foot length and radius, it actually increases before beginning to converge. The updated paper will provide visualizations of the evolution of these distributions through training.",
"We thank the reviewer for their encouraging comments, and respond to specific points below:\n\nRE: Comparison to other methods: Note that our work seeks to enable automatic data-driven discovery of jointly optimal physical models and control policies. As part of this, we evaluate our method when it is initialized completely randomly---rather than with a \"good\" initial expert-guided guess. Thus, our experiments demonstrate the ability of our method to explore the entire design space, and potentially arrive at creative solutions far from expert intuition. As far as we know, all existing methods, including CMA-ES, require that the user decide on at least an initial parameterization, and conduct essentially a local search around a specific design. Therefore, these methods are not directly comparable. (Indeed, in our experiments we've found that initializing with a hand-crafted model---like the standard walker--- as the only component in our GMM allows our optimization to proceed quickly and improve that design. But this is not our goal.)\n\nRE: Alternating Optimization / just PPO over more parameters: Perhaps the biggest challenge we addressed in this paper relates to the design of an optimization strategy that is able to successfully search the joint design+control space, to arrive at good solutions while being computationally efficient. A key part of this is the alternating iterative procedure, which significantly improves computational efficiency and accelerates optimization.\n\nMoreover, note that we separate policy and design parameters in order to also allow a richer and more powerful parameterization of the network's belief distribution over good designs. We found that using a mixture model was key in allowing the optimization procedure to escape poor local optima, since in early iterations when the controller wasn't sufficiently sophisticated, it allowed the method to maintain a multi-modal distribution over a diverse set of possible \"good\" designs.\n\nWe will expand on this in the updated version of the paper.\n\nRE: Relevant work: We sincerely thank the reviewer for these very relevant citations, and will discuss them in the updated version.\n\nThank you for the review. We will address these points in the updated version."
] | [
9,
4,
5,
-1,
-1,
-1
] | [
5,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_SyfiiMZA-",
"iclr_2018_SyfiiMZA-",
"iclr_2018_SyfiiMZA-",
"rJweW2Sbf",
"SksyD3Dgz",
"ByrfSMcgz"
] |
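The reviews and author responses above describe the optimization scheme only in prose: a distribution over design parameters is refined with parameter-exploring policy gradients while a single shared controller, which receives the sampled design as part of its input, is trained in alternation. The sketch below is a minimal, self-contained illustration of that alternating loop under heavy simplifications — a single Gaussian instead of the paper's Gaussian mixture, a perturbation-based update instead of PPO, and a toy analytic `episode_return` standing in for the physics simulator. All names, constants, and the reward function are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, DESIGN_DIM, ACTION_DIM = 3, 2, 2
THETA_DIM = ACTION_DIM * (STATE_DIM + DESIGN_DIM)

def episode_return(theta, design):
    """Toy stand-in for a simulated rollout: rewards designs near (1.0, 0.5)
    and controllers whose actions suit the sampled design."""
    W = theta.reshape(ACTION_DIM, STATE_DIM + DESIGN_DIM)
    state = np.ones(STATE_DIM)
    action = W @ np.concatenate([state, design])   # policy sees the design too
    design_fit = -np.sum((design - np.array([1.0, 0.5])) ** 2)
    control_fit = -np.sum((action - design) ** 2)  # "gait" must match the body
    return design_fit + control_fit

# Shared controller parameters and a Gaussian design distribution N(mu, diag(sigma^2)).
theta = rng.normal(0.0, 0.1, size=THETA_DIM)
mu = np.zeros(DESIGN_DIM)
sigma = np.ones(DESIGN_DIM)

POP, NOISE = 64, 0.05
LR_THETA, LR_MU, LR_SIGMA = 0.02, 0.05, 0.02

for it in range(300):
    # Policy phase: improve the shared controller over designs sampled from the
    # current distribution (antithetic perturbation gradient estimate).
    grad_theta = np.zeros_like(theta)
    for _ in range(POP):
        d = rng.normal(mu, sigma)
        eps = rng.normal(0.0, NOISE, size=theta.shape)
        r_plus = episode_return(theta + eps, d)
        r_minus = episode_return(theta - eps, d)
        grad_theta += (r_plus - r_minus) * eps / (2.0 * NOISE ** 2)
    theta += LR_THETA * grad_theta / POP

    # Design phase: parameter-exploring policy gradient on (mu, sigma) using the
    # Gaussian score function and a mean-return baseline.
    designs = rng.normal(mu, sigma, size=(POP, DESIGN_DIM))
    returns = np.array([episode_return(theta, d) for d in designs])
    adv = returns - returns.mean()
    grad_mu = ((designs - mu) / sigma ** 2 * adv[:, None]).mean(axis=0)
    grad_sigma = ((((designs - mu) ** 2 - sigma ** 2) / sigma ** 3) * adv[:, None]).mean(axis=0)
    mu += LR_MU * grad_mu
    sigma = np.maximum(1e-3, sigma + LR_SIGMA * grad_sigma)

print("design distribution after training: mu =", mu, "sigma =", sigma)
```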
iclr_2018_SJZ2Mf-0- | Adaptive Memory Networks | Real-world Question Answering (QA) tasks consist of thousands of words that often represent many facts and entities. Existing models based on LSTMs require a large number of parameters to support external memory and do not generalize well for long sequence inputs. Memory networks attempt to address these limitations by storing information to an external memory module but must examine all inputs in the memory. Hence, for longer sequence inputs the intermediate memory components proportionally scale in size resulting in poor inference times and high computation costs.
In this paper, we present Adaptive Memory Networks (AMN) that process input question pairs to dynamically construct a network architecture optimized for lower inference times. During inference, AMN parses input text into entities within different memory slots. However, distinct from previous approaches, AMN is a dynamic network architecture that creates variable numbers of memory banks weighted by question relevance. Thus, the decoder can select a variable number of memory banks to construct an answer using fewer banks, creating a runtime trade-off between accuracy and speed.
AMN is enabled by, first, a novel bank controller that makes discrete decisions with high accuracy and, second, the capabilities of a dynamic framework (such as PyTorch) that allow for dynamic network sizing and efficient variable mini-batching. In our results, we demonstrate that our model learns to construct a varying number of memory banks based on task complexity and achieves faster inference times for standard and modified bAbI tasks. We achieve state-of-the-art accuracy on these tasks while examining on average 48% fewer entities during inference. | workshop-papers | This paper presents an interesting model which at the time of submission was still described quite confusingly to the reviewers.
A lot of improvements have been made, for which I applaud the authors.
However, at this point, the original 20 bAbI tasks are not quite that exciting, and several other models are able to fully solve them as well.
I would encourage the authors to tackle harder datasets that require reasoning, or multitask settings that expand beyond bAbI.
| train | [
"SJIPkC2Nf",
"ByQPccw4G",
"HJ_BtypgM",
"rJBY85UEM",
"rJGuMGYef",
"By5ZMrqxG",
"S1Lg0TTfM",
"B1amYyGXG",
"S18rEFImM",
"SJzUv1GXG",
"Skkp0DGgM",
"SkYhYmGlz"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Thanks for taking time to revisit the paper. We are working on additional experiments and will add these results in the coming revision.",
"Dear reviewer, \n\nThe revised paper is available. We can see 03 Nov 2017 (modified: 05 Jan 2018) as the latest revision. Furthermore, if you click on revisions, you can see the diff between the latest draft and the submitted version by clicking on 'Compare Revisions' on the top right to see the changes we have made. Please let us know if you have additional concerns.\n\nUpdate 01/17: Thanks for your feedback and taking time to revisit the paper. We will work towards improving the paper.",
"This paper offers a very promising approach to the processing of the type of sequences we find in dialogues, somewhat in between RNNs which have problem modeling memory, and memory networks whose explicit modeling of the memory is too rigid.\n\nTo achieve that, the starting point seems to be a strength GRU that has the ability to dynamically add memory banks to the original dialogue and question sentence representations, thanks to the use of imperative DNN programming. The use of the reparametrization trick to enable global differentiability is reminiscent of an ICLR'17 paper \"Learning graphical state transitions\". Compared to the latter, the current paper seems to offer a more tractable architecture and optimization problem that does not require strong supervision and should be much faster to train.\n\nUnfortunately, this is the best understanding I got from this paper, as it seems to be in such a preliminary stage that the exact operations of the SGRU are not parsable. Maybe the authors have been taken off guard by the new review process where one can no longer improve the manuscript during this 2017 review (something that had enabled a few paper to pass the 2016 review).\n\nAfter a nice introduction, everything seems to fall apart in section 4, as if the authors did not have time to finish their write-up. \n- N is both the number of sentences and number of word per sentence, which does not make sense.\n- i iterates over both the sentences and the words. \n\nThe critical SGRU algorithm is impossible to parse\n- The hidden vector sigma, which is usually noted h in the GRU notation, is not even defined\n- The critical reset gate operation in Eq.(6) is not even explained, and modified in a way I do not understand compared to standard GRU.\n- What is t? From algorithm 1 in Appendix A, it seems to correspond to looping over both sentences and words.\n- The most novel and critical operation of this SGRU, to process the entities of the memory bank, is not even explained. All we get at the end of section 4.2 is \" After these steps are finished, all entities are passed through the strength modified GRU (4.1) to recompute question relevance.\"\n\nThe algorithm in Appendix A does not help much. With PyTorch being so readable, I wish some source code had been made available.\n\nExperiments reporting also contains unacceptable omissions and errors:\n- The definition of 'failed task', essential for understanding, is not stated (more than 5% error)\n- Reported numbers of failed tasks are erroneous: it should be 1 for DMN+ and 3 for MemN2N.\n\nThe reviewers corrections, while significant, do not seem enough to clarify the core of the paper.\n\nPage 3: dynanet -> dynet",
"I read the response from the authors. I was expecting to see the mean and variance of the 10 runs. Other responses are convincing. I still stand by my decision.\n\nIf the paper gets accepted, please do report the mean and variance of your experiments.",
"Summary: \n\nThis paper proposes a dynamic memory augmented neural network for question answering. The proposed model iteratively creates a shorter list of relevant entities such that the decoder can look at only a smaller set of entities to answer the given question. Authors show results in bAbi dataset.\n\nMy comments:\n\n1. While the proposed model is very interesting, I disagree with the claim that AMN has lower inference times. The memory creation happens only after reading the question and hence the entire process can be considered as part of inference. So it is not clear if there is a huge reduction in the inference time when compared to other models that the authors compare. However, the proposed model looks like a nice piece of interpretable reasoning module. In that sense, it is not any better than EntNet based on the error rate since EntNet is doing better than AMN in 15 out of 20 tasks. So it is not very clear what is the advantage of AMN over EntNet or other MANN architectures.\n\n2. Can you explain equation 9 in detail? What is the input to the softmax function? What is the output size of the softmax? I assume q produces a scalar output. But what is the input size to the q function?\n\n3. In the experiment, when you say “best of 10 runs”, is it based on a separate validation set? Please report the mean and variance of the 10 runs. It is sad that people just report best of multiple runs in the bAbi tasks and not report the variance in the performance. I would like to see the mean and variance in the performance.\n\n4. What happens when number of entities is large? Can you comment about how this model will be useful in situations other than reading comprehension style QA? \n\n5. Are the authors willing to release the code for reproducing the results?\n\nMinor comments:\n\n1. Page 2, second line: “Networks(AMN)” should be “Networks (AMN).\n2. In page 3, first line: “Hazy et al. (2006)” should be “(Hazy et al. 2006)”.\n3. In page 3, second para, first line, both references should use \\citep instead of \\citet.\n4. In page 4, fourth para, Vanhoucke et al should also be inside \\citep.\n5. In page 4, notations paragraph: “a question is a sequence of N_q words” - “of” is missing.\n6. In page 5, first paragraph is not clear.\n7. In page 6, point 4, 7th line: “nodes in the its path” should be “nodes in its path”.\n8. In page 9, section 5.3, multiple questions, 2nd line: “We extend the our model” should be “We extend our model”.\n",
"The authors propose a model for QA that given a question and a story adaptively determines the number of entity groups (banks). The paper is rather hard to follow as many task specific terms are not explained. For instance, it would benefit the paper if the authors introduced the definitions of a bank and a story. This will help the reader have a more comprehensive understanding of their framework.\n\nThe paper capitalized on the argument of faster inference and no wall-time for inference is shown. The authors only report the number of used banks. What are the runtime gains compared to Entnet? \nThis was the core motivation behind this work and the authors fail to discuss this completely.",
"Thank you for your review. We have updated Section 3 and 4 to improve the readability of the paper. We also added the bank and entity definitions at the beginning of Section 3. Furthermore, we have uploaded a revision that provides the wall clock savings in Appendix A.3 for three tasks. As our results show, the wall clock times are directly proportional to the number of entities under consideration during inference. Adaptive Memory Network (AMN) architecture reduces inferences times by learning to attend to fewer entities during inference. Please let us know if you have revised comments based on the new draft.\n",
"We are sorry about the difficulty in understanding our paper. We have posted a revision that fixes these concerns. Additionally, we have fixed and clarified the notation as necessary. \n\nThe strength GRU measures the relevance of each word from the question. The equations in the paper now accompany additional, helpful text and the update is performed at a sentence level. We also fixed the algorithm in the appendix. The relevance score coupled with the memory bank design allow AMN to look at only relevant entities during inference time. \n\nWe also fixed the tasks error rates and passing tasks definition. We have improved the readability of the paper. However, we plan to provide documented source code with the final version of the paper. We appreciate your comments in improving the paper. Please let us know if you have additional comments about the paper.\n",
"We would like to thank all reviewers for all the comments and feedback towards improving the paper. We have fixed the typos, terms, and explanations in the paper and made it more accessible. We have also added inference times for representative tasks that are consistent with the savings in terms of the number of entities accessed during inference in Appendix A.2.\n\nAdaptive Memory Networks (AMN) presents a dynamic network design where entities from the input stories are stored in memory banks. Starting from a single bank, as the number of input entities increases, the network learns to create new banks as the entropy in a single bank becomes too high. Over a period of time, the network represents a hierarchical structure where entities are stored in different banks distanced by the question. During inference, AMN can answer the question with high accuracy for most bAbI tasks by just looking at a single bank.\n\nAMN presents a new design paradigm in memory networks. Unlike NTMs, where the network learns where to read/write to fine-grained address information, AMN only learns to write input entities to coarse-grained banks and entities reside within the bank. As a result, AMN is easier to train (e.g. does not require curriculum learning like NTMs) and does not require a separate sparsification mechanism like approximate nearest neighbors for inference efficiency.\n\nAMN is timely. It has been made possible with the recent progress in dynamic networks which allows input dependent network creation and efficiency in variable sized batching as well as recent tricks in deep networks towards learning discrete decision making with high accuracy.\n\nApart from saving inference times, AMN can learn to reason which specific entities contribute towards the final answer improving interpretability. ",
"We thank the reviewer for providing us with a detailed feedback.\n\n1) Since AMN (our model) reduces the number of entities under test, the inference times is reduced. We have updated the draft with the inference time information in Appendix. This can be useful when questions (or hints) are available during the QA process (we describe the Amazon example in the Introduction). Your concern about memory creation on a per question basis in a memory network is a valid one. Therefore, we extend AMN to multiple questions as shown in the evaluation. Hence, given a list of questions, AMN can learn to construct a network architecture such that these questions can be answered quickly. Here, network construction costs are amortized for mutiple questions.\n\n2) In equation 9, depending on what ΠCtrl is used for, Q is a polymorphic function and will take on a different operation and ∗ will be a different input. Examples of such are given in the paper in the respective sections (4.2.2.1, 4.2.2.2) with the required details.\n\n3) The result is on the validation set. We share the frustration. However, this is how a majority of the past work is reported. We plan on providing variance and mean for Entnet, GGT-NN and our work in the final version of the paper.\n\n4) When the number of entities is large, inference slows down. It also reduces the accuracy in some cases since the final operation such as softmax is performed over a large number of entities and it can be difficult to train. Our model can be applied to other QA tasks such as VQA. Here, each entity can additionally include CNN feature output and inference costs can be reduced.\n\n5) Yes, we plan to release the code with the final version of the paper.\n\nWe have fixed all the minor comments as you mention in your review. We really appreciate your help in improving the paper.",
"Thanks for reading our paper. We agree that the abstract does not describe memory bank/slot clearly. We plan to clarify the this in the next revision. A bank in the paper does mean a series of similar entities.\n\nAn entity is a 3-tuple of word ID, hidden state, and a question relevance strength. A memory slot can hold this entity. A bank consists of multiple entities. The network learns to store various entities as the story is ingested into a bank. As the story is read in, the network learns to create newer banks and copy the entities. During inference, only a single bank or a few banks are used to answer the question, saving inference times.\n",
"The authors may wish to consult the definition of \"bank\" which is used extensively throughout the paper (as \"memory bank\") without a definition. A bank is supposed to be \"series of similar things\". It seems they use \"memory bank\" for \"memory slot\" making the text somewhat confusing."
] | [
-1,
-1,
5,
-1,
7,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rJBY85UEM",
"HJ_BtypgM",
"iclr_2018_SJZ2Mf-0-",
"SJzUv1GXG",
"iclr_2018_SJZ2Mf-0-",
"iclr_2018_SJZ2Mf-0-",
"By5ZMrqxG",
"HJ_BtypgM",
"iclr_2018_SJZ2Mf-0-",
"rJGuMGYef",
"SkYhYmGlz",
"iclr_2018_SJZ2Mf-0-"
] |
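The responses above describe AMN's bookkeeping only verbally: each entity is a (word ID, hidden state, question-relevance strength) tuple stored in a memory bank, new banks are created as a bank becomes overloaded, and inference examines only the most question-relevant bank. The sketch below illustrates that data flow; the hand-coded entropy threshold and toy relevance score merely stand in for the learned bank controller and strength-modified GRU, and every class, constant, and name here is an illustrative assumption rather than the authors' PyTorch implementation.

```python
from dataclasses import dataclass, field
from math import log
from typing import List

@dataclass
class Entity:
    word_id: int
    hidden: List[float]          # placeholder for the entity's hidden state
    strength: float              # question-relevance strength in [0, 1]

@dataclass
class Bank:
    entities: List[Entity] = field(default_factory=list)

    def entropy(self) -> float:
        """Shannon entropy of the normalized relevance strengths in this bank."""
        total = sum(e.strength for e in self.entities) or 1.0
        probs = [e.strength / total for e in self.entities if e.strength > 0]
        return -sum(p * log(p) for p in probs)

class AdaptiveMemory:
    def __init__(self, entropy_threshold: float = 1.5):
        self.banks: List[Bank] = [Bank()]
        self.entropy_threshold = entropy_threshold

    def write(self, entity: Entity) -> None:
        current = self.banks[-1]
        current.entities.append(entity)
        # Stand-in for the learned bank controller: spawn a new bank and copy
        # only the most question-relevant entities when entropy grows too high.
        if current.entropy() > self.entropy_threshold:
            keep = sorted(current.entities, key=lambda e: -e.strength)[:3]
            self.banks.append(Bank(entities=list(keep)))

    def infer(self) -> List[int]:
        # Inference inspects only the newest (most question-relevant) bank.
        last = self.banks[-1]
        return [e.word_id for e in sorted(last.entities, key=lambda e: -e.strength)]

# Toy usage: relevance here is just "does the word appear in the question?".
question = {3, 7}
memory = AdaptiveMemory()
for word_id in [1, 2, 3, 4, 5, 6, 7, 8, 3, 7]:
    memory.write(Entity(word_id, hidden=[0.0], strength=1.0 if word_id in question else 0.1))
print(len(memory.banks), "banks;", memory.infer())
```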
iclr_2018_SJyfrl-0b | Fast Node Embeddings: Learning Ego-Centric Representations | Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. Several of these methods are based on the SkipGram algorithm, and they usually process a large number of multi-hop neighbors in order to produce the context from which node representations are learned. In this paper, we propose an effective and also efficient method for generating node embeddings in graphs that employs a restricted number of permutations over the immediate neighborhood of a node as context to generate its representation, thus ego-centric representations. We present a thorough evaluation showing that our method outperforms state-of-the-art methods in six different datasets related to the problems of link prediction and node classification, being one to three orders of magnitude faster than baselines when generating node embeddings for very large graphs. | workshop-papers | The authors addressed the reviewers concerns but the scores remain somewhat low.
The method is not super novel, but it is an incremental improvement over existing approaches. | train | [
"HkuS2uuef",
"ByVfm9uxz",
"Sy5ZqP9gM",
"B1qFOGOzf",
"HyWyDMdzG",
"Sk2-OTwGM",
"B1_ipj-GG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper demonstrates good experiment results on several tasks. There are some pros and comes as below:\n\nPros\nThe proposed model provides a new view to generate training examples for random-walk-based embedding models.\nExperiments are conducted on several datasets (6 datasets for link prediction and 3 datsets for classification).\nVarious experiments are provided to support the analysis of time complexity of the proposed model.\nAdditional experiments are provided in appendix, though the details of experimental setups are not provided.\n\nCons:\n1. The novelty is limited. The main contribution is the idea of substituting random-walk by neighbors of a node. The rest of the model can be viewed as DeepWalk which requires walk length be the same as window size.\n2. The experiment setup is not fair to the competitors. It seems that the proposed model is turned by validation, and the competitors adopt the parameters proposed in Node2Vec. A fair experiment shall require every model to turn their parameters by the validation dataset. \n3. Furthermore, since embedding training is unsupervised, in reality no validation data can be used to select the parameters. Therefore a fair experiments it to find a universal set of parameters and use them across different datasets. \n\n4. In section 4.1 (Link Prediction), is there negative sampling during training and testing? Otherwise the training and testing instances are all positive. Also, what is the ratio of positive instances over negative ones?\n5. In section 5.1 (Number of Permutations), why is “test accuracy” adopted but not “AUC” (which is the evaluation metric in section 4.1)?\n6. In section 5.1 (Number of Permutations), table 4 should contain only the results of the same task (either Link Prediction or Classification but not both).\n7. Didn't compare with the state-of-the-art node embedding models.",
"The authors propose a method for learning node representations which, like previous work (e.g. node2vec, DeepWalk), is based on the skip-gram model. However, unlike previous work, they use the concept of shared neighborhood to define context rather than applying random walks on the graph.\n\nThe paper is well-written and it is quite easy to follow along with the discussion. This work is most similar, in my opinion, to node2vec. In particular, when node2vec has its restart probability set pretty high, the random walks tend to stay within the local neighborhood (near the starting node). The main difference is in the sentence construction strategy. Whereas node2vec may sample walks that have context windows containing the same node, the proposed method does not as it uses a random permutation of a node's neighbors. This is the main difference between the proposed method and node2vec/DeepWalk.\n\nPros:\n\nOutperforms node2vec and DeepWalk on 5 of the 6 tested datasets and achieves comparable results on the last one.\nProposes a simple yet effective way to sample walks from large graphs.\n\n\nCons:\n\nThe description on the experimental setup seems to lack some important details. See more detailed comments in the paragraph below. While LINE or SDNE, which the authors cite, may not run on some of the larger datasets they can be tested on the smaller datasets. It would be helpful if the authors tested against these methods as well. \n\n For instance, on page 5 footnote 4 the authors state that DeepWalk and node2vec are tested under similar conditions but do not elaborate. In NBDE, when k=5 a node u's neighbors are randomly permuted and these are divided into subsets of five and concatenated with u to form sentences. Random walks in node2vec and DeepWalk can be longer, instead they use a sliding context window. For instance a sentence of length 10 with context window 5 gives 6 contexts. Do the authors account for this to ensure that skip-gram for all compared methods are tested using the same amount of information. Also, what exactly does the speedup in time mean. The discussion on this needs to be expounded.",
"The paper includes the terms first-order proximity (\"the concept that connected nodes in a graph should have similar properties\") and second-order proximity (\"the concept that nodes with similar neighborhoods should have common characteristics\"). These are called homophily in social network analysis. It is also known as assortativity in network science literature. The paper states on Page 4: \"A trade-off between first and second order proximity can be achieved by changing the parameter k, which simultaneously controls both the sizes of sentences generated and the size of the wind used in the SkipGram algorithm.\" It is not readily clear why this statement should hold. Also the paper does not include a discussion on how the amount of homophily in the graph affects the results. There are various ways of measuring the level of homophily in a graph. There is simple local consistency, which is % of edges connecting nodes that have the same characteristics at each endpoint. Neville & Jensen's JMLR 2007 paper describes relational auto-correlation, which is Pearson contingency coefficient on the characteristics of endpoints of edges. Park & Barabasi's PNAS 2007 paper describes dyadicity and heterophilicity, which measures connections of nodes with the same characteristics compared to a random model and the connections of nodes with different characteristics compared to a random model. \n\nk (\"which simultaneously controls both the sizes of sentences generated and the size of the wind used in the SkipGram algorithm\") is a free-parameter in the proposed algorithm. The paper needs an in-depth discussion of the role of k in the results. Currently, no discussion is provided on k except that it was set to 5 for the experiments. From a network science perspective, it makes sense to have k vary per node.\n\nIt is also not clear why d = 128 was chosen as the size of the embedding.\n\nFrom the description of the experimental setup for link prediction, it is not clear if a stratified sample of the entries of the adjacency matrix (i.e., both 0 and 1 entries) where selected.\n\nFor the node classification experiments, information on class distribution and homophily levels would be helpful. \n\nIn Section 5.1, the paper states: \"For highly connected graphs, larger numbers of permutations should be chosen (n in [10, 1000]) to better represent distributions, while for sparser graphs, smaller values can be used (n in [1, 10]).\" How high is highly connected graphs? How spare is a sparser graph? In general, the paper lacks an in-depth analysis of when the approach works and when it does not. I recommend running experiments on synthetic graphs (such as Barabasi-Albert, Watts-Strogatz, Forest Fire, Kronecker, and/or BTER graphs), systematically changing various characteristics of the graph, and reporting the results.\n\nThe faster runtime is interesting but not surprising given the ego-centric nature of the approach.\n\n",
"We added a comparison with SDNE in Appendix D of our paper, which was suggested by AnonReviewer1 and AnonReviewer3. We ran it for all datasets on both Link Prediction and Node Classification tasks, except DBLP because of its size, running SDNE with alpha=0.2 and beta=10, since they seem to be the best parameters as per their analysis. We chose an architecture of size [10300-1000-100] for all our experiments, which is their architecture chosen for the BlogCatlog dataset, the only one we also use. Like in their paper, we ran it as a semi-supervised algorithm, tuning nu in a validation set, choosing nu from {0.1, 0.01, 0.001}. They don't mention how they chose this value, so we selected it from {0.1, 0.01, 0.001} to test an ample set of values. In Link Prediction, both our algorithms perform similarly, with ours having better results in three datasets and theirs in two, but our algorithm usually has more than two orders of magnitude faster training when SDNE is on a GPU, and is three to four orders of magnitude faster when SDNE is trained in a CPU. On Node Classification we win in two of the three datasets, with a gain of 46% on blog. In this task NBNE has a 29~63 times faster training than SDNE on a GPU and 495~866 times faster than SDNE on a CPU.",
"Thank you for your review and suggestions.\nOur answers to your considerations are:\n\n\n1. We believe the two algorithms have different motivations, with NBNE following a Breath First Search (BFS) strategy, while DeepWalk follows a strategy similar to Depth First Search (DFS). Although they may be similar, these strategies result in very different algorithms and produce embeddings with different properties.\n\n2. Node2Vec in the original work by Grover and Leskovec is also semi-supervised and is also tuned on the validation set to choose the best values for both parameters p and q, so comparisons to this algorithm are already fair in this sense.\nWe added a new paragraph on page 5 to make this description clearer: \"On both these tasks [Link Prediction and Node Classification], DeepWalk and Node2Vec were used as baselines, having been trained and tested under the same conditions as NBNE and using the parameters as proposed in (Grover and Leskovec, 2016). More specifically, we trained them with the same training, validation and test sets as NBNE and used a window size of 10 (k), walk length (l) of 80 and 10 runs per node (r). For Node2Vec, which is a semi-supervised algorithm, we tuned p and q on the validation set, doing a grid search on values p,q in {0.25; 0.5; 1; 2; 4}.\"\n\n3. Several state-of-the-art embedding training algorithms are actually semi-supervised. While Deep Walk and LINE are unsupervised, Node2Vec chooses p and q based on results in a validation set. SDNE also selects alpha, beta and nu by evaluating them on a validation set.\n\n4. When training and testing the logistic regression we use both positive and negative samples of edges, having both groups with equal sizes (notice that during training parts of the removed edges could have been randomly included as negative samples). We used logistic regressions only to benchmark the power of the representations, not entering in the harder topic of working with unbalanced datasets (much more non edges than edges in a graph).\nWe changed the text to make this clearer.\n\n5. We hadn't noticed this and have changed the graphs to use AUC scores. Accuracy and AUC have similar trends, leading to the same conclusion in Section 5.1.\n\n6. We also changed this table to only show Link Predictions results. Again, results lead to the same conclusion as before.\n\n7. We added results for SDNE in Appendix D of our paper. We ran it for all datasets on both Link Prediction and Node Classification, except DBLP because of its size. We ran SDNE for alpha=0.2 and beta=10, since they seem to be the best parameters as per their analysis. We chose an architecture of size [10300-1000-100] for all our experiments, which is their architecture chosen for the BlogCatlog dataset, the only one we also use. Like in their paper, we run it as a semi-supervised algorithm, tuning nu in a validation set, choosing nu from {0.1, 0.01, 0.001}. They don't mention how they chose this value, so we selected it from {0.1, 0.01, 0.001} to test an ample set of values. In Link Prediction, both our algorithms perform similarly, with ours having better results in three datasets and theirs in two, but our algorithm has more than two orders of magnitude faster training when SDNE is on a GPU, and is three to four orders of magnitude faster when SDNE is trained in a CPU. On Node Classification we win in two of the three datasets, with a gain of 45% on blog. 
In this task NBNE trains 29~63 times faster than SDNE on a GPU and 495~866 times faster than SDNE on a CPU.\nIf you have other suggestions of state-of-the-art algorithms to compare against, we would be willing to run experiments.\n\n\nWe would again like to thank you for your review. The clarifications of Sections 4 and 4.1 make our experiments clearer to understand.\nThe new comparison with the state-of-the-art algorithm SDNE also makes it easier to compare both algorithms directly and shows that, although both algorithms are competitive, NBNE usually produces better results in terms of AUC/Macro F1 scores. At the same time, it shows that NBNE is much more computationally efficient, running in a fraction of the time taken by SDNE on both GPU and CPU.\n\nFurthermore, we also added two new sections in Appendix Sections B and C, per request of AnonReviewer2, with an in-depth analysis of our algorithm considering: (i) a homophily analysis of both the datasets themselves and the learned representations and (ii) a series of tests analyzing results for different values of n and k on two synthetic graphs with various sizes (|V|) and connectedness (b). We believe these experiments give further support to our choice of a semi-supervised approach to choose n and give a more solid understanding of how parameters n and k affect our resulting representations.\n\nIf you have any other suggestions/questions, we would be pleased to answer them before the deadline of January 5th.",
"Thank you for your review and suggestions.\nOur answers to your considerations are:\n\n\"The description on the experimental setup seems to lack some important details. See more detailed comments in the paragraph below. While LINE or SDNE, which the authors cite, may not run on some of the larger datasets they can be tested on the smaller datasets. It would be helpful if the authors tested against these methods as well.\"\n\n--> We added results for SDNE in Appendix D of our paper. We ran it for all datasets on both Link Prediction and Node Classification, except DBLP because of its size. We ran SDNE for alpha=0.2 and beta=10, since they seem to be the best parameters as per their analysis. We chose an architecture of size [10300-1000-100] for all our experiments, which is their architecture chosen for the BlogCatlog dataset, the only one we also use. Like in their paper, we run it as a semi-supervised algorithm, tuning nu in a validation set, choosing nu from {0.1, 0.01, 0.001}. They don't mention how they chose this value, so we selected it from {0.1, 0.01, 0.001} to test an ample set of values. In Link Prediction, both our algorithms perform similarly, with ours having better results in three datasets and theirs in two, but our algorithm usually has more than two orders of magnitude faster training when SDNE is on a GPU, and is three to four orders of magnitude faster when SDNE is trained in a CPU. On Node Classification we win in two of the three datasets, with a gain of 46% on blog. In this task NBNE has a 29~63 times faster training than SDNE on a GPU and 495~866 times faster than SDNE on a CPU.\n\n\"For instance, on page 5 footnote 4 the authors state that DeepWalk and node2vec are tested under similar conditions but do not elaborate.\"\n\n-->We added a new paragraph on page 5 to make this description clearer: \"On both these tasks [Link Prediction and Node Classification], DeepWalk and Node2Vec were used as baselines, having been trained and tested under the same conditions as NBNE and using the parameters as proposed in (Grover and Leskovec, 2016). More specifically, we trained them with the same training, validation and test sets as NBNE and used a window size of 10 (k), walk length (l) of 80 and 10 runs per node (r). For Node2Vec, which is a semi-supervised algorithm, we tuned p and q on the validation set, doing a grid search on values p,q in {0.25; 0.5; 1; 2; 4}.\"\n\n\"In NBNE, when k=5 a node u's neighbors are randomly permuted and these are divided into subsets of five and concatenated with u to form sentences. Random walks in node2vec and DeepWalk can be longer, instead they use a sliding context window. For instance a sentence of length 10 with context window 5 gives 6 contexts. Do the authors account for this to ensure that skip-gram for all compared methods are tested using the same amount of information. Also, what exactly does the speedup in time mean. The discussion on this needs to be expounded.\"\n\n--> We did not account for these changes, nor did we add an extra section with these experiments to the paper, because there are three different parameters in DeepWalk and Node2Vec which control the amount of information used: the number of runs per node (r); length of walks (l) and; window size (k). To find a value for these three variables together which at the same time compared in amount of computation to NBNE and gave good results would require a deep analysis on them, since they interact with one another in non-linear ways. 
At the same time, to run a semi-supervised version of both DeepWalk and Node2Vec which chose these parameters by evaluating them in a validation set would be too computationally expensive, specially since Node2vec already takes more than 800 minutes to train for a single of these values on Blog.\n\nWe would again like to thank you for your review. The new comparison with SDNE makes it easier to compare both algorithms directly and shows that, although both algorithms are competitive, NBNE usually produces better results in terms of AUC/Macro F1 scores. At the same time, it shows that NBNE is much more computationally efficient, running in a fraction of the time taken by SDNE on both GPU or CPU.\n\nFurthermore, we also added two new sections in Appendix Sections B and C, per request of AnonReviewer2, with an in depth analysis of our algorithm considering: (i) an homophily analysis of both the datasets themselves and the learned representations and (ii) a series of tests analyzing results for different values of n and k on two synthetic graphs with various sizes (|V|) and connectedness (b). We believe these experiments give further support to our choice of a semi-supervised approach to choose n and give a more solid understanding of how parameters n and k affect our resulting representations.\n\nIf you have any other suggestions/questions, we would be pleased to answer them before the deadline of January 5th.",
"Thank you for the detailed review. Bellow, we address each point in your review.\nFurthermore, we added two new sections in the Appendix with an in depth analysis of our algorithm considering: (i) an homophily analysis of both the datasets themselves and the learned representations and (ii) a series of tests analyzing results for different values of n and k on two synthetic graphs with various sizes (|V|) and connectedness (b).\n\nOur answers to each of your considerations are:\n\n1. First and second order proximity and k\n\n--> We added a new section in Appendix B.3 which includes an intuitive and a qualitative analysis of why this property holds. We do this analysis using three synthetic graphs: Barabasi-Albert, Erdos-Renyi and Watts-Strogatz.\n\n2. Homophily in the results\n\n--> In Section B.1 of the Appendix we added a quantitative analysis of the homophily inherent to the datasets themselves. This analysis shows that we work with a diverse set of graphs, with both positive and negative degree and label assortativity. We also added an in depth qualitative analysis of homophily and overfitting of our learned representations in relation to n. These particular experiments led to very interesting results. In the plotted graphs we can clearly see the overfitting in our representations as the number n of permutations grows. We believe these experiments also support our choice for a semi-supervised approach to choose n.\n\n3. k in the results\n\n--> We added a new section in Appendix C.2, in which we analyze results for different values of k. We used two different synthetic graphs in this analysis: Barabasi-Albert and Watts-Strogatz, creating them with different sizes and sparseness. We concluded that, although larger choices for this parameter (k={25,125}) give better results in several graphs, they are also more prone to overfitting, with k=5 being more robust.\n\n4. d = 128\n\n--> We chose this value because it was used by our baselines in their works, to make our comparisons fair.\n\n5. Link prediction: Stratified sample\n\n--> When training and testing the logistic regression we use both positive and negative samples of edges, having both groups with equal sizes (notice that during training parts of the removed edges could have been randomly included as negative samples). We used logistic regressions only to benchmark the power of the representations, not entering in the harder topic of working with unbalanced datasets (much more non edges than edges in a graph).\nWe changed the text to make it clearer.\n\n6. Node classification: Class distribution and homophily\n\n--> We added both degree and label homophily levels in Section B.1 in the Appendix, having in our tested datasets graphs with both positive and negative correlations. We also added the distribution of classes in each dataset in Appendix A. This plotted graph also shows the diversity in our analyzed datasets, containing both long tailed class distributions, in the Wikipedia dataset, and a more balanced distribution in the PPI dataset.\n\n7. How connected/sparse?\n\n--> It is hard to state what is a highly connected or sparse graph in this sense, since these embeddings also depend on degree homophily levels, graph size and structure. But we added new experiments in Section C.1 on synthetic graphs to give more insight to answer this question. Our results show that indeed denser graphs tend to have better results with larger values of n, but how dense they should be depends on the graph structure. 
We believe this difficulty in selecting n without a deeper analysis of the graph justifies our choice for a semi-supervised algorithm, which can select the best value depending on results in a small validation set. This has the further advantage that n can be trained interactively with increasing values, not needing to be retrained for each new case, which is similar to early stopping in other machine learning settings.\n\n8. Experiments on synthetic graphs\n\n--> As stated above, we addressed this problem through an in-depth analysis of both parameters n and k on Barabasi-Albert and Watts-Strogatz graphs, and it is present in Appendix C.\n\n9. Faster runtime is interesting but not surprising\n\n--> We agree that the faster runtime is intuitive. But this faster runtime, allied with the better/similar results in all tested datasets/experiments, is a good contribution of our proposed method.\n\nWe would like to thank you again for your detailed review and suggestions, especially concerning the suggested analysis on synthetic graphs and on homophily properties. They give our motivation for a semi-supervised algorithm a stronger experimental justification, and we added it as one of our contributions in the Introduction.\nWe believe they were indeed important, giving a more solid understanding of our algorithm and increasing the paper's contributions.\n\nIf you have any other suggestions/questions, we would be pleased to answer them and maybe try to conduct more experiments before the deadline of January 5th."
] | [
5,
6,
4,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJyfrl-0b",
"iclr_2018_SJyfrl-0b",
"iclr_2018_SJyfrl-0b",
"B1_ipj-GG",
"HkuS2uuef",
"ByVfm9uxz",
"Sy5ZqP9gM"
] |
iclr_2018_S1LXVnxRb | Cross-Corpus Training with TreeLSTM for the Extraction of Biomedical Relationships from Text | A bottleneck problem in machine learning-based relationship extraction (RE) algorithms, and particularly of deep learning-based ones, is the availability of training data in the form of annotated corpora. For specific domains, such as biomedicine, the long time and high expertise required for the development of manually annotated corpora explain why most of the existing ones are relatively small (i.e., hundreds of sentences). Besides, larger corpora focusing on general or domain-specific relationships (such as citizenship or drug-drug interactions) have been developed. In this paper, we study how large annotated corpora developed for alternative tasks may improve the performance on biomedicine-related tasks, for which few annotated resources are available. We experiment with two deep learning-based models to extract relationships from biomedical texts with high performance. The first one combines locally extracted features using a Convolutional Neural Network (CNN) model, while the second exploits the syntactic structure of sentences using a Recursive Neural Network (RNN) architecture. Our experiments show that, contrary to the former, the latter benefits from a cross-corpus learning strategy to improve the performance of relationship extraction tasks. Indeed, our approach leads to the best published performances for two biomedical RE tasks, and to state-of-the-art results for two other biomedical RE tasks, for which few annotated resources are available (less than 400 manually annotated sentences). This may be particularly impactful in specialized domains in which training resources are scarce, because they would benefit from the training data of other domains for which large annotated corpora do exist. | workshop-papers | We encourage the authors to improve the aspects of their work mentioned in the reviews.
| train | [
"HydZAdHgG",
"rJFaZDqgz",
"Sk6i-Szbz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"SUMMARY.\n\nThe paper presents a cross-corpus approach for relation extraction from text.\nThe main idea is complementing small training data for relation extraction with training data with different relation types.\nThe model is also connected with multitask learning approaches where the encoder for the input is the same but the output layer is different for each task. In this work, the output/softmax layer is different for each data type, while the encoder is shared.\nThe authors tried two different sentence encoders (cnn-based and tree-lstm), and final results are calculated on the low resource dataset. \n\nExperimental results show that the tree-rnn encoder is able to capture valuable information from auxiliary data, while the cnn based does not.\n\n----------\n\nOVERALL JUDGMENT\nThe paper shows an interesting approach to data augmentation with data of different type for relation extraction.\nI would have appreciated a section where the authors explain briefly what relation extraction is maybe with an example.\nThe paper is overall clear, although the experimental section has to be improved I believe.\nFrom section 5.2 I am not able to understand the experimental setting the authors used, is it 10-fold CV? Did the authors tune the hyperparameters for each fold?\nAre the results in table 3 obtained with tree-lstm? \nWhat kind of ensembling did the authors chose for those experiments?\nThe author overstates that their model outperforms the state-of-the-art models they compare to, but that is not true for the EU-ADR dataset where in 2 out of 3 relation types the proposed model performs on par with the state-of-the-art model.\nFinally, the authors used only one auxiliary dataset at the time, it would be interesting to see whether using all the auxiliary dataset together would improve results even more.\n\nI would suggest the author also to check and revise citations (CNN's are not Collobert et al. invention, the same thing for the maximum likelihood objective) and more in general to improve the reference on relation extraction literature.",
"This is a well-written paper with sound experiments. However, the research outcome is not very surprising. \n\n- Only macro-average F-scores are reported. Please present micro-average scores as well.\n- The detailed procedure of relation extraction should be described. How do you use entity type information? (Probably, you did not use entity types.)\n- Table 3: The SotA score of EU-ATR target-disease (i.e. 84.6) should be in bold face.\n- Section 5.3: Your system scorers in Table 3 are not consistent with Table 2 scores. \n- Page 8. \"Our approach outperforms ...\" The improvement is clear only for SNPPhenA and EU-ADR durg-disease.\n\nMinor comments:\n\n- TreeLSTM --> Tree-LSTM\n- Page 7. connexion --> connection\n- Page 8. four EU-ADR subtasks --> three ...\n - I suggest to conduct transfer learning studies in the similar settings.\n",
"This paper proposes to use Cross-Corpus training for biomedical relationship extraction from text. \n\n- Many wording issues, like citation formats, grammar mistakes, missing words, \n e.g., Page 2: it as been\n \n- The description of the methods should be improved. \n For instance, why the input has only two entities? In many biomedical sentences, there are more than two entities. How can the proposed two models handle these cases? \n\n- The paper just presents to train on a larger labeled corpus and test on a task with a smaller labeled set. Why is this novel? \n Nothing is novel in the deep models (CNN and TreeLSTM). \n\n- Missing refs, like: \n A simple neural network module for relational reasoning, Arxiv 2017"
] | [
4,
5,
3
] | [
4,
4,
5
] | [
"iclr_2018_S1LXVnxRb",
"iclr_2018_S1LXVnxRb",
"iclr_2018_S1LXVnxRb"
] |
iclr_2018_rkaT3zWCZ | Building Generalizable Agents with a Realistic and Rich 3D Environment | Teaching an agent to navigate in an unseen 3D environment is a challenging task, even in the event of simulated environments. To generalize to unseen environments, an agent needs to be robust to low-level variations (e.g. color, texture, object changes), and also high-level variations (e.g. layout changes of the environment). To improve overall generalization, all types of variations in the environment have to be taken under consideration via different level of data augmentation steps. To this end, we propose House3D, a rich, extensible and efficient environment that contains 45,622 human-designed 3D scenes of visually realistic houses, ranging from single-room studios to multi-storied houses, equipped with a diverse set of fully labeled 3D objects, textures and scene layouts, based on the SUNCG dataset (Song et al., 2017). The diversity in House3D opens the door towards scene-level augmentation, while the label-rich nature of House3D enables us to inject pixel- & task-level augmentations such as domain randomization (Tobin et al., 2017) and multi-task training. Using a subset of houses in House3D, we show that reinforcement learning agents trained with an enhancement of different levels of augmentations perform much better in unseen environments than our baselines with raw RGB input by over 8% in terms of navigation success rate. House3D is publicly available at http://github.com/facebookresearch/House3D. | workshop-papers | The authors present an environment for semantic navigation that is based on an existing dataset, SUNCG. Datasets/environments are important for deep RL research, and the contribution of this paper is welcome. However, this paper does not offer enough novelty in terms of approach/method and its claims are somewhat misleading, so it would probably be a better fit to publish it at a workshop. | val | [
"BySAhfGgM",
"ByK1GkteM",
"BJlAby9gM",
"ByREAIo7z",
"SywxCIj7f",
"SyJ9TUiXM",
"r1jXpIi7z",
"SJKZ7DPxM",
"ryfUh0Xlf",
"HJWt2VcAZ",
"HJAzR1nR-",
"Hk38Cy2Cb",
"Hkv6GtMRW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
"public"
] | [
"Paper Summary: The paper proposes a simulator for the SUNCG dataset to perform rendering and collision detection. The paper also extends A3C and DDPG (reinforcement learning methods) by augmenting them with gated attention. These methods are applied for the task of navigation.\n\nPaper Strengths:\n- It is interesting that the paper shows generalization to unseen scenes unlike many other navigation methods.\n- The renderer/simulator for SUNCG is useful.\n\nPaper Weaknesses:\nThe paper has the following issues: (1) It oversells the task/framework. The proposed task/framework is not different from what others have done. (2) There is not much novelty in the paper. The SUNCG dataset already exists. Adding a renderer to that is not a big deal. There is not much novelty in the method either. The paper proposes to use gated attention, which is not novel and it does not help much according to Figures 3b and 4b. (3) Other frameworks have more functionalities than the proposed framework. For example, other frameworks have physics or object interaction while this framework is only useful for navigation. (4) The paper keeps mentioning \"Instructions\". This implies that the method/framework handles natural language, while this is not the case. This is over-selling as well.\n\nQuestions and comments:\n\n- Statements like \"On the contrary, we focus on building a flexible platform that intersects with multiple research directions in an efficient manner allowing users to customize the rules and level of complexity to their needs.\" are just over-selling. This environment is not very different from existing platforms.\n\n- What is referred to as “physics” is basically collision detection. It is again over-selling the environment. Other environments model real physics.\n\n- It is not clear what customizable mean in Table 1. I searched through the paper, but did not find any definition for that. All of the mentioned frameworks are customizable.\n\n- \"we compute the approximate shortest distance from the target room to each location in the house\" --> This assumption is somewhat unrealistic since agents in the real world do not have access to such information.\n\n- Instruction is an overloaded word for \"go to a RoomType\". The paper tries to present the tasks/framework as general tasks/framework while they are not.\n\n- In GATED-LSTM, h_t is a function of I. Why is I concatenated again with h_t?\n\n- Success rate is not enough for evaluation. The number of steps should be reported as well.\n\n- The paper should include citations to SceneNet and SceneNet RGBD.\n\n",
"Building rich 3D environments where to run simulations is a very interesting area of research. \n\nStrengths:\n1.\tThe authors propose a virtual environment of indoor scenes having a much larger scale compared to similar interactive environments and access to multiple visual modalities. They also show how the number of available scenes greatly impacts generalization in navigation based tasks. \n2.\tThe authors provide a thorough analysis on the contribution of different feature types (Mask, Depth, RGB) towards the success rate of the goal task. The improvements and generalization brought by the segmentation and depth masks give interesting insights towards building new navigation paradigms for real-world robotics. \n\nWeaknesses:\n1.\tThe authors claim that the proposed environment allows for multiple applications and interactions, however from the description in section 3, the capacities of the simulator beyond navigation are unclear.\nThe dataset proposed, Home3D, adds a number of functionalities over the SUNCG dataset. The SUNCG dataset provides a large number of 3D scanned houses. The most important contributions with respect to SUNCG are:\n- An efficient renderer: an important aspect.\n- Introducing physics: this is very interesting, unfortunately the contribution here is very small. Although I am sure the authors are planing to move beyond the current state of their implementation, the only physical constraint currently implemented is an occupancy rule and collision detection. This is not technically challenging. \nTherefore, the added novelty with respect to SUNCG is very limited.\n2.\tThe paper presents the proposed task as navigation from high level task description, but given that the instructions are fixed for a given target, there are only 5 possible instructions which are encoded as one-hot vectors. Given this setting, it is unclear the need for a gated attention mechanism. While this limited setting allows for a clear generalization analysis, it would have been good to study a setting with more complex instructions, allowing to evaluate instructions not seen during training.\n3.\tWhile the authors make a good point showing generalization towards unseen scenes, it would have been good to also show generalization towards real scenarios, demonstrating the realistic nature of House3D and the advantages of using non-RGB features.\n4.\tIt would have been good to report an analysis on the number of steps performed by the agent before reaching its goal on the success cases. It seems to me that the continuous policy would be justified in this setting. \nComments\n-\tIt is unclear to me how the reward shaping addition helps generalize to unseen houses at test time, as suggested by the authors.\n-\tI miss a reference to (https://arxiv.org/pdf/1609.05143.pdf) beyond the AI-THOR environment, given that they also approach target driven navigation using an actor-critic approach.\n\n\nThe paper proposes a new realistic indoor virtual environment, having a much larger number of scenes than similar environments. From the experiments shown, it seems that the scale increase, together with the availability of features such as Segmentation and Depth improve generalization in navigation tasks, which makes it a promising framework for future work on this direction. However, the task proposed seems too simple considering the power of this environment, and the models used to solve the task don’t seem to bring relevant novelties from previous approaches. (https://arxiv.org/pdf/1706.07230.pdf)\n\n",
"The paper introduces House3D, a virtual 3D environment consisting of in-door scenes with a diverse set of scene types, layouts and objects. This was originally adapted from the SUNCG dataset, enhanced with the addition of a physics model and an API for interacting with the environment. They then focus on a single high-level instruction following task where an agent is randomly assigned at a location in the house and is asked to navigate to a destination described by a high-level concept (“kitchen”) without colliding with objects. They propose two models with gated-attention architecture for solving this task, a gated-CNN and a gated-LSTM. Whilst the novelty of the two models is questionable (they are adaptations of existing models to the task), they are a useful addition to enable a benchmark on the task. The paper in general is well written, and the environment will be a useful addition to the community.\n\nGeneral Comments\n- In the related work section the first part talks about several existing environments. Whilst the table is useful, for the “large-scale” and “fast-speed”columns, it would be better if there were some numbers attached - e.g. are these orders of magnitude differences? Are these amenable to Bayesian optimisation?\n- I didn’t see any mention of a pre-specified validation set or pre-defined cross-validation sets. This would surely be essential for hyperparameter tuning\n- For the discrete action space state what the 12 actions are.\n- The reward function should be described in more detail (can be in appendix). How is the shortest distance calculated? As a general comment it seems that this is a very strong (and unrealistic) reward signal, particularly for generalisation.\n- There are a number of hyperparameters (αDDPG, αA3C, entropy bonus terms, learning rates etc). Some discussion of how these were chosen and the sensitivity to these parameters were helpful\n- Figures 3 and 4 are hard to compare, as they are separated by a page, and the y-axes are not shared.\n- The additional 3D scenes datasets mentioned by Ankur Handa should be cited.\n\nTypographical Issues\n- Page 1: intelligence→intelligent \n- Page 4: On average, there is→On average, there are; we write→we wrote\n- Page 7: softmax-gumbel trick→softmax-Gumbel trick; gumbel-softmax→Gumbel-softmax\n- References. The references should have capitalisation where appropriate.For example, openai→OpenAI, gumbel→Gumbel, malmo→Malmo",
"Dear Reviewer,\n \nWe thank you for your feedback. Below we address your comments and point to the changes made to accommodate them.\n \nSee general comments (the top official thread) for the novelty of the framework and the definition of the task and instructions.\n \nUnrealistic shortest path:\n1. The shortest path is computed only during training to provide the agent with intermediate reward signals. In evaluation, there is no need to compute shortest path (or other statistics involving ground truth) and thus the trained agent can be evaluated in the real world.\n2. Even if an agent needs to be trained in the real environment, we can also find surrogate shortest path, e.g., by building a map first. Note that building a map is not a laborious task, since it can be reused many times during training. Again in evaluation, these quantities are not needed.\n \nPhysics:\nCurrently we only have collision detection and in the future we will add more realistic physics engine (e.g., bullet).\n \nTo better reflect the nature of interactions used for RoomNav, we describe the interaction rules in House3D (see Environment Section). However, note that House3D is able to support more complicated engines. For the purpose of our work, we adopt the most lightweight interaction rules for fast experimental cycles. We have updated the changes regarding to physics in the new version of the paper.\n \nReward shaping\nSee general comments.\n \nAnalysis of number of steps:\nWe have added the analysis for the number of steps. See Appendix B.4. All reported success rates are computed from a fixed number of 100 steps.\n \nGated LSTM:\nWe concatenate the concept I with h_t so that the LSTM module can have direct access to the target when computing its states. In practice, this affected performance by a small amount but it made training more stable.\n \nMissing citations:\nWe have cited the relevant environments to our work. See Related Work Section.\n \nTo better reflect the motivation and contributions of our work, we have rephrased the parts of the text that seem to cause confusions with the hope that they would clarify hopefully all of your questions.\n \nWe sincerely hope that you could read the modified version.\n",
"Dear Reviewer,\n \nWe thank you for your feedback. Below we address your comments and point to the changes made to accommodate them.\n \nPlease check general comments (the very first official thread on the top) for Point 1-2.\n \n3. Transfer learning from House3D to real environments (e.g., within an actual building) is an important direction to pursue. Our results also show that using semantic segmentation, depth and RGB images as input, it is possible to have a navigation agent that can be transferred to unseen scenarios. This suggests that using semantic segmentation given by state-of-the-art vision approaches should achieve strong performance without training from raw pixels. As a future work, this could be a promising direction for transfer learning.\n \n4. We have added the analysis for the number of steps in the Appendix Section B.4. In general, we notice that continuous policies involve fewer steps, as expected.\n \n5. Stated in the general comments.\n \n6. There are a few key differences between our task and the task proposed by Al-Thor:\n \n(1)\tIn AI-Thor, the learned agent is evaluated on the same environments as training, while ours is evaluated on unseen environments. We emphasize that this is a huge difference.\n(2)\tIn AI-Thor, navigation is restricted within a single room, while our work shows navigation results in houses with multiple rooms and indoor/outdoor situations. \n(3)\tAI-Thor designs different networks for different room types, and the target is provided with an actual observation of the object. In contrast, ours use a shared network for 200 houses, and the target is provided with a word (concept). Therefore, the agent needs to associate the concept with the observations.\n \nIn summary, our navigation setting poses a major improvement over the setting in AI-Thor, and requires sophisticated actions to achieve the goal. As the first task proposed in House3D dataset, it is not simple at all.\n \nIn terms of writing, we have updated the paper and the terminology to clarify the motivation and contributions of our work. Our changes should better reflect the impact of our proposed environment and task.\n",
"Dear Reviewer,\n \nWe thank you for your valuable comments. Note that we have improved the text to better reflect our contributions (see Introduction). We sincerely hope that our proposed House3D environment and RoomNav task will be adopted as a benchmark in RL. We strongly agree that our proposed methods to tackle RoomNav are useful and necessary in order to encourage further research in House3D and concretely describe their extensions from existing RL techniques (see Introduction & Method Section)\n \nWe address your comments:\n \n1. Regarding to simulation speed, our environment is around 1.8K FPS on 120x90 image using only a single M40 GPUs. This makes our environment suitable for RL approaches that typically require both fast simulation and realistic scenarios. To our knowledge, very few environments strike such a balance (e.g., atari games can achieve 6K FPS but its observation is simple, while DeepMind lab and Malmo render more complex images but with a few hundred frames per second, slower than House3D). \n \n2. In the first part of our experiments, following standard practice in RL, we trained on 200 house environments and report the success rate on these environments accordingly. Unlike traditional supervised learning, there is no pre-specified validation set or pre-defined cross-validation sets, since every image perceived from the agent in the same house environment can be different.\n \nIn the second part of our experiments, we tested our trained agent on 50 unseen house environments, a practice known as transfer learning in RL. We show that the trained agent is able to navigate around unseen environments, much better than other baselines (e.g., random exploration).\n \n3. We test both continuous and discrete action space. For discrete action space, there are 8 movement actions and 4 rotation actions with different scales Please see Appendix A for more details.\n \n4. Sparse rewards pose difficulty in learning, as noted by several RL works in environments that are even simpler than House3D (Mirowski et al., 2016, Jaderberg et al., 2016). As a standard practice in RL, in all our experiments, to guide the agent towards finishing the task, intermediate reward is provided when the agent moves closer to the target, computed by shortest path. Note that this technique, known as reward shaping, is used during training but never during evaluation, in which an agent needs to navigate to the destination alone. Therefore, it does not affect the generalization capability, no matter how strong such a signal is. More details are provided in the Appendix.\n \n5. The hyper-parameters in our models are tuned using a very rough grid search without extensive tuning.\n \n6. We have fixed the typos, cited the existing 3D environments mentioned by other commenters and have fixed the figure layout for better comprehension. \n \nWe sincerely appreciate your valuable suggestions!\n",
"Dear Reviewers and AC\nWe note some common misunderstandings in the reviews, so we highlight and clarify our main contributions in this thread. We also improved our introduction section with all these points clarified. \n\n**Novelty and Contributions**\n In addition to novel algorithms, we believe that it is also important to acknowledge the contributions regarding to environments and implementations. These contributions also help tremendously to research community, not by solving technically challenging problems, but by pointing to the right directions to pursue. For example, ImageNet fuels the usage of DL models in computer vision research because of its scale; AlphaGo achieves super-human Go AI by combining many old ideas in RL (e.g., ConvNet, MCTS, Selfplay) and massive computational resource together. An algorithm-centric criterion of novelty would reject both of the great works, and might lock researchers in the circle of “smart algorithms”that might look mathematically interesting but never generalize well in the case of large scale and complicated situations.\n\n A main contribution of our work is that we are the first ones to explore “semantic-level” generalization in RL. Semantic-level generalization studies an agent’s ability to extract conceptual abstractions (e.g., kitchen) from observations from a diverse set of scenes, and apply the same abstraction in unseen environments. Note that this is contrary to popular definitions of generalization in the RL literature such as pixel-level perturbations (e.g., object colors) or levels of difficulties (e.g., maze configurations). \n\n To explore semantic-level generalization, we develop a suitable large-scale environment, House3D. We focus on its scale (45k human-labored houses from SUNCG), its flexibility and efficiency. House3D can render images of 120x90 resolution on a single M40 GPU with 1600fps. As a testament to its flexible design, House3D has already been used for tasks beyond navigation, such as embodied QA (Das et al., 2017a). Moreover, our platform is not restricted to SUNCG, but can also use other data sources (e.g., Matterport, SceneNet).\n\n As an attempt to study semantic-level generalization in House3D, we define a “concept-driven navigation” task, RoomNav. Here the goal is conveyed by a concept rather than a natural language instruction. We propose RoomNav in order to evaluate whether an agent can understand “semantic concepts” (e.g., room types) and can generalize to unseen scenarios. We hope RoomNav can serve as a benchmark task for semantic generalization.\n\n To tackle RoomNav, we propose to use gated-attention networks, which are shown to be effective for this task and can potentially serve as strong baselines for further benchmarking and fair comparisons.\n\n**Concept/Instructions**\nIn the submitted version of this paper, we refer“instruction”as the concept (e.g., roomtypes) that the agent needs to associate with its observations during exploration. We emphasize that in the paper, we did not suggest any connections between instruction and natural language. In fact, natural language instructions are beyond the scope of this submission. We have updated Method Section (Sec. 4) for a clearer narration to avoid possible confusion (as R2/R3’s comments suggest). \n\n**Reward Shaping**\nReward shaping, as we state in the paper, provides a supervisory signal that helps the agent to learn faster. It is not provided to the agent during test time. 
Note that the RoomNav task is difficult and a sparse reward hinders training. We actually tried several reward-shaping approaches but didn’t see any impact on generalization in experiments. So we only report the most effective reward shaping in the paper.\n\nWe have updated the abstract, introduction and related work of the paper to clarify these points. \n",
"Hi, \nI had a few questions regarding the environment:\n\n- Interaction: What agent-object interactions does House3D implement? Is there interaction aside from collision detection?\n-Physics Engine: Is there support for rigid body dynamics, and can objects react to external forces and gravity?",
"Hi Ankesh,\n\nIn the meantime, you could try out our realtime 3D environment simulation and rendering code (https://github.com/tensorflow/models/tree/master/research/cognitive_mapping_and_planning).\n\nFor our CVPR 17 paper (https://sites.google.com/view/cognitive-mapping-and-planning/), we used it for visual navigation tasks using Matterport Scans from the Stanford Building Parser Dataset (http://buildingparser.stanford.edu/index.html), but it should be easy to plug in other sources of mesh data.\n\n",
"Very neat work. We would be very interested in working with your environment. Would you be able to publish the source code / dataset for the work anonymously using a service like http://anonymous.4open.science/ ? ",
"We will provide flexible APIs as soon as possible, once the codebase of the environment is cleaned up.",
"Thank you very much for bringing these datasets to our attention. They are indeed relevant and very valuable to our work. We promise to cite all of them in the next version.\n\nActually, the design of our environment is flexible and can utilize 3D models from different sources. We have done some initial work to incorporate Matterport3D to our environment and our goal is to release a general API that can be used to incorporate a variety of 3D model sources, including SceneNet.\n",
"You might also be interested in the following work\n\n- SceneNet https://robotvault.bitbucket.io/\n- SceneNet RGB-D https://robotvault.bitbucket.io/scenenet-rgbd.html\n\nThey are photorealistic, 3D, customisable and potentially large scale. The source code is here https://bitbucket.org/dysonroboticslab/scenenetrgb-d/src. I think these papers are worth citing too. \n\nMatterport3D is a recent paper which has real 3D scenes https://niessner.github.io/Matterport/. This is also worth citing. \n\nYou could talk about the limitations/benefits of these datasets in your table. \n\n"
] | [
4,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkaT3zWCZ",
"iclr_2018_rkaT3zWCZ",
"iclr_2018_rkaT3zWCZ",
"BySAhfGgM",
"ByK1GkteM",
"BJlAby9gM",
"iclr_2018_rkaT3zWCZ",
"iclr_2018_rkaT3zWCZ",
"HJWt2VcAZ",
"iclr_2018_rkaT3zWCZ",
"HJWt2VcAZ",
"Hkv6GtMRW",
"iclr_2018_rkaT3zWCZ"
] |
iclr_2018_HyXNCZbCZ | Hierarchical Adversarially Learned Inference | We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference models are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shannon divergence between the generative and inference networks is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task. | rejected-papers | Pros:
- The paper proposes to use a hierarchical structure to address reconstruction issues with ALI model.
- Obtaining multiple latent representations that individually achieve different levels of reconstruction is interesting.
- Paper is well written and the authors made a reasonable attempt to improve the paper during the rebuttal period.
Cons:
- Reviewers agree that the approach lacks novelty as similar hierarchical approaches have been proposed before.
- The main goal of the paper, to achieve better reconstruction in comparison to ALI without changing the latter's objective, seems narrow. More analysis is needed to demonstrate that the approach outperforms other approaches that directly tackle this problem in ALI.
- The paper does not provide strong arguments as to why hierarchy works (limited to 2 levels in the empirical analysis presented in the paper).
- Semi-supervised learning as a downstream task is impressive but limited to MNIST. | train | [
"B135Vq1rz",
"Sy-zrYT4f",
"BkAnkYaNz",
"rk7QytpNz",
"SkRgK-YxM",
"r1KKXM6Nf",
"SJjWbz6VM",
"r1s1wghEf",
"HJMQW6P4M",
"SJbtlZ5gf",
"SkI8jt8EG",
"SJbY1_5xG",
"HJ8Hbw6Xf",
"rkOvxDpQG",
"rJixxPpQG",
"B1tuJwTQG"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"We thank the reviewer for his answer. \n\nWhile we agree that HALI's objective is effectively unchanged from ALI, we feel that HALI's novelty lies in illustrating how the hierarchy can be leveraged to:\n\n* Improve reconstructions in adversarially trained generative models.\n* Learn a hierarchy of latent representation with increasing levels of abstraction.\n* Perform semantic meaningful manipulation on the original image as shown in novel innovation vector transfer and unsupervised image inpainting.\n\nWe thank the reviewer for his feedback.",
"Dear authors,\n\nThank you for bringing up reference [1] by Bachman. That paper is a perfect example supporting my argument for why I see a lack of novelty in the presented paper. \nIf you look at my comments in detail, I argue that the HALI objective is effectively unchanged from ALI. In fact, it is basically already largely explained in the original ALI paper (v1 section 2.6: https://arxiv.org/pdf/1606.00704v1.pdf) in a way that is easy to implement and follow.\nTo my understanding HALI is an empirically supported version of that section 2.6 without new insights.\nAs such, it is executed well but adds little novelty.\n\nIn contrast, reference [1] as well as a spiritually related paper by Kingma et al. (Inverse autoregressive flows) tackle the challenge of inferring deep variational autoencoder-type models by changing the inference structures and the objective appropriately instead of just adding a layer. Reference [1] does this using the Matryoshka structures, while IAF use skip connections gainfully to simplify signal flow during inference.\nIt is precisely that type of work that adds novelty, since it is not a carbon copy of the procedure introduced in the original VAE paper with an extra layer, but represents a meaningful modification to the inference process in order to overcome challenge in phrasing a hierarchical model.\n\nHALI, to the best of my understanding, does not change the objective or the inference primitives in a meaningful way and as such the reference [1] is a perfect contrast to HALI exemplifying my comments regarding lack of novelty ( where novelty is defined as researching needed hierarchical versions of the model).\n\nIn addition, if the paper is aimed more at understanding joint distribution matching, I would recommend that the authors study other cases in addition to image generation to make a more comprehensive case.",
"We thank the reviewer for his feedback. \n\nA:\"It is a bit disappointing that the authors claim that the typo in the ALICE reference has been fixed, but it is actually not. The current style in the updated version is\"\nR: we apologize for the confusion. The citation was indeed fixed in the bibliography file but did not propagate to the paper. It is now fixed.\n\nA:\"see my response for your \"Answer to AnonReviewer2 review update\", in which I have more serious concerns. \"\nB: We have tried to address the reviewer's more serious concerns in our response to \"Answer to AnonReviewer2 review update\".\n",
"We thank the reviewer for his feedback. We now move to address his concerns about HALI provided more faithful reconstructions.\n\nR: \"The authors claim \"One of our contribution lies in showing that it is possible to do so (i.e., faithful reconstruction) without adding additional terms to the loss of ALI.\" It is very much raising my concerns how reliable the results are\"\nA: Please note that we are referring here to reconstructions coming from lower levels of the hierarchy. By the data-processing inequality, the information retained by the latent representation is a non-increasing function of the level in the hierarchy. The increased faithfulness of reconstructions coming from lower levels of the hierarchy is quantitatively evaluated on the CelebA validation set by measuring the number of attributes of the original image that are preserved by the reconstruction. Table 1 in the paper shows that reconstructions from z1 preserve a higher number of attributes that reconstructions from z2. Moreover, following [2], we compute the reconstruction errors of the Imagenet 128 validation set under the discriminator's feature map of reconstructions coming from z1 and z2. Figure 3, clearly shows that reconstruction error from z1 is uniformly bounded above by that from z2 and that both reconstruction errors decrease steadily during training. Figure 3 shows that, under both the discriminator's feature space and Euclidean metrics, reconstruction from z1 are closer to the original input image that reconstructions from z2.\n\nR: \" In [1], the authors show that the training objectives of ALI cannot prevent learning meaningless codes for data -- essentially white noise. \"Thus if ALI does indeed work then it must be due to reasons as yet not understood since the training objective can be low even for meaningless solutions\"\nA: We do not claim that all the codes learned by HALI for a given example are meaningful. Rathe,r we claim that latent representations learned by HALI are useful for downstream tasks. We quantitatively demonstrate this claim with an attribute classification task on CelebA and a semi-supervised learning task on MNIST. \n\n[1] S Arora, A Risteski, Y Zhang, arXiv preprint arXiv:1711.02651. Theoretical limitations of Encoder-Decoder GAN architectures.\n\n[2] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. International Conference on Machine Learning (ICML), 2016.\n",
"_________________________________________________________________________________________________________\n\nI raise my rating on the condition that the authors will also address the minor concerns in the final version, please see details below.\n_________________________________________________________________________________________________________\n\nThis paper proposes to perform Adversarially Learned Inference (ALI) in a layer-wise manner. The idea is interesting, and the authors did a good job to describe high-level idea, and demonstrate one advantage of hierarchy: providing different levels reconstructions. However, the advantage of better reconstruction could be better demonstrated. Some major concerns should be clarified before publishing:\n\n(1) How did the authors implement p(x|z) and q(z|x), or p(z_l | z_{l+1}) and q(z_{l+1} | z_l )? Please provide the details, as this is key to the reconstruction issues of ALI.\n\n(2) Could the authors provide the pseudocode procedure of the proposed algorithm? In the current form of the writing, it is not clear what the HALI procedure is, whether (1) one discriminator is used to distinguish the concatenation of (x, z_1, ..., z_L), or (2) L discriminators are used to distinguish the concatenation of (z_l, z_{l+1}) at each layer, respectively?\n\nThe above two points are important. If not correctly constructed, it might reveal potential flaws of the proposed technique.\n\nSince one of the major claims for HALI is to provide better reconstruction with higher fidelity than ALI. Could the authors provide quantitative results on MNIST and CIFAR to demonstrate this? The reconstruction issues have first been highlighted and theoretically analyzed in ALICE [*], and some remedy has been proposed to alleviate the issue. Quantitative comparison on MNIST and CIFAR are also conducted. Could the authors report numbers to compare with them (ALI and ALICE)? \n\nThe 3rd paragraph in Introduction should be adjusted to correctly clarify details of algorithms, and reflect up-to-date literature. \"One interesting feature highlighted in the original ALI work (Dumoulin et al., 2016) is that ... never explicitly trained to perform reconstruction, this can nevertheless be easily done...\". Note that ALI can only perform reconstruction when the deterministic mapping is used, while ALI itself adopted the stochastic mapping. Further, the deterministic mapping is the major difference of BiGAN from ALI. Therefore, more rigorous way to phrase is that \"the original ALI work with deterministic mappings\", or \"BiGAN\" never explicitly trained to perform reconstruction, this can nevertheless be easily done... This tiny difference between deterministic/stochastic mappings makes major difference for the quality of reconstruction, as theoretically analyzed and experimentally compared in ALICE. In ALICE, the authors confirmed further source of poor reconstructions of ALI in practice. It would be better to reflect the non-identifiability issues raised by ALICE in Introduction, rather than hiding it in Future Work as \"Although recent work designed to improve the stability of training in ALI does show some promise (Chunyuan Li, 2017), more work is needed on this front.\"\n\nAlso, please fix the typo in reference as:\n[*] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao and Lawrence Carin. ALICE: Towards understanding adversarial learning for joint distribution matching. In Advances in Neural Information Processing Systems (NIPS), 2017.\n\n\n ",
"It is a bit disappointing that the authors claim that the typo in the ALICE reference has been fixed, but it is actually not. The current style in the updated version is \n\n\"Changyou Chen Yunchen Pu Liqun Chen Ricardo Henao Chunyuan Li, Hao Liu and Lawrence Carin.\nAlice: Towards understanding adversarial learning for joint distribution matching. In Advances in\nNeural Information Processing Systems (NIPS), 2017.\"\n\nIt is still not the correct form (explicitly suggested in the my initial review).\n\nI doubt have much progress would be made before the final version. Also, see my response for your \"Answer to AnonReviewer2 review update\", in which I have more serious concerns. ",
"The authors claim \"One of our contribution lies in showing that it is possible to do so (i.e., faithful reconstruction) without adding additional terms to the loss of ALI.\" It is very much raising my concerns how reliable the results are.\n\nNote that recent papers [1,2] show the original objective of ALI is problematic to learn meaningful mapping. In [1], the authors show that the training objectives of ALI cannot prevent learning meaningless codes for data -- essentially white noise. \"Thus if ALI does indeed work then it must be due to reasons as yet not understood, since the training objective can be low even for meaningless solutions\". In [2], similar conclusions are shown both theoretically (the non-identifiable issues) and empirically (500+ runs for each algorithm on the toy dataset). The performance variance of ALI is quite large, the probability it yields good solutions is equal to the the probability it yields bad solutions.\n\nIf HALI shares the same training objective, how could the the problem be alleviated? Perhaps conditioning introduced by the hierarchy reduces entropy? This must be answered confirmedly. Again, one may cherry-pick good solutions to show in the paper, but it is not fully convincing. Multiple runs should be considered to clearly demonstrate it.\n\nAlso, I agree with Reviewer2 that the novelty of the submission is limited (the proposed model and results are not surprised). I recommended for weak acceptance just because it is a clear paper.\n\n\n[1] S Arora, A Risteski, Y Zhang, arXiv preprint arXiv:1711.02651. Theoretical limitations of Encoder-Decoder GAN architectures.\n\n[2] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao and Lawrence Carin. ALICE: Towards understanding adversarial learning for joint distribution matching. In Advances in Neural Information Processing Systems (NIPS), 2017.",
"We thank the reviewer for his answer to our clarifications.\n\nR: \"The paper was improved significantly but still lacks novelty. For context, multi-layer VAEs also were not published unmodified as follow-up papers since the objective is identical.\"\nA: We kindly to point the reviewer to [1], a published work proposing a hierarchical architecture to the VAE.\nWe take the liberty to point out that offering an adversarially trained generative model with faithful reconstruction is of significant interest to the community[2][3][4]. \nOne of our contribution lies in showing that it is possible to do so without adding additional terms to the loss of ALI.\n\nR: \"I would suggest the authors study the modified prior with marginal statistics and other means to understand not just 'that' their model performs better with the extra degree of freedom but also 'how' exactly it does it. [...], However, more statistical understanding of the distributions of the extra layers/capacity of the model would be interesting.\"\nA: We used information theoretic constructs to highlight the interplay between information compressions, data processing, and reconstruction errors as we move up the hierarchy. This interplay is formalized in proposition 1 and 2 in the paper.\n\nR: \"The only evaluation is sampling from z1 and z2 for reconstruction which shows that some structure is learned in z2 and the attribute classification task.\"\nA: We take the liberty to point out that our empirical set-up does not rely solely on sampling z1 and z2 for reconstructions and the attribute classification task. We used manifold traversal in z1 and z2 to show that the learned representations of samples encoded local information in z1 and global information z2. We exploited this structure in the vector innovation task to show how structure in z2 can be abstracted and carried down to z1 thus allowing semantically meaningful manipulation of test set images. We leveraged the hierarchy and the local/global information dichotomy in the inference network to perform unsupervised image inpainting. We have quantitatively evaluated HALI's reconstructions on the CelebA dataset using an attribute classifier thus showing the superiority of HALI's reconstruction in retaining attributes of the original image when compared to VAE and ALI. Finally, we leveraged the hierarchical inference network in a Semi-supervised learning task.\n\n[1] Philip Bachman. An Architecture for Deep, Hierarchical Generative Models. In Advances in Neural Information Processing Systems (NIPS), 2016.\n[2] Anders Boesen, Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. Proceedings of The 33rd International Conference on Machine Learning (ICML), 2016.\n[3] Mihaela Rosca, Balaji Lakshminarayanan, David Warde-Farley, Shakir Mohamed. Variational Approaches for Auto-Encoding Generative Adversarial Networks. arXiv preprint arXiv:1706.04987, 2017.\n[4] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao and Lawrence Carin. ALICE: Towards understanding adversarial learning for joint distribution matching. In Advances in Neural Information Processing Systems (NIPS), 2017.\n",
"We thank the reviewer for the prompt response. Following the reviewer's comment, we have fixed the typo in the ALICE reference. We agree that HALI and ALICE could, in theory, be combined to achieve better results and we are currently comparing HALI and ALICE on MNIST and CIFAR-10. We assure the reviewer that the results will be added to the final version of our paper.\n",
"******\nPlease note the adjusted review score after revisions and clarifications of the authors. \nThe paper was improved significantly but still lacks novelty. For context, multi-layer VAEs also were not published unmodified as follow-up papers since the objective is identical. Also, I would suggest the authors study the modified prior with marginal statistics and other means to understand not just 'that' their model performs better with the extra degree of freedom but also 'how' exactly it does it. The only evaluation is sampling from z1 and z2 for reconstruction which shows that some structure is learned in z2 and the attribute classification task. However, more statistical understanding of the distributions of the extra layers/capacity of the model would be interesting.\n******\n\nThe authors propose a hierarchical GAN setup, called HALI, where they can learn multiple sets of latent variables.\nThey utilize this in a deep generative model for image generation and manage to generate good-looking images, faithful reconstructions and good inpainting results.\n\nAt the heart of the technique lies the stacking of GANS and the authors claim to be proposing a novel model here.\nFirst, Emily Denton et. al proposed a stacked version of GANs in \"Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks\", which goes uncited here and should be discussed as it was the first work stacking GANs, even if it did so with layer-wise pretraining.\nFurthermore, the differences to another very similar work to that of the authors (StackGan by Huan et al) are unclear and not well motivated.\nAnd third, the authors fail to cite 'Adversarial Message Passing' by Karaletsos 2016, which has first introduced joint training of generative models with structure by hierarchical GANs and generalizes the theory to a particular form of inference for structured models with GANs in the loop. \nThis cannot be called concurrent work as it has been around for a year and has been seen and discussed at length in the community, but the authors fail to acknowledge that their basic idea of a joint generative model and inference procedure is subsumed there. In addition, the authors also do not offer any novel technical insights compared to that paper and actually fall short in positioning their paper in the broader context of approximate inference for generative models.\n\nGiven these failings, this paper has very little novelty and does not perform accurate attribution of credit to the community.\nAlso, the authors propose particular one-off models and do not generalize this technique to an inference principle that could be reusable.\n\nAs to its merits, the authors manage to get a particularly simple instance of a 'deep gan' working for image generation and show the empirical benefits in terms of image generation tasks. \nIn addition, they test their method on a semi-supervised task and show good performance, but with a lack of details.\n\nIn conclusion, this paper needs to flesh out its contributions on the empirical side and position its exact contributions accordingly and improve the attribution.",
"Thanks for your updates.\n\nI am satisfied with responses on two majors concerns: implementation details of conditionals and pseudocode procedure of the proposed algorithm.\n\nHowever, other minor concerns should be addressed in the final version:\n(1) Comparison of the reconstruction performance on standard datasets: MNIST and CIFAR, on which the quantitative results are reported in ALICE paper. I understand the authors have reported comparison on CelebA validation dataset (on which the quantitative results are NOT reported in ALICE paper). It seems suspicious not to report results on all of them, because it leaves the impression that the comparison is cherry-picked to benefit the proposed method. It is not necessary to be the best on all of them, just honestly benchmark the numbers to have a fair comparison for future research. One can easily combine HALI and the reconstruction regularization in ALICE to achieve better results.\n\n(2) The typo in reference is still NOT fixed.\n\nI raise my rating to weak acceptance, on the condition that I trust the author will fix the two minor concerns in the final version.\n\n",
"The paper incorporated hierarchical representation of complex, reichly-structured data to extend the Adversarially Learned Inference (Dumoulin et al. 2016) to achieve hierarchical generative model. The hierarchical ALI (HALI) learns a hierarchy of latent variables with a simple Markovian structure in both the generator and inference. The work fits into the general trend of hybrid approaches to generative modeling that combine aspects of VAEs and GANs. \n\nThe authors showed that within a purely adversarial training paradigm, and by exploiting the model’s hierarchical structure, one can modulate the perceptual fidelity of the reconstructions. We provide theoretical arguments for why HALI’s adversarial game should be sufficient to minimize the reconstruction cost and show empirical evidence supporting this perspective.\n\nThe performance of HALI were evaluated on four datasets, CIFAR10, SVHN, ImageNet 128x128 and CelebA. The usefulness of the learned hierarchical representations were demonstrated on a semi-supervised task on MNIST and an attribution prediction task on the CelebA dataset. The authors also noted that the introduction of a hierarchy of latent variables can add to the difficulties in the training. \n\nSummary:\n——\nIn summary, the paper discusses a very interesting topic and presents an elegant approach for modeling complex, richly-structured data using hierarchical representation. The numerical experiments are thorough and HALI is shown to generate better results than ALI. Overall, the paper is well written. However, it would provide significantly more value to a reader if the authors could provide more details and clarify a few points. See comments below for details and other points.\n\nComments:\n——\n1.\tCould the authors comment on the training time for HALI? How does the training time scale with the levels of the hierarchical structure?\n\n2.\tHow is the number of hierarchical levels $L$ determined? Can it be learned from the data? Are the results sensitive to the choice of $L$?\n\n3.\tIt seems that in the experimental results, $L$ is at most 2. Is it because of the data or because of the lack of efficient training procedures for the hierarchical structure?\n\n\n",
"We thank the reviewer for taking the time spent reviewing our paper.\n\nR: “How did the authors implement $p(x|z)$ and $q(z|x)$, or $p(z_l | z_{l+1})$ and $q(z_{l+1} | z_l )$? Please provide the details, as this is key to the reconstruction issues of ALI.”\nA: We apologize for this oversight and add an architecture section in the appendix.\n\nR: “Could the authors provide the pseudocode procedure of the proposed algorithm? In the current form of the writing, it is not clear what the HALI procedure is, whether (1) one discriminator is used to distinguish the concatenation of $(x, z_1, ..., z_L)$, or L discriminators are used to distinguish the concatenation of $(z_l, z_{l+1})$ at each layer, respectively?”\nA: HALI considers the variables $(x, z_1, ..., z_L)$ jointly. Following the reviewer's suggestion we added a pseudocode procedure to the paper.\n\nR: “Since one of the major claims for HALI is to provide better reconstruction with higher fidelity than ALI. Could the authors provide quantitative results on MNIST and CIFAR to demonstrate this? The reconstruction issues have first been highlighted and theoretically analyzed in ALICE [*], and some remedy has been proposed to alleviate the issue. Quantitative comparison on MNIST and CIFAR are also conducted. Could the authors report numbers to compare with them (ALI and ALICE)?”\n\nA: In order to quantitatively show that HALI yields better reconstruction than ALI on complex large scale dataset, we leveraged the multimodality of the CelebA dataset by computing the proportion of preserved attributes in the different reconstruction level as detected by a pre-trained classifier. The results are shown in the paper (Table 1). Following the reviewer suggestion, we show below the average euclidean error on reconstruction of the CelebA validation set using ALI, ALICE and HALI. We hope that, in conjunction with Table 1, the results below will offer a meaningful proxy to the difficult task of comparing reconstruction errors across models.\n\nModel | l2 error \n--------------------------------------\nVAE | 18.91 \nALI | 53.68 \nALICE(Adversarial) |92.56 \nALICE(l2) | 32.22 \nHALI(z_1) | 22.74 \nHALI(z_2) | 48.77 \n\n R: \"In ALICE, the authors confirmed further source of poor reconstructions of ALI in practice. It would be better to reflect the non-identifiability issues raised by ALICE in Introduction, rather than hiding it in Future Work as \"Although recent work designed to improve the stability of training in ALI does show some promise (Chunyuan Li, 2017), more work is needed on this front.\"\n A: Following the reviewer's comment, we now address (Chunyan Li, 2017) in the introduction instead of the conclusion.\n\n\n\n",
"We thank the reviewer for taking the time spent reviewing our paper.\n\nBefore we start addressing the reviewer concerns, we would like to stress that the focus of our paper is on providing an adversarially trained generative model with high fidelity reconstructions, useful latent representations, and unsupervised hierarchically organized content discovery. Moreover, we also point out that our approaches does not rely on stacking GANs.\n\nWe now answer the reviewer comments.\n\nR: “First, Emily Denton et. al proposed a stacked version of GANs in \"Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks\", which goes uncited here and should be discussed as it was the first work stacking GANs, even if it did so with layer-wise pretraining.”\nA: Although our work does not rely on Laplacian pyramids or stacking GANs as presented in Emily Denton Al. We agree that Emily Denton et. al is an important paper in the context of adversarially trained generative models and correct this oversight by citing the paper. \n\nR: “Furthermore, the differences to another very similar work to that of the authors (StackGan by Huan et al) are unclear and not well motivated.”\nA: “We respectfully point out that both the objectives, training procedure and focus of HALI are significantly different from those of StackGan. StackGan uses a two stage training procedure with distinct discriminators. HALI training is significantly streamlined as we use only one discriminator and one stage. Moreover, contrary to our work, StackGan does not consider the inference problem nor the quality of the learned representations. Moreover,\nFollowing the reviewer's suggestion, we update the related works section to better situate our work with respect to StackGan.”\n\nR: “And third, the authors fail to cite 'Adversarial Message Passing' by Karaletsos 2016, which has first introduced joint training of generative models with structure by hierarchical GANs and generalizes the theory to a particular form of inference for structured models with GANs in the loop. \nThis cannot be called concurrent work as it has been around for a year and has been seen and discussed at length in the community, but the authors fail to acknowledge that their basic idea of a joint generative model and inference procedure is subsumed there.”\nA: First we thank the reviewer for bringing Karaletsos 2016 to our attention and accordingly update our related works section. While Karaletsos 2016 provides an elegant framework to simultaneously train and provide inference for models defined on directed acyclic graphs, it does not offer any empirical investigation of the proposed model, nor does it consider reconstructions quality, nor the usefulness of the learned hierarchical representations to downstream tasks.\n\nKaraletsos 2016 and our work are significantly different in scope and focus. HALI does not fit in the framework of Karaletsos 2016.\n\nspecifically, Karaletsos 2016 matches joint distribution through the use of local discriminators acting on a given variable and its parents. Consider a two level markovian encoder/decoder architecture. Let x, z1, z2 be the variables produced by this architecture. Karaletsos 2016 would use 2 different discriminators, one for the pair (x, z1) and another for the pair (z1, z2). HALI uses one discriminator taking as input the triplet (x, z1, z2). 
Please note that, as a consequence of Jensen's inequality, the Karaletsos 2016 approach will always offer a looser bound on the true Jensen-Shannon divergence during training.\nFigure 1 in Appendix 5.1 of https://arxiv.org/pdf/1506.05751.pdf clearly shows the difference between the two approaches. \n\nWe thank the reviewer for the time spent reviewing our work. We have considered your comments in our revised paper. Given the improved paper and our comments, we hope you will reconsider your rating.\n",
"We thank the reviewer for taking the time spent reviewing our paper. \n\nWe now answer the reviewer’s comments and questions.\n\nR: “Could the authors comment on the training time for HALI? How does the training time scale with the levels of the hierarchical structure?”\nA: The number of hierarchical levels is determined empirically. We did not explore learning the number of Hierarchical levels from the data. In our experiments, we have noticed that additional levels come with decreased training stability.\n\nR:”How is the number of hierarchical levels $L$ determined? Can it be learned from the data? Are the results sensitive to the choice of $L$? It seems that in the experimental results, $L$ is at most 2. Is it because of the data or because of the lack of efficient training procedures for the hierarchical structure?”\nA: Limiting the number of hierarchical levels to 2 allowed for manageable training. Moreover, as the considered datasets come from computer vision, we tried to show that the first level of the hierarchy encoded local structure while the second encoded global properties of the image.\n",
"Before considering the specific comments of the reviewers, We wish to address the general sense that our work lacks novelty. While it's true that we do not offer a novel learning algorithm, we believe that our hierarchical extension of the ALI/BiGAN framework offers an important contribution that is extremely relevant to the current state of the literature on generative models. There are numerous papers (such as Li et al., 2017 -- \"ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching\") and even current ICLR submissions (such as \"IVE-GAN: INVARIANT ENCODING GENERATIVE ADVERSARIAL NETWORKS\")\nwhose focus is to modify the ALI objective function to improve image reconstruction. We feel our finding that an unmodified but hierarchical ALI model can dramatically improve over ALI reconstructions is timely and will likely have a real impact on future research into generative models. Our point is made by *not* proposing a novel learning algorithm. It is our hope that the reviewers will consider the utility of our contribution to the developing conversation that is evolving around this sorts of models. \n"
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
-1,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
-1,
3,
-1,
-1,
-1,
-1
] | [
"Sy-zrYT4f",
"r1s1wghEf",
"r1KKXM6Nf",
"SJjWbz6VM",
"iclr_2018_HyXNCZbCZ",
"HJMQW6P4M",
"r1s1wghEf",
"SJbtlZ5gf",
"SkI8jt8EG",
"iclr_2018_HyXNCZbCZ",
"HJ8Hbw6Xf",
"iclr_2018_HyXNCZbCZ",
"SkRgK-YxM",
"SJbtlZ5gf",
"SJbY1_5xG",
"iclr_2018_HyXNCZbCZ"
] |
iclr_2018_B1CNpYg0- | Learning to Compute Word Embeddings On the Fly | Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the ``long tail'' of this distribution requires enormous amounts of data.
Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task for reading comprehension, recognizing textual entailment and language modeling.
| rejected-papers | The pros and cons of the paper can be summarized as follows:
Pros:
* The method of combining together multiple information sources is effective
* Experimental evaluation is thorough
Cons:
* The method is a relatively minor contribution, combining together multiple existing methods to improve word embeddings. This also necessitates the model being at least as complicated as all the constituent models, which might be a barrier to practical applicability
As an auxiliary comment, the title and emphasis on computing embeddings "on the fly" are a bit puzzling. This is certainly not the first paper that is able to calculate word embeddings for unknown words (e.g. all the cited work on character-based or dictionary-based methods can do so as well). If the emphasis is calculating word embeddings just-in-time instead of ahead-of-time, then I would also expect an evaluation of the speed or memory benefits of doing so. Perhaps a better title for the paper would be "integrating multiple information sources in the training of word embeddings", or perhaps a more sexy paraphrase of the same.
Overall, the method seems to be solid, but the paper was pushed out by other submissions.
| train | [
"SJTAcW5xf",
"ryDrjZqxM",
"SJBZut5gM",
"H1MCKqz7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"This paper describes a method for computing representations for out-of-vocabulary words, e.g. based on their spelling or dictionary definitions. The main difference from previous approaches is that the model is that the embeddings are trained end-to-end for a specific task, rather than trying to produce generically useful embeddings. The method leads to better performance than using no external resources, but not as high performance as using Glove embeddings. The paper is clearly written, and has useful ablation experiments. However, I have a couple of questions/concerns:\n- Most of the gains seem to come from using the spelling of the word. As the authors note, this kind of character level modelling has been used in many previous works. \n- I would be slightly surprised if no previous work has used external resources for training word representations using an end-task loss, but I don’t know the area well enough to make specific suggestions \n- I’m a little skeptical about how often this method would really be useful in practice. It seems to assume that you don’t have much unlabelled text (or you’d use Glove), but you probably need a large labelled dataset to learn how to read dictionary definitions well. All the experiments use large tasks - it would be helpful to have an experiment showing an improvement over character-level modelling on a smaller task.\n- The results on SQUAD seem pretty weak - 52-64%, compared to the SOTA of 81. It seems like the proposed method is quite generic, so why not apply it to a stronger baseline?\n",
"\nThis paper illustrates a method to compute produce word embeddings on the fly for rare words, using a pragmatic combination of existing ideas:\n\n* Backing off to a separate decoder for rare words a la Luong and Manning (https://arxiv.org/pdf/1604.00788.pdf, should be cited, though the idea might be older).\n\n* Using character-level models a la Ling et al.\n\n* Using dictionary embeddings a la Hill et al.\n\nNone of these ideas are new before but I haven’t seen them combined in this way before. This is a very practical idea, well-explained with a thorough set of experiments across three different tasks. The paper is not surprising but this seems like an effective technique for people who want to build effective systems with whatever data they’ve got. \n",
"This paper examines ways of producing word embeddings for rare words on demand. The key real-world use case is for domain specific terms, but here the techniques are demonstrated on rarer words in standard data sets. The strength of this paper is that it both gives a more systematic framework for and builds on existing ideas (character-based models, using dictionary definitions) to implement them as part of a model trained on the end task.\n\nThe contribution is clear but not huge. In general, for the scope of the paper, it seems like what is here could fairly easily have been made into a short paper for other conferences that have that category. The basic method easily fits within 3 pages, and while the presentation of the experiments would need to be much briefer, this seems quite possible. More things could have been considered. Some appear in the paper, and there are some fairly natural other ones such as mining some use contexts of a word (such as just from Google snippets) rather than only using textual definitions from wordnet. The contributions are showing that existing work using character-level models and definitions can be improved by optimizing representation learning in the context of the final task, and the idea of adding a learned linear transformation matrix inside the mean pooling model (p.3). However, it is not made very clear why this matrix is needed or what the qualitative effect of its addition is.\n\nThe paper is clearly written. \n\nA paper that should be referred to is the (short) paper of Dhingra et al. (2017): A Comparative Study of Word Embeddings\nfor Reading Comprehension https://arxiv.org/pdf/1703.00993.pdf . While it in no way covers the same ground as this paper it is relevant as follows: This paper assumes a baseline that is also described in that paper of using a fixed vocab and mapping other words to UNK. However, they point out that at least for matching tasks like QA and NLI that one can do better by assigning random vectors on the fly to unknown words. That method could also be considered as a possible approach to compare against here.\n\nOther comments:\n - The paper suggests a couple of times including at the end of the 2nd Intro paragraph that you can't really expect spelling models to perform well in representing the semantics of arbitrary words (which are not morphological derivations, etc.). While this argument has intuitive appeal, it seems to fly in the face of the fact that actually spelling models, including in this paper, seem to do surprisingly well at learning such arbitrary semantics.\n - p.2: You use pretrained GloVe vectors that you do not update. My impression is that people have had mixed results, sometimes better, sometimes worse with updating pretrained vectors or not. Did you try it both ways?\n - fn. 1: Perhaps slightly exaggerates the point being made, since people usually also get good results with the GloVe or word2vec model trained on \"only\" 6 billion words – 2 orders of magnitude less data.\n - p.4. When no definition is available, is making e_d(w) a zero vector worse than or about the same as using a trained UNK vector?\n - Table 1: The baseline seems reasonable (near enough to the quality of the original Salesforce model from 2016 (66 F1) but well below current best single models of around 76-78 F1. The difference between D1 and D3 does well illustrate that better definition learning is done with backprop from end objective. 
This model shows the rather strong performance of spelling models – at least on this task – which again benefit from training in the context of the end objective. \n - Fig 2: It's weird that only the +dict (left) model learns to connect \"In\" and \"where\". The point made in the text between \"Where\" and \"overseas\" is perfectly reasonable, but it is a mystery why the base model on the right doesn't learn to associate the common words \"where\" and \"in\", both commonly expressing a location.\n - Table 2: These results are interestingly different. Dict is much more useful than spelling here. I guess that is because of the nature of NLI, but it isn't 100% clear why NLI benefits so much more than QA from definitional knowledge.\n - p.7: I was slightly surprised by how small vocabs (3k and 5k words) are said to be optimal for NLI (and similar remarks hold for SQuAD). My impression is that most papers on NLI use much larger vocabs, no?\n - Fig 3: This could really be drawn considerably better: make the dots bigger and their colors more distinct.\n - Table 3: The differences here are quite small and perhaps the least compelling, but the same trends hold.\n",
"We are grateful to the reviewers for their thorough and thoughtful reviews! Based on their feedback, we uploaded a revised version of the paper with a number of small changes.\nIn the rest of the rebuttal we address some of the concerns that reviewers raised. We conclude by restating the strengths of the paper.\n\nAnonReviewer 2 (R2) asked what the contribution of training a linear transformation of mean pooling is. Our understanding is that such a linear transformation helps to compensate for the difference between the trainable word embeddings and their complements that are obtained by averaging embeddings of the words from definitions. \n\nWe thank R2 for pointing at the paper by Dhingra et al, which proposes to use fixed random embeddings for OOV words. We updated the paper to include this reference. As mentioned by R2, this technique does not really cover the same ground, because it only allows to match exactly identical words against each other, whereas our method takes into account semantics of the OOV words, as expressed in the definitions. As suggested by R2, we carried out an additional experiment on SNLI to verify our reasoning, and we did not find fixed random embeddings helpful. This additional experiment has been mentioned in the paper.\n\nR2 also commented on the fact that in some of our experiments spelling is more helpful than in others. They also suggested that this contradicts our initial argument that semantics can not always be inferred from spelling. We respectfully disagree for the following reasons: (a) the fact that gains from using the spelling and the definition are complementary is aligned with our expectation that spelling is not sufficient, (b) how much different sources of auxiliary information help is highly dataset-specific, and high performance of spelling on SQuAD can be due to the fact that a lot of questions can be answered by looking for question words in the document (Weissenborn et al, 2017), not because semantics could be inferred from spelling, (c) our qualitative investigation shows that the dictionary does enable semantic processing that spelling does not permit, such as matching “where” and “overseas”, (d) NLI, arguably the most semantically demanding of the considered benchmarks, shows a clear superiority of a dictionary-enabled model. We also make similar arguments in the paper, for example in Sentence 2 of Section 5.\n\nWe thank R2 for asking about vocabulary sizes; indeed, we used a 20k vocabulary for MultiNLI, which fact is reflected in the new edition of the paper. We did however find that using more than 3k (5k) on SQuAD (SNLI) merely caused stronger overfitting. Vocabulary sizes may be larger in other papers due to the fact they rely on embeddings that were pretrained on huge corpora.\n\nWe thank AnonReviewer1 for their positive review of the paper. We note that we do not study character-level decoders in our work, focusing only on the model’s ability to understand OOV inputs. In the light of this, we are not sure that the work of Luong and Manning is a required citation. We do cite Ling et al as a prior work on computing word representations from characters.\n\nWe do not fully agree with AnonReviewer3 (R3) when they say “most of the gains seem to come from using the spelling of the word”. As mentioned above in this rebuttal, in our NLI experiments dictionary definitions were a lot more helpful than spelling, and besides, in other experiments it was shown that benefits from the spelling and dictionary definitions are complementary. 
With regard of the size of the datasets that we used, our language modelling experiments suggest that the proposed technique is only more helpful for small datasets. Lastly, while we completely agree that it would be interesting to apply the proposed method to SOTA models for SQuAD, we note that SOTA on this dataset has been improving rapidly over the last year. It’s challenging to keep up with SOTA in an on-going research project, and besides our approach is by no means model specific, which makes us expect that the reported results should transfer across all SQuAD models.\n\nTo conclude, we would like to reiterate our key arguments in favor of the acceptance of this work. The paper proposes a conceptually simple, yet novel method to tackle a very general problem of OOV words in natural language processing. The experimental results that we provide on QA, NLI and language modelling give the reader an idea of whether this method is applicable to their domain of interest. Under a reasonable assumption that NLI recognition was the most semantically demanding task out of the considered ones, the relevance of the proposed method will only grow as the progress in the field will allow using harder datasets and tasks. Lastly, we believe that our method will be especially helpful for practitioners working in technical domains, such as legal text and biological texts, where exact definitions should typically be available.\n"
] | [
5,
7,
5,
-1
] | [
4,
3,
4,
-1
] | [
"iclr_2018_B1CNpYg0-",
"iclr_2018_B1CNpYg0-",
"iclr_2018_B1CNpYg0-",
"iclr_2018_B1CNpYg0-"
] |
iclr_2018_SJvu-GW0b | Graph2Seq: Scalable Learning Dynamics for Graphs | Neural networks are increasingly used as a general purpose approach to learning algorithms over graph structured data. However, techniques for representing graphs as real-valued vectors are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but as we show in this paper, these methods have difficulty generalizing to large graphs. In this paper we propose Graph2Seq, an embedding framework that represents graphs as an infinite time-series. By not limiting the representation to a fixed dimension, Graph2Seq naturally scales to graphs of arbitrary size. Moreover, through analysis of a formal computational model we show that an unbounded sequence is necessary for scalability. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequence. Experimental evaluations of Graph2Seq on a variety of combinatorial optimization problems show strong generalization and strict improvement over state of the art. | rejected-papers | The reviewers agree that the problem being studied is important and relevant but express serious concerns. I recommend the authors to carefully go through the reviews and significantly scale up their experiments. | test | [
"H1n6BG_HG",
"SyhZ-YerG",
"HyNr86Ylz",
"Hy9gZ2CxM",
"SkD9M_NZf",
"HklBwVKmG",
"SydlD4FQz",
"SJ67IVFmG",
"SJl9hMKXG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"(1) Reg. length of sequence: \n\nSequence length depends on the graph, and in the worst case is exponential in # of edges. In any case, this is irrelevant to our (empirical) results. In fact, the theoretical fact of exponential dependence in # of edges only makes our empirical results more impressive; we only need to use sequence lengths roughly equal to the diameter of the graph to get our numerical results, which are favorably competitive with the best algorithms (not just prior neural network methods) for the problems studied.\n\n(2) Reg. the general remark:\n\nThe remark holds true only under deterministic sequence generation. \n\nUnder deterministic initialization and evolution, the sequence cannot be used to distinguish even non-isomorphic graphs, as we have showed in the proof of Proposition 1 in the paper. This is a clear limitation of deterministic sequence generation. We point this out in Section 3.1.\n\nHowever, if the evolution is random (by adding a random node label or noise), then the sequences are no longer identical even for isomorphic graphs, and as such cannot be used as a test for isomorphism.",
"I read the response and I do not feel I should change my review since mostly my concerns remain. \n\nThe authors did not acknowledge that their sequence representation can be exponential length, or if I am mistaken ?\n\nAs a general remark, if you could map a graph into a poly-size sequence that is invariant to labeling of the graph nodes and this sequence is invertible (i.e you can use it to reconstruct the graph) then you have solved graph isomorphism. \nThis is because two graphs would be isomorphic iff their sequences are identical. \n",
"This paper proposes to represent nodes in graphs by time series. This is an interesting idea but the results presented in the paper are very preliminary.\nExperiments are only conducted on synthetic data with very small sizes.\nIn Section 5.1, I did not understand the construction of the graph. What means 'all the vertices are disjoint'? Then I do not understand why the vertices of G_i form the optimum.",
"This paper proposes a novel way of embedding graph structure into a sequence that can have an unbounded length. \n\nThere has been a significant amount of prior work (e.g. d graph convolutional neural networks) for signals supported on a specific graph. This paper on the contrary tries to encode the topology of a graph using a dynamical system created by the graph and randomization. \n\nThe main theorem is that the created dynamical system can be used to reverse engineer the graph topology for any digraph. \nAs far as I understood, the authors are doing essentially reverse directed graphical model learning. In classical learning of directed graphical models (or causal DAGs) one wants to learn the structure of a graph from observed data created by this graph inducing conditional independencies on data. This procedure is creating a dynamical system that (following very closely previous work) estimates conditional directed information for every pair of vertices u,v and can find if an edge is present from the observed trajectory. \nThe recovery algorithm is essentially previous work (but the application to graph recovery is new).\n\nThe authors state:\n``Estimating conditional directed information efficiently from samples is itself an active area of research Quinn et al. (2011), but simple plug-in estimators with a standard kernel density estimator will be consistent.''\n\nOne thing that is missing here is that the number of samples needed could be exponential in the degrees of the graph. Therefore, it is not clear at all that high-dimensional densities or directed information can be estimated from a number of samples that is polynomial in the dimension (e.g. graph degree).\n\nThis is related to the second limitation, that there is no sample complexity bounds presented only an asymptotic statement. \n\nOne remark is that there are many ways to represent a finite graph with a sequence that can be decoded back to the graph (and of course if there is no bound on the graph size, there will be no bound on the size of the sequence). For example, one could take the adjacency matrix and sequentially write down one row after the other (perhaps using a special symbol to indicate 'next row'). Many other simple methods can be obtained also, with a size of sequence being polynomial (in fact linear) in the size of the graph. I understand that such trivial representations might not work well with RNNs but they would satisfy stronger versions of Theorem 1 with optimal size. \nOn the contrary it was not clear how the proposed sequence will scale in the graph size. \n\n\nAnother remark is that it seems that GCNN and this paper solve different problems. \nGCNNs want to represent graph-supported signals (on a fixed graph) while this paper tries to represent the topology of a graph, which seems different. \n\n\nThe experimental evaluation was somewhat limited and that is the biggest problem from a practical standpoint. It is not clear why one would want to use these sequences for solving MVC. There are several graph classification tasks that try to use the graph structure (as well as possibly other features) see eg the bioinformatics \nand other applications. Literature includes for example:\nGraph Kernels by S.V.N. Vishwanathan et al. \nDeep graph kernels (Yanardag & Vishwanathan and graph invariant kernels (Orsini et al.),\nwhich use counts of small substructures as features. 
\n\nThere are many benchmarks of graph classification tasks where the proposed representation could be useful, but significantly more validation work would be needed to make that case. \n\n",
"The paper proposes GRAPH2SEQ that represents graphs as infinite time-series of vectors, one for\neach vertex of the graph and in an invertible representation of a graph. By not having the restriction of representation to a fixed dimension, the authors claims their proposed method is much more scalable. They also define a formal computational model, called LOCAL-Gather that includes GRAPH2SEQ and other classes of GCNN representations, and show that GRAPH2SEQ is capable of computing certain graph functions that fixed-depth GCNNs cannot. They experiment on graphs of size at most 800 nodes to discover minimum vertex cover and show that their method perform much better than GCNNs but is comparable with greedy heuristics for minimum vertex cover.\n\nI find the experiments to be hugely disappointing. Claiming that this particular representation helps in scalability and then doing experiment on graphs of extremely small size does not reflect well. It would have been much more desirable if the authors had conducted experiments on large graphs and compare the results with greedy heuristics. Also, the authors need to consider other functions, not only minimum vertex cover. In general, lack of substantial experiments makes it difficult to appreciate the novelty of the work. I am not at all sure, if this representation is indeed useful for graph optimization problems practically.\n\n\n\n\n",
"Vertices are disjoint means the vertices do not have any edge between them. Since the vertices of G_o do not have any edge between themselves, selecting the vertices of G_i as a cover will ensure every edge of the graph is covered.\n",
"We first note that recovering the graph topology from the time-series is not the primary objective of Graph2Seq (we already have the graph as our input, there is no need to recover it). The main goal of Graph2Seq is to provide a representation framework for learning tasks (e.g., classification, optimization), over graphs that are not fixed. \n\nSupposing we have a candidate neural network framework (such as Graph2Seq) that can take in arbitrary sized graphs as input, and produce an output. Knowing whether such a framework could work well on graphs of any size is unfortunately a difficult question to answer. In this context, we have included Theorem 1 as a strong conceptual evidence towards the scalability of Graph2Seq. The fact that the entire graph topology can be recovered from the Graph2Seq representation (even if we ignore sample complexity and computation issues) suggests the time-series has enough information to recover the graph in principle. \n\nIndeed, there are many ways in which one could represent a graph as a sequence (with potentially shorter sequences). However, the issue with methods involving the adjacency matrix is they require a prior labelling of the graph nodes (to identify the individual rows and columns of the matrix), and it is not clear how to incorporate such labels into the neural network. This is perhaps why the adjacency matrix is itself not used as a representation in the first place, and methods like GCNN are necessary. What we are seeking is a label-free representation. ",
"We have conducted experiments on graphs of size up to 3200, and will include in our revision. Graph2Seq’s performance trend continues to hold at this size. We also tried larger graph sizes, but due to the large number of edges we ran into computational and memory issues (25k and 100k size graphs, which have 46 million and 4 billion edges respectively). Even doing greedy algorithms at this scale is computationally hard. As mentioned previously, our test graphs are not sparse and the current test graphs contain a large number of edges (hundreds of thousands to a million). We also reiterate that our training is on graphs of size 15, illustrating a generalization over a factor of 200. Evaluations for maximum independent set and max cut functions have been included in the appendix. \n",
"We thank the reviewers for the helpful comments. Please find our response to the issues raised below. \n\nOn motivation: \n\nWe are rather puzzled by the comment that the motivations are unclear. Using neural networks for graph structured data is a fast-emerging field and is of topical interest (massive attendance in a recent NIPS workshop on Non-Euclidean deep learning https://nips.cc/Conferences/2017/Schedule?showEvent=8735 serves to illustrate). Our paper directly addresses one of the key open problems in the area: how to design neural networks for graphs that can scale to graph inputs of arbitrary sizes and shapes. \n\nSuch a scalable solution may be required for a variety of reasons: (1) directly training on large instances may not be possible; (2) application specific training can be avoided, and trained algorithms can be used in variety of settings; or (3) a scalable algorithm may be easier to analyze, reason about and can potentially inspire advances in theory CS. Indeed, traditionally algorithms in CS have usually been of this flavor. However, to our best awareness, such an analog in deep learning for graphs has been critically missing. \n\nThe combinatorial optimization problems we have used in our evaluations (vertex cover, max cut, max independent set) are also interesting and many recent works (e.g. Bello et al ’17, Vinyals et al ’15, Dai et al ‘17) have considered these problems. Moreover, input instances in these problems capture the very essence of what makes representing signals over non-fixed graphs challenging: (i) the input graphs could have arbitrary topology, and (ii) the input graphs could have arbitrary size. The simplicity of these problems (in terms of vertex/edge features) allow us to focus on directly addressing these two scalability issues without worrying about dependencies arising from high-dimensional node/edge attributes. \n\nOn evaluations:\n\nWe have evaluated graphs of size up to 3200 and will include in our revision. Our test graphs are not sparse, and contain a large number of edges: e.g., a 3200 node Erdos-Renyi graph has 700,000 edges; a 3200 node random bipartite graph has 1.9 million edges. These graph sizes are consistent and well-above the sizes used in the neural networks combinatorial optimization literature (e.g., Learning combinatorial optimization algorithms over graphs, Dai et al, NIPS ’17 (up to 1200 nodes); Neural combinatorial optimization with reinforcement learning, Bello et al, ’17 (100 nodes); Pointer networks, Vinyals et al, NIPS ’15 (50 nodes)). Compared to the recent NIPS spotlight paper by Dai et al (which focuses on similar combinatorial problems), our results illustrate significant generalizations both in graph topology, and graph size.\n\nThe space of problems where the graph instances are not fixed is vast, and finding scalable learning representations for these applications remains a grand challenge. To our knowledge, this is also a longer-term project and a one-size-fits-all approach that solves all of those applications may not be possible. In this regard, our work presents an important first-step of recognizing, formalizing and understanding the key challenges involved, and also proposes a promising solution that directly addresses the key issues. \n"
] | [
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1
] | [
"SyhZ-YerG",
"SydlD4FQz",
"iclr_2018_SJvu-GW0b",
"iclr_2018_SJvu-GW0b",
"iclr_2018_SJvu-GW0b",
"HyNr86Ylz",
"Hy9gZ2CxM",
"SkD9M_NZf",
"iclr_2018_SJvu-GW0b"
] |
iclr_2018_rJv4XWZA- | Generating Differentially Private Datasets Using GANs | In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data. | rejected-papers | This paper presents an interesting idea: employ GANs in a manner that guarantees the generation of differentially private data.
The reviewers liked the motivation but identified various issues. Also, the authors themselves discovered some problems in their formulation; on behalf of the community, thanks for letting the readers know.
The discovered issues will need to be reviewed in a future submission. | test | [
"B1p11ROxz",
"H1Ae8Z5eM",
"ByaqVKoxz",
"BJgtYeuXM",
"BktqSZGXf",
"SyaRpcAlG",
"SJ15NnjeG",
"rkZnVH9yz",
"HJBcVfDkM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Summary: The paper addresses the problem of non-interactive differentially private mechanism via adversarial networks. Non-interactive mechanisms have been one of the most sought-after approaches in differentially private algorithm design. The reason is that once a differentially private data set is released, it can be used in any way to answer queries / perform learning tasks without worrying about the privacy budget. However, designing effective non-interactive mechanisms are notoriously hard because of strong computational lower bounds. In that respect, the problem addressed in this paper is extremely important, and the approach of using an adversarial network for the task is very natural (yet novel).\n\nThe main idea in the paper is to set up a usual adversarial framework with the generator and the discriminator, where the discriminator has access to the raw data. The information (in the form of gradients) is passed from the discriminator on to the generator via a differentially private channel (using Gaussian mechanism).\n\nPositive aspects of the paper: One main positive aspect of the paper is that it comes up with a very simple yet effective approach for a non-interactive mechanism for differential privacy. Another positive aspect of the paper is that it is very well-written and is easy to follow.\n\nQuestions: I have a few questions about the paper.\n\n1. The technical novelty of the paper is not that high. Given the main idea of using a GAN, the algorithms and the experiments are fairly straightforward. I may be missing something. I believe the paper can be strengthened by placing more emphasis on the technical content.\n\n2. I am mildly concerned about the effectiveness of the algorithm in the high dimensional setting. The norm of i.i.d. Gaussian noise scales roughly as \\sqrt{dimensions}, which may be too much to tolerate in most settings.\n\n3. I was wondering if there is a way to incorporate assumptions about sparsity in the original data set, to handle curse of dimensionality.\n\n4. I am not sure about the novelty of Theorem 2. Isn't it just post-processing property of differential privacy?",
"The paper proposes a technique for differentially privately generating synthetic data using GAN, and experimentally showed that their method achieves both high utility and good privacy.\nThe idea of building a differentially private GAN and generating differentially private synthetic data is very interesting. However, my main concern is the privacy aspect of the technique, as it is not explained clearly enough in the paper. There is also room for improvement in the presentation and clarity of the paper.\n\nMore details:\n- About the differential privacy aspect:\n The author didn't provide detailed privacy analysis of the Gaussian noise layer, and I don't find the values of the sensitivity (C = 1) provided in the answer to a public comment easy to see. Also, the paper mentioned that the batch size is 32 and the author mentioned in the comment that the std of the Gaussian noise is 0.7, and the number of epoch is 50 or 150. I think these values would lead to epsilon much larger than 8 (as in Table 1). However, in Section 5.2, it is said that \"Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and therefore, are data-dependent and are tighter than using normal composition theorems.\" I don't see clearly why privacy amplification is needed here, and why using moments accountant and privacy amplification can lead to data-dependent privacy loss.\n In general, I don't find the privacy analysis of this paper clear and detailed enough to convince me about the correctness of the privacy results. However, I am very happy to change my opinion if there are convincing details in the rebuttal.\n\n- About the presentation:\n As a paper proposing a differentially private algorithm, detailed and formal analysis of the privacy guarantees is essential to convince the readers. For example, I think it would be much better if there is a formal theorem showing the sensitivity of the Gaussian noise layer. And it would be better to restate (in Appendix 7.4) not only the definition of moments accountant, but the composition and tail bound, as well as the moments accountant for the Gaussian mechanism, since they are all used in the privacy analysis of this paper.\n",
"This paper considers the problem of generating differentially private datasets using GANs. To the best of my knowledge this is the first paper to study differential privacy for GANs.\n\nThe paper is fairly well-written but has several major weaknesses:\n-- Privacy parameter eps = 8 used in the experiments implies that the likelihood of any event can change by e^8 which is roughly 3000, which is an unacceptably high privacy loss. Moreover, even for this high privacy loss the accuracy on the SVHN dataset seems to drop a lot (92% down to 83%) when proposed mechanism is used.\n-- I didn't find a formal proof of the privacy guarantee in the paper. The authors say that the privacy guarantee is based on the moments accountant method, but I couldn't find the proof anywhere. The method itself is introduced in Section 7.4 but isn't used for the proof. Thus the paper seems to be incomplete.",
"\nDear readers,\n\nWe have discovered a problem related to dimensionality that invalidates privacy guarantees stated in the paper. We are currently working on solving the issue.",
"Dear authors,\n\nThe topic of this paper is interesting, but I have the following question: the authors showed that deterministic feed-forward neural network with (\\epsilon, \\delta)-differentially private layer is (\\epsilon, \\delta)-differentially private in Theorem 1 and 2. Therefore, it seems that there is no need to use \"generator\" for privacy since deterministic feed-forward neural network with noise layer already provides privacy guarantees. In other words, the reason why the authors introduced generator is not clear...\n\nIf I don't understand the paper correctly, please do not hesitate to let me know.\n\nThanks in advance.",
"Thank you for your questions.\nTo answer the first question, adding noise in each iteration is not a problem, as it does not introduce bias over time. The dimensionality of data would not be an issue either, because noise is added in an embedding space (and not in the original feature space) for each dimension independently, making the method agnostic to the dimensionality of original data. As reported in the paper, we have done experiments with the SVHN dataset, which has 3072-dimensional input vectors.\nMoving on to your second question. Thank you for drawing our attention to this paper. We were not aware of it and missed it when studying the related work.\nWhile the main ideas are indeed similar, there is a major difference in our method: the way of preserving privacy in GAN training. On the initial stages of our work, we explored the possibility of using differentially private SGD, but we found that achieving reasonable privacy bounds requires adding too much noise to gradients and makes GAN training much harder than it already is. The aforementioned paper confirms our findings by showing that the noise quickly overpowers the gradient (Fig. 1(e)) and that using the GAN after the final epoch is not sufficient for obtaining realistic data (Fig. 2(a)). Instead, we propose adding noise in the forward pass, which improves convergence properties and generated data quality.\nThis difference leads to a number of advantages. Most importantly, our technique does not require additional procedures for picking specific generator epochs or modifying optimisation methods. Moreover, it can be implemented by simply adding a noise layer to the discriminator, and we formally show that this is sufficient for achieving differential privacy.",
"Hi, interesting work. But the noise is added during each iteration and that would end up to be large. Did you run the algorithm for high dimension data?\nWhat's the difference between the proposed method and the one in this paper? https://www.biorxiv.org/content/biorxiv/early/2017/07/05/159756.full.pdf",
"Thank you for your question. We use the following parameter values in our experiments:\n1). C = 1, in all of the experiments.\n2). Number of training epochs for GAN is 150 for SVHN and 50 for MNIST. Note that we also use unrolling of the discriminator for 4 steps (reduced to 3 steps after 120 epochs) in generator updates to avoid mode collapse.\n3). Standard deviation of the Gaussian noise is generally set to 0.7. On SVHN, it is increased to 0.8 after 120 epochs to meet a tighter privacy bound.",
"Could you please specify which values have you used in your experiments for the following parameters?\n1) C - sensitivity of the preceding layer’s output\n2) number of training epochs\n3) magnitude of Gaussian noise (i.e, std deviation) injected per batch iteration\n\nThanks in advance"
] | [
6,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJv4XWZA-",
"iclr_2018_rJv4XWZA-",
"iclr_2018_rJv4XWZA-",
"iclr_2018_rJv4XWZA-",
"iclr_2018_rJv4XWZA-",
"SJ15NnjeG",
"iclr_2018_rJv4XWZA-",
"HJBcVfDkM",
"iclr_2018_rJv4XWZA-"
] |
iclr_2018_Sk7cHb-C- | Representing dynamically: An active process for describing sequential data | We propose an unsupervised method for building dynamic representations of sequential data, particularly of observed interactions. The method simultaneously acquires representations of input data and its dynamics. It is based on a hierarchical generative model composed of two levels. In the first level, a model learns representations to generate observed data. In the second level, representational states encode the dynamics of the lower one. The model is designed as a Bayesian network with switching variables represented in the higher level, and which generates transition models. The method actively explores the latent space guided by its knowledge and the uncertainty about it. That is achieved by updating the latent variables from prediction error signals backpropagated to the latent space. So, no encoder or inference models are used since the generators also serve as their inverse transformations.
The method is evaluated in two scenarios, with static images and with videos. The results show that the adaptation over time leads to better performance than with similar architectures without temporal dependencies, e.g., variational autoencoders. With videos, it is shown that the system extracts the dynamics of the data in states that highly correlate with the ground truth of the actions observed. | rejected-papers | This paper proposes a model which simultaneously learns the dynamics of sequential data together with a static latent representation. The idea and motivation are interesting and the results are promising.
However, all reviewers agree that the presentation needs much more work to convey the messages correctly and convincingly. Moreover, the reviewers question some design choices and the lack of discussion of the results. No rebuttal was provided.
| train | [
"r1OraDNgf",
"By8SeCYez",
"BypLdzcxf",
"S1pvRc7bM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an architecture and generative model for static images and video sequences, with the purpose of generating an image that looks as similar as possible to the one that is supplied. This is useful for for example frame prediction in video and detection of changes in video as a consequence to changes in the dynamics of objects in the scene.\nThe architecture minimizes the error between the generated image(s) and the supplied image(s) by refining the generated image over time when the same image is shown and by adapting when the image is changed. The model consists of three neural networks (F_Zµ, F_Zsigma, f_X|Z) and three multivariate Gaussian distributions P(S_t), P(Z_t) and P(X_t,Z_t) with diagonal covariances. The NNs do not change over time but they relate the three Gaussian distributions in different ways and these distributions are changing over time in order to minimize the error of the generated image(s).\n\nThe paper took a while to understand due to its structure and how it is written. A short overview of the different components of Figure 1 giving the general idea and explaining\n* what nodes are stochastic variables and NNs\n* what is trained offline/online\nIt would also help the structure if the links/arrows/nodes had numbers corresponding to the relevant equations defining the relations/computations. Some of these relations are defined with explicit equation numbers, others are baked into the text which makes it difficult to jump around in the paper when reading it and trying to understand the architecture.\n\nThere are also numerous language errors in the paper and many of them are grammatical. For example:\n Page 5, second paragraph: \"osculations\" -> \"oscillations\"\n Page 5, fourth paragraph: \"Defining .. is defined as..\"\n\nThe results seem impressive and the problem under consideration is important and have several applications. There is however not much in terms of discussion nor analysis of the two experiments.\nI find the contribution fairly significant but I lack some clarity in the presentation as well as in the experiments section.\nI do not find the paper clearly written. The presentation can be improved in several chapters, such as the introduction and the method section.\nThe paper seem to be technically correct. I did not spot any errors.\n\nGeneral comments:\n- Why is 2 a suitable size of S?\n- Why use two by two encoding instead of stride for the VAE baseline? How does this affect the experiments?\n- How is S_0 and the Z_0 prior set (initialized) in the experiments?\n- It would improve the readability of the paper, not least for a broader audience, if more details are added on how the VAE baseline architecture differ from the proposed architecture.\n\n- In the first experiment: \n-- How many iterations does it take for your method to beat VAE?\n-- What is the difference between the VAE basline and your approach that make VAE perform better than your approach initially (or even after a few iterations)? \n-- What affect does the momentum formulation have on the convergence rate (number of iterations necessary to reach VAE and your methods result at t=10)? \n\n- In the second experiment and Figure 5 in particular it was observed that some actions are clearly detected while others are not. It is mentioned that those that are not detected by the approach are more similar. 
In what sense are the actions more similar, which are the most prominent (from a humans perspective) such actions, what is making the model not detecting them and what can be done (within your approach) in order to improve or adjust the detection fidelity?\n\nPlease add more time labels under the time axis in Figure 4 and Figure 5. Also please annotate the figures at the time points where the action transitions are according to the ground truth.",
"This paper, to me, is a clear rejection from these basic observations:\n\nA *model* is not a computation graph and should never be presented that way. \nCalling a computation graph a Bayesian network without even writing down how *inference* can ever result in such a computation graph is a basic error. \nThe authors claim this: \"So, no encoder or inference models are used since the generators also serve as their inverse transformations.\" Well, then this is not a Bayesian network. \nThe authors spend a lot of time analyzing constant inputs theoretically, and I'm not sure why this is even relevant. ",
"Summary:\n\nThe paper proposed an Bayesian network model, realized as a neural network, that learns\n1. latent representation of observed data (images in the paper).\n2. dynamics of interaction in sequential data in the form of a linear dynamical system w.r.t. latent representation, controlled by secondary latent variables.\n\nThe model is evaluated in two scenarios:\n1. Static images on CelebA dataset, where it shows that iterative guided updates of the latent representation improve reconstruction quality, compared to VAE.\n2. Sequential data experiment, the authors show that the interaction states can be used for semantic action segmentation.\n\n-------------------------------------------------------\nPros:\n1. The proposed model is unsupervised, and it can iteratively improve the latent representation and consequently the systhesized output, given more observations at inference.\n\n2. The paper proposes novel strategies to update the latent distributions' parameters.\n\n-------------------------------------------------------\nCons:\n1. The problem fornulation with Z, X and S are not clearly defined at the beginning, the reviewer must read further into page 4 and 5 to understand. Brief description should be provided in/around Fig 1. Furthermore, the log likelihood loss is not clearly defined before Eq (1).\n\n2. The proposed updates to mu_z, sigma_z, mu_s and sigma_s in Eq (2,3,6,9) and their convergence properties as well as stability are not justified by actual experiments and analysis.\n\n3. The overall training procedure is unclear: do the hidden layer weights get updated given repeated observations in the case of static model?\n\n4. In the static image experiment, why the authors did not compare to at least GAN (or better if GLO [Bojsnowski et al. 2017] is included)?\nThe iterative updates clearly give the proposed model advantage over VAE. VAE also relies on reconstruction loss, hence the synthesized output are often blurry.\nGAN, which can generate high quality images, should provide a better benchmark. One can use the same method as this paper, backpropagating the reconstruction error of GAN generator to the input layer to find the suitable noise vector of an image, then synthesize the image with the generator from that noise input.\n\n5. One the two objectives of the paper, image reconstruction in sequence, is not evaluated in the dynamic experiment. The paper does not provide any analysis for A, Z and \\hat{X} in the dynamic setting.\nInstead, the secondary latent states S are used for sequence segmentation. Why is it not compared to HMM baseline at least, in that case?\nFurthermore, the paper does not provide a standard methodology to segment actions from video, but rather, the reviewer must look at the plot in Fig 4 & 5 and read the description in page 8 to see the correspondence between the variation of S and the action switch in the video.\nIn addition, the segmentation is only carried out on one video in the paper.\nMoreover, the paper only experiments with S of length 2. Given the limited evaluation, the authors can try and report the results with different lengths of S (e.g. 1 or 3, 4).",
"The paper proposes a hierarchical probabilistic model that learns both static representations and the dynamics of the data. The model iteratively updates its latent representations of the data in order to improve its generative power. The model is applicable to both static images (iterative improvements of the samples) and videos (predictive coding like repesentations).\n\nPros:\n\n-- the motivation for the work and its connections to the cognitive/philosophical models of concepts and predictive coding is very interesting\n-- the iterative improvements in the celebA samples in Fig. 3 and the corresponding improvements in the log-likelihood in Tbl. 1 vs the vanilla VAE baseline are promising and suggest that the approach has potential\n\nCons:\n\n-- The major problem with this paper in my opinion is that the methods section is very confusing:\n 1) The section is too brief and there is absolutely no description of the model until page 4\n 2) The figures are hard to understand and the annotations are not informative (e.g. what is the difference between Fig.1 and Fig.2?) \n 3) The notation is unconventional and keeps changing (e.g. the generator is referred to as either \\varphi_A, \\varphi_X, \\varphi_X(Z_t), X_t|Z_t, or \\mu_X; \\sigma_X... the dimensionality of the image is denoted as i, N * c or Nc... I can go on). \n 4) The rescaling of the latent parameters seems engineered and arbitrary (e.g. \\beta scaling factor in Eq. 8 is chosen so that the sigmoid reaches 0.75 when the value is 0.5\\sigma of the threshold).\n\nDue to the points above I failed to fully understand the model despite trying hard to do so. In particular, I did not understand the most important part of the paper addressing the iterative update of the latents vs backprop update of the generative weights. \n\nMinor points:\n-- The introduction is too long and repetitive. The space saved should be used to describe the model more precisely.\n-- The parametrisation of S_t should be described when it is first introduced, not 2 paragraphs later.\n-- How does an inner product of two vectors result in a matrix (Sec. 3.3)?\n-- GANs also do not have an encoder network (despite what the authors claim in Sec. 4.1) and should be used as a baseline\n-- Why does the VAE baseline have a different decoder architecture than the proposed model?\n-- What is the pre-processing done for CelebA?\n-- What is the ground truth that was supposed to be matched by \\mu_S_t in the dynamic dataset?\n-- Figs. 4-5 are hard to understand. What do the different colours of the lines mean? The time stamps where the behaviours are changing should be marked in the plot (not just described in the text).\n\nTo conclude, the authors are advised to shorten the introduction and literature review sections and use the extra space to re-write and expand the methods section to make it very clear how their model works using the standard notation used in the literature. The results section detailing the dynamic setup of their approach needs to be made more clear as well. In the current form the paper is not ready for publication.\n\n\n\n\n\n\n"
] | [
6,
3,
4,
4
] | [
3,
3,
4,
4
] | [
"iclr_2018_Sk7cHb-C-",
"iclr_2018_Sk7cHb-C-",
"iclr_2018_Sk7cHb-C-",
"iclr_2018_Sk7cHb-C-"
] |
iclr_2018_BkS3fnl0W | Semi-supervised Outlier Detection using Generative And Adversary Framework | In a conventional binary/multi-class classification task, the decision boundary is supported by data from two or more classes. However, in a one-class classification task, only data from one class are available. To build a robust outlier detector using only data from a positive class, we propose a corrupted GAN (CorGAN), a deep convolutional Generative Adversary Network requiring no convergence during training. In the adversarial process of training CorGAN, the Generator is supposed to generate outlier samples for the negative class, and the Discriminator as a one-class classifier is trained to distinguish data from training datasets (i.e. positive class) and generated data from the Generator (i.e. negative class). To improve the performance of the Discriminator (one-class classifier), we also propose several techniques to improve the model. The proposed model outperforms the traditional method PCA + PSVM and the solution based on Autoencoder. | rejected-papers | This paper presents a framework where GANs are used to improve detection of outliers (in this context, instances of the “background class”). This is a very interesting and, as demonstrated, promising idea. However, the general feeling of the reviewers is that more work is needed to make the technical and evaluation parts convincing. Suggestions for further work in this direction include: theoretical analysis, a better presentation of the manuscript and, most importantly, a stronger experimental section. | train | [
"HJDhuQtlM",
"Sks502_lG",
"Hy2FsQKef",
"B1ufhqoQz",
"By7-yAXGG",
"HJSWKmQGf",
"HJT5g4XzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The idea of using GANs for outlier detection is interesting and the problem is relevant. However, I have the following concerns about the quality and the significance:\n- The proposed formulation in Equation (2) is questionable. The authors say that this is used to generate outliers, and since it will generate inliers when convergence, the authors propose the technique of early stopping in Section 4.1 to avoid convergence. However, then what is learned though the proposed formulation? Since this approach is not straightforward, more theoretical analysis of the proposed method is desirable.\n- In addition to the above point, I guess the expectation is needed as the original formulation of GAN. Otherwise the proposed formulation does not make sense as it receives only specific data points and how to accumulate objective values across data points is not defined.\n- In experiments, although the authors say \"lots of datasets are used\", only two datasets are used, which is not enough to examine the performance of outlier detection methods. Moreover, outliers are artificially generated in these datasets, hence there is no evaluation on pure real-world datasets. To achieve the better quality of the paper, I recommend to add more real-world datasets in experiments.\n- As discussed in Section 2, there are already many outlier detection methods, such as distance-based outlier detection methods, but they are not compared in experiments.\n Although the authors argue that distance-based outlier detection methods do not work well for high-dimensional data, this is not always correct.\n Please see the paper:\n -- Zimek, A., Schubert, E., Kriegel, H.-P., A survey on unsupervised outlier detection in high-dimensional numerical data, Statistical Analysis and Data Mining (2012)\n This paper shows that the performance gets even better for higher dimensional data if each feature is relevant.\n I recommend to add some distance-based outlier detection methods as baselines in experiments. \n- Since parameter tuning by cross validation cannot be used due to missing information of outliers, it is important to examine the sensitivity of the proposed method with respect to changes in its parameters (a_new, lambda, and others). Otherwise in practice how to set these parameters to get better results is not obvious.\n\n* The clarity of this paper is not high as the proposed method is not well explained. In particular, please mathematically formulate each proposed technique in Section 4.\n\n* Since the proposed formulation is not convincing due to the above reasons and experimental evaluation is not thorough, the originality is not high.\n\nMinor comments:\n- P.1, L.5 in the third paragraph: architexture -> architecture\n- What does \"Cor\" of CorGAN mean?\n\nAFTER REVISION\nThank you to the authors for their response and revision. Although the paper has been improved, I keep my rating due to the insufficient experimental evaluation.",
"The idea of the paper is to use a GAN-like training to learn a novelty detection approach. In contrast to traditional GANs, this approach does not aim at convergence, where the generator has nicely learned to fool the discriminator with examples from the same data distribution. The goal is to build up a series of generators that sample examples close the data distribution boundary but are regarded as outliers. To establish such a behavior, the authors propose early stopping as well as other heuristics. \n\nI like the idea of the paper, however, this paper needs a revision in various aspects, which I simply list in the following:\n* The authors do not compare with a lot of the state-of-the-art in outlier detection and the obvious baselines: SVDD/OneClassSVM without PCA, Gaussian Mixture Model, KNFST, Kernel Density Estimation, etc\n* The model selection using the AUC of \"inlier accepted fraction\" is not well motivated in my opinion. This model selection criterion basically leads too a probability distribution with rather steep borders and indirectly prevents the outlier to be too far away from the positive data. The latter is important for the GAN-like training.\n* The experiments are not sufficient: Especially for multi-class classification tasks, it is easy to sample various experimental setups for outlier detection. This allows for robust performance comparison. \n* With the imbalanced training as described in the paper, it is quite natural that the confidence threshold for the classification decision needs to be adapted (not equal to 0.5)\n* There are quite a few heuristic tricks in the paper and some of them are not well motivated and analyzed (such as the discriminator training from multiple generators)\n* A cross-entropy loss for the autoencoder does not make much sense in my opinion (?)\n\n\nMinor comments:\n* Citations should be fixed (use citep to enclose them in ())\n* The term \"AI-related task\" sounds a bit too broad\n* The authors could skip the paragraph in the beginning of page 5 on the AUC performance. AUC is a standard choice for evaluation in outlier detection.\n* Where is Table 1?\n* There are quite a lot of typos.\n\n*After revision statement*\nI thank the authors for their revision, but I keep my rating. The clarity of the paper has improved but the experimental evaluation is lacking realistic datasets and further simple baselines (as also stated by the other reviewers)",
"This paper addresses the problem of one class classification. The authors suggest a few techniques to learn how to classify samples as negative (out of class) based on tweaking the GAN learning process to explore large areas of the input space which are out of the objective class.\n\nThe suggested techniques are nice and show promising results. But I feel a lot can still be done to justify them, even just one of them. For instance, the authors manipulate the objective of G using a new parameter alpha_new and divide heuristically the range of its values. But, in the experimental section results are shown only for a single value, alpha_new=0.9 The authors also suggest early stopping but again (as far as I understand) only a single value for the number of iterations was tested. \n\nThe writing of the paper is also very unclear, with several repetitions and many typos e.g.:\n\n'we first introduce you a'\n'architexture'\n'future work remain to'\n'it self'\n\nI believe there is a lot of potential in the approach(es) presented in the paper. In my view a much stronger experimental section together with a clearer presentation and discussion could overcome the lack of theoretical discussion.\n",
"Thank you again for the review comments. We take some useful suggestions from them and revise our paper in the following aspects:\n1. formulate and express our proposed improved techniques mathematically\n2. clarify the justifications of each proposed technique\n3. provide more evaluation measures (ROC and AUC) and a stronger discussion about experiment results\n4. correct the typos and polish some expressions",
"Thank you for your comments, our response to your questions are as follows:\n* The authors do not compare with a lot of the state-of-the-art in outlier detection and the obvious baselines: SVDD/OneClassSVM without PCA, Gaussian Mixture Model, KNFST, Kernel Density Estimation, etc\n\n1) SVDD/OneClassSVM without PCA\nUsing GAN Framework, we aim to build an one-class classifier that demonstrates robust performance in high-dimensional. There are so many irrelevant attributes in high dimensional (e.g. Image data). Therefore, it makes more sense to apply PCA to the data. Without PCA, the OneClassSVM shows a bad performance, AUC score = 0.6830.\n\n2) Gaussian Mixture Model, KNFST, Kernel Density Estimation\nAll the methods you named belong to the Density-based approach. The approach is known for the curse of dimensionality. It requires a very large number of training samples in high-dimensional space. We only compare our methods with the OneClassSVM method and the Autoencoder-based method. To our knowledge, They are respectively the state-of-the-arts of the Boundary-based approach and the Reconstruction Error-based approach. \n\n* The model selection using the AUC of \"inlier accepted fraction\" is not well motivated in my opinion. This model selection criterion basically leads too a probability distribution with rather steep borders and indirectly prevents the outlier to be too far away from the positive data. The latter is important for the GAN-like training.\n\nWe propose a measure, called positively biased AUC, to select the model. If the generated outliers are far away from the positive class, it is easier for the Discriminator to distinguish the inliers and outliers. The Discriminator can show a good score. In my opinion, contrary to what you said, the measure indirectly brings the outlier far away from the positive data.\n \nThe Generator generates outlier far away from the positive class at the beginning of training because of random initialisation. With the training going on, the new target of the Generator will drive it to generate negative data around the positive class. The generated data covers a large space. The indirect influence of the measure is too small, compared to the impact of the Generator on the generated outliers. In addition, the measure is not supposed to select the optimal model, but near optimal. That is why we call it positively biased AUC.\n\n* The experiments are not sufficient: Especially for multi-class classification tasks, it is easy to sample various experimental setups for outlier detection. This allows for robust performance comparison. \n\nEvery multi-class classification task can be transferred into multiple binary classification tasks. We can demonstrate the performance of the binary classifier on multi-class classification tasks. But it is not a good idea for the one-class classifier. The one-class classifier should be robust against not only other classes in the same dataset but also any other samples in other datasets, even noise. I hope I understand your question correctly.\n\n* With the imbalanced training as described in the paper, it is quite natural that the confidence threshold for the classification decision needs to be adapted (not equal to 0.5).\n\nOur task setting is one-class classification, in which we have no any outlier. It is an extreme fall of the imbalanced training. You are right, the best threshold is by no mean 0.5. But, in our paper, we do not use any confidence threshold. 
We evaluate the performance of the outlier detector with AUC score. The score takes all the confidence thresholds into consideration and gives an overall performance of the detector.\n\n* There are quite a few heuristic tricks in the paper and some of them are not well motivated and analyzed (such as the discriminator training from multiple generators)\n\nWe only justify three ideas, namely, Attaching more importance to generated data, Specifying a new objective for the Generator and Combining the previously generated outliers. They are CorGAN, CorGAN2 and CorGAN3 in the experiments respectively. The proposed Early Stopping is applied to all of them and it serves the model selection. The two ideas in section future work are not justified at all in our paper. That is why we put them in the future work section. \n\n* A cross-entropy loss for the autoencoder does not make much sense in my opinion (?)\n\nBoth MSE and Cross-Entropy make sense for the Autoencoder. See the link http://deeplearning.net/tutorial/dA.html#daa\n\nThank you for the minor comments. We definitely should fix them. By the way, the Table 1 is below the Figure 3 on the page 6.\n\nIf we misunderstand some questions or you have any other question, just let us know. We are very glad to further discuss with you.",
"Thank you for your comments and the useful suggestions. Our response to your questions is as follows:\n\n1) As you point out, we just show the experiment result for a single value alpha_new=0.9. Actually, we justify the choice in the Table 1. \n\nalpha_new = 1 → The objective of Generator is the exact same as in the original GAN. See Equation 1 in the paper:\n\"Goodfellow, Ian, et al. \"Generative adversarial nets.\" Advances in neural information processing systems. 2014.\"\n\nalpha_new ∈ (∼ 0.9, 1) → The adjustment of the objective of Generator is proposed to improve the training process of GANs. See section 3.4 One-sided label smoothing in the paper:\n\"Salimans T, Goodfellow I, Zaremba W, et al. Improved techniques for training GANs[C]//Advances in Neural Information Processing Systems. 2016: 2234-2242.\"\n\nIn those two cases, the training process will converge, which is what we want to avoid.\n\nalpha_new ∈ (0, ∼ 0.5) → The Generator has similar objective as the Discriminator. It will tend to generate data, from which the Discriminator is able to distinguish training data. That is to say that all the generated data distribute far from the positive class. \n\nWhat we want is that the Generator is capable of generating outliers that explore as a large place as possible (both the place far from and around the positive class), especially the place near the positive class. The values in the interval (∼ 0.5, ∼ 0.9) are good candidates. If we assign one value from this interval to alpha_new, the Generator will generate data far from positive class at the beginning of the training phase because of the random initialization. After several training epochs, it will generate data that distribute around the positive class.\n\nSince we aim to build a robust outlier detector against any kind of outliers (including the ones distribute near the positive class), we choose 0.9 for the Generator so that the generated outliers can cover more space around the positive class to form a tight boundary.\n\nThat is how the choice is justified. The above explanations are not only our intuitive understanding. They are also supported by experiments. Since the corresponding experiments are not the core part of our paper, we do not incorporate them in the experiment section in our paper.\n\nWhat is more, we may need to clarify, why not 0.91 or 0.89. The same problem in the paper [Improved techniques for training GANs]: why not take the value in the interval (0.91, 1) or (0.89, 1)to improve the training process of GAN? We need more details about the theoretical foundation, which is mentioned in our future work.\n\n2) Early stopping is proposed to avoid the convergence. There are two ways to implement Early Stopping:\n a): Stop the training process at a certain epoch, as you understand. \n b): As we described in the section 4.1 Early stopping in our paper, we do not stop the training at a certain epoch. We just save the best model we get, with the training process going on. At the end, we take the most recent saved model as the final model. It is similar to the model selection. The measure we use to select model is the score (positive biased AUC), which does not require negative samples in the validation dataset. The measure is proposed in our paper (see Figure 3).\n\n3) You are right, we definitely should fix the typos and correct the expression of some sentences.\n\nLooking forward to a further discussion with you!",
"Thank you for your comments, the answers to your questions are following:\n - The proposed formulation in Equation (2) is questionable. ... ... Since this approach is not straightforward, more theoretical analysis of the proposed method is desirable. What does \"Cor\" of CorGAN mean?\n\nThe main idea of the paper is to corrupt the GAN with proposed technique so that the GAN does not converge. In the case of the corrupted GAN, the Generator is able to keep generating outliers. That is why we call it CorGAN (corrupted GAN). \n\nOne of the techniques is Early Stopping. If we take the Discriminator before the convergence, all the generated samples used to train D are outliers. \n\nAnother technique is specifying a new objective for the Generator. Alpha_new in the equation 2 is a variable. It is not necessary 1 as in the original GAN. If its value is from the interval (0, 0.9), the GAN is not capable of getting converged. Without convergence, the Generator will generate only outliers. See more details in response to the first review comment.\n\n- In addition to the above point, ... ... how to accumulate objective values across data points is not defined. \n\nOur built Generator is supposed to generate data far from the positive class and also data around the positive class. That is to say that the Generator should explore as a large space as possible. We care for the distribution of all the generated samples. We are not interested in where the training process ends exactly. So the expectation of the formulation is not that relevant. That is also a reason why we propose combining previously generated outliers to training the Discriminator. The accumulation of the generated outliers is implemented by the Reservoir Sampling Algorithm (see the last paragraph in the section 4.4 in our paper).\n\n- In experiments, although the authors say \"lots of datasets are used\", ... ... To achieve the better quality of the paper, I recommend to add more real-world datasets in experiments. \n\nWe aim to build a robust outlier detector against any kind of outliers. Actually, we build just one outlier detector and generate outliers with help of only one training dataset, i.e. digit of 9 in MNIST. In order to test the robustness of the outlier detector, we use three outlier test datasets. The first one is digits of 0-8 in MNIST to test how tight the boundary is. The second one is a real world image dataset. We test the performance of the detector trained on MNIST on a real-world dataset (CIFAR10). In addition, we also demonstrate the robustness of the detector against not only real-world images but also noise. Therefore, we artificially create images. The values of their pixels are random or subject to various distributions. \n\n- As discussed in Section 2, there are already many outlier detection methods, ... ... Please see the paper: \n-- Zimek, A., Schubert, E., Kriegel, H.-P., A survey on unsupervised outlier detection in high-dimensional numerical data, Statistical Analysis and Data Mining (2012) \n... ...\nrecommend to add some distance-based outlier detection methods as baselines in experiments. \n\nIn the second paragraph in section introduction, we point out several names for the task setting described in our paper, namely, outlier detection, Novelty Detection Concept Learning and One-class classification. They are used interchangeably in our paper. However, they may have specific meaning in other works.\n \nThe distance-based method described in the above paper is an outlier detection method. 
It aims to find the samples that differ from most other samples. Our paper focuses on novelty detection, which tries to find samples different from the given positive samples. When a large amount of similar outliers exists in the task, these two methods give two different results. The first method does not work anymore. For instance, in our experimental setting, the outliers are similar (from the same class). The distance-based methods, such as DBSCAN, OPTICS and LOF, will identify almost all outliers as inliers.\n\nBut you do make a good point here: we can adapt their approaches in our task setting (novelty detection), which may be an idea for a new paper, in my opinion. \n\n- Since parameter tuning by cross-validation cannot be used due to missing information of outliers, ... ... how to set these parameters to get better results is not obvious.\n\nGenerally, in an OCC task, we have no available outliers to do cross-validation. In our paper, we propose a measure to select models, which does not require outliers in the validation dataset. It is called positively biased AUC score (see Figure 3 in the paper). The model selected by this measure is not necessarily optimal, but near optimal.\n\n\n"
] | [
4,
4,
3,
-1,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkS3fnl0W",
"iclr_2018_BkS3fnl0W",
"iclr_2018_BkS3fnl0W",
"iclr_2018_BkS3fnl0W",
"Sks502_lG",
"Hy2FsQKef",
"HJDhuQtlM"
] |
iclr_2018_HkinqfbAb | Automatic Parameter Tying in Neural Networks | Recently, there has been growing interest in methods that perform neural network compression, namely techniques that attempt to substantially reduce the size of a neural network without significant reduction in performance. However, most existing methods are post-processing approaches in that they take a learned neural network as input and output a compressed network by either forcing several parameters to take the same value (parameter tying via quantization) or pruning irrelevant edges (pruning) or both. In this paper, we propose a novel algorithm that jointly learns and compresses a neural network. The key idea in our approach is to change the optimization criteria by adding k independent Gaussian priors over the parameters and a sparsity penalty. We show that our approach is easy to implement using existing neural network libraries, generalizes L1 and L2 regularization and elegantly enforces parameter tying as well as pruning constraints. Experimentally, we demonstrate that our new algorithm yields state-of-the-art compression on several standard benchmarks with minimal loss in accuracy while requiring little to no hyperparameter tuning as compared with related, competing approaches. | rejected-papers | This paper presents yet another scheme for weight tying for compressing neural networks, which looks a lot like a less Bayesian version of recent related work, and gets good empirical results on realistic problems.
This paper is well-executed and is a good contribution, but falls below the bar on
1) Discovering something new and surprising, except that this particular method (which is nice and simple and sensible) works well. That is, it doesn't advance the conversation or open up new directions.
2) Potential impact (although it might be industrially relevant)
Also, the title is a bit overly broad given the amount of similar existing work. | train | [
"S13TkvpQz",
"S14u6LpQG",
"H1J8TLamG",
"HyrzaIp7G",
"Bky9cL_eG",
"Byk0Q2_xz",
"Hy-t_ztgG"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have updated the draft to address the reviewers concerns (plus rearranging to still keep it within the 8 page limit). Most notably, we have added additional experiments on VGG-16 and improved the clarity of the presentation by adding additional details in both the algorithm description and experimental sections.",
"Thanks for the feedback. In table 2 of our revised paper, we added a new experiment that compares with Bayesian compression on VGG16 on CIFAR-10. This is comparable to major existing work (that we’re aware of) on compressing neural networks. In table 2 we also compare to Deep Compression and GMM prior at the same level of classification accuracy, to address concerns about the accuracy loss in our method.\n\nAs with most machine learning methods, some tuning may be needed for optimal performance. In our experiments we simply tried K-1 (number of non-zero parameter values) on a log scale of 4, 8, 16…, and settled on the first K that gave acceptable accuracy loss. The k-means and l1 penalty factors, lamba_1 and lambda_2, were tuned in [1e-6, 1e-3] with a combination of grid search and manual tuning. We believe this is less tuning compared to probabilistic methods like GMM or scaled mixture priors using many more parameters/hyperparameters that are less intuitive and often require careful initialization. In fact, the main reason why we couldn’t compare kmeans with GMM prior ourselves on larger datasets was the latter required significantly more computation and tuning (we were often unable to get it to work). \n\nRegarding additional comments:\na). Fixed in revised paper.\nb). Fixed in revised paper. \nc). See section 3 of revised paper; as described, at the beginning of the 1D kmeans algorithm, we sort the parameters on the number line and initialize K partitions corresponding to the K clusters; E-step then simplifies to re-drawing the partitions given partition means (requiring K binary-searches), and M-step recalculates partition means (partition sums/sizes are maintained for efficiency).\nd). Fixed in revised paper; we show the typical training dynamics in figures 3 and 4.\ne). Thanks for catching this; we used the wrong image where K=7. See the correct one with K=8 in revised paper.\nf). You mean methods like Optimal Brain Damage and Optimal Brain Surgeon? Admittedly the kmeans distortion is only a rough surrogate to the actual quantization loss, but we found it sufficient for compression; the fact that our method doesn’t use more sophisticated techniques such as second order information means it adds very little overhead to training. Again we’re striving for simplicity and efficiency.\ng). By sparsity we meant the fraction of parameters that are zero.\nh). Fixed in revised paper; added new discussion in results section about the observed structured sparsity (entire units and filters being pruned); we observed this with l1 alone, however, but to a lesser extent and with more accuracy loss.\ni). Fixed in revised paper.\n",
"1. Please see the revised paper for a clearer discussion of our method. We use kmeans prior alone in our investigation of automatic parameter tying (e.g., in the sections on algorithmic behavior and generalization effect); we always use the L1 norm together with kmeans for the purpose of compression, since quantization alone is not sufficient for state of the art compression (e.g., for 32-bit floats, K=2 roughly gives compression rate of 32; to compress well over 100 times would require K=1, i.e. the entire network using a single parameter value which is infeasible). \n\n2. The answer is no, as explained above. \n\n3. As we discussed in section 3 of revised paper, we implemented the sparse encoding scheme proposed by Han, following the detailed appendix in Ullrich. Basically network parameters are stored as CSR matrices (we append the bias from each layer as extra an column to the layer’s weight matrix); the CSR data structures (indices for the position of non-sparse entries, as well as assignment indices) are further compressed with standard Huffman coding.\n\n4. We haven’t gotten an opportunity to investigate sequential models like LSTMs, but we don’t think anything particular about them may prevent our method from being used. It might require some more tuning to make sure the pull from the cluster centers aren’t strong enough to overpower the gradient signals from data loss, and might require initializing to a pre-trained solution rather than from scratch. That said, we’ve found our method to be rather agnostic towards the nature of different parameters in the network (e.g. weights/biases in all conv/fc layers, along with batch normalization parameters), so it should be able to handle things like memory cells/gates.\n",
"Please see the revised paper for a clearer discussion of our proposed method. L1 penalty is indeed used for soft-tying in the sparse-formulation, and yes the hard-tying stage fixes cluster assignments, which is essentially the same as the Hashed Net method except that the assignments are learned from the soft-tying stage, instead of being random. \nFollowing our discussion in section 3 and 4.1, randomly (hard) tying parameters corresponds to restricting the solution to a random, low dimensional linear subspace; for (especially deep) neural networks that are already hard to train, this extra restriction would significantly hamper learning. The idea is illustrated by figure 5(a) with smaller K and 5(b) for t=20000. Hashed Net effectively uses a very large K with random tying, which poses little/no problem to training, but a larger K would result in degraded compression efficiency for our method. We found soft-tying to be crucial in guiding the parameters to the “right” linear subspace (determined by the assignments, which is itself iteratively improved), such that the projection of parameters onto it is minimized, leading to small accuracy loss when switching to hard-tying; so in this sense we don’t think it’s the same as pre-training the model. That said, starting from a pre-trained solution does seem to make the soft-tying phase easier.\n\nThe reference error (no regularization) VGG11 on CIFAR10 in our experiment was about 21%, the same as training with sparse APT from scratch; we apologize for failing to mention that. We replaced this part of experiment with VGG16 (15 million parameters) in the revised paper, to compare with Bayesian compression (Louizos et al. 2017). We agree that the number of parameters (and more generally the architecture) does influence the difficulty of optimization and extent to which a network can be compressed. \n\nHopefully we made it clear in the revised paper that the kmeans prior for quantization alone is not enough for compression, e.g .K=2 (storing 32-bit floats as 1 bit indices) would roughly give compression rate (without post-processing) of only 32 and likely high accuracy loss with our current formulation. We did a small scale evaluation of l1 penalty alone followed by thresholding for compression, and didn’t find it as effective as kmeans+l1 for achieving sparsity. Note that the Deep Compression work already did an ablation test and reported compression rates with pruning (l1+thresholding) only, and we didn’t find it necessary to repeat this work, since we use the same compression format as theirs. Please see revised table 2 for our method’s performance at the same classification error as Deep Compression and Soft Weight Sharing (GMM prior), to clear up the concern with accuracy loss in our method.\n\nRegarding the minor issues:\n\n-We feel that many existing methods can be difficult/expensive to apply in practice, and our method has the virtue of being very simple, easy to implement, and efficient (linear time/memory overhead) while achieving good practical performance without much tuning.\n\n-See figure 5(b) added in the appendix.\n\n-As we discuss at the end of sec 3.1 in revised paper, at the end of soft-tying we identify the zero cluster as the one with smallest magnitude, and fix it at zero throughout hard-tying. 
It is possible to use a threshold to prune multiple clusters of parameters that are near zero, but generally we didn’t find it necessary, as a large zero cluster naturally develops during soft-tying for properly chosen K.\n\n-We weren’t aware of this work; thanks for pointing it out. We’ve added some relevant discussion. The biggest difference compared to our method is that our formulation uses hard assignments even in the soft-tying phase, whereas their method calculates soft-assignment responsibilities of cluster centers for each parameter (similar to the GMM case) and that could take O(NK) time/memory. They achieved smaller accuracy loss on CIFAR-10 than ours, but with K=75 (instead of our 33). However, it’s not clear how much computation was actually involved.\n",
"Approach is interesting however my main reservation is with the data set used for experiments and making general (!) conclusions. MNIST, CIFAR-10 are too simple tasks perhaps suitable for debugging but not for a comprehensive validation of quantization/compression techniques. Looking at the results, I see a horrific degradation of 25-43% relative to DC baseline despite being told about only a minimal loss in accuracy. A number of general statements is made based on MNIST data, such as on page 3 when comparing GMM and k-means priors, on page 7 and 8 when claiming that parameter tying and sparsity do not act strongly to improve generalization. In addition, by making a list of all hyper parameters you tuned I am not confident that your claim that this approach requires less tuning. \n\nAdditional comments:\n\n(a) you did not mention student-teacher training\n(b) reference to previously not introduced K-means prior at the end of section 1\n(c) what is that special version of 1-D K-means?\n(d) Beginning of section 4.1 is hard to follow as you are referring to some experiments not shown in the paper.\n(e) Where is 8th cluster hiding in Figure 1b?\n(f) Any comparison to a classic compression technique would be beneficial.\n(g) You are referring to a sparsity at the end of page 8 without formally defining it. \n(h) Can you label each subfigure in Figure 3 so I do not need to refer to the caption? Can you discuss this diagram in the main text, otherwise what is the point of dumping it in the appendix?\n(i) I do not understand Figure 4 without explanation. ",
"This is yet another paper on parameter tying and compression of DNNs/CNNs. The key idea here is a soft parameter tying under the K-means regularization on top of which an L1 regularization is further imposed for promoting sparsity. This strategy seems to help the hard tying in a later stage while keeping decent performance. The idea is sort of interesting and the reported experimental results appear to be supportive. However, I have following concerns/comments. \n\n1. The roles played by K-means and L1 regularization are a little confusing from the paper. In Eq.3, it appears that the L1 regularization is always used in optimization. However, in Eq.4, the L1 norm is not included. So the question is, in the soft-tying step, is L1 regularization always used? Or a more general question, how important is it to regularize the cross-entropy with both K-means and L1? \n\n2. A follow-up question on K-means and L1. If no L1 regularization, does the K-means soft-tying followed by a hard-tying work as well as using the L1 regularization throughout? \n\n3. It would be helpful to say a few words on the storage of the model parameters. \n\n4. It would be helpful to show if the proposed technique work well on sequential models like LSTMs.",
"As the authors mentioned, weight-sharing and pruning are not new to neural network compression. The proposed method resembles a lot with the deep compression work (Han et. al. 2016), with the distinction of clustering across different layers and a Lasso regularizer to encourage sparsity of the weights. Even though the change seems minimal, the authors has demonstrated the effectiveness on the benchmark.\n\nBut the description of the optimization strategy in Section 3 needs some refinement. In the soft-tying stage, why only the regularizer (1) is considered, not the sparsity one? In the hard-tying stage, would the clustering change in each iteration? If not, this has reduced to the constrained problem as in the Hashed Compression work (Chen et. al. 2015) where the regularizer (1) has no effect since the clustering is fixed and all the weights in the same cluster are equal. Even though it is claimed that the proposed method does not require a pre-trained model to initialize, the soft-tying stage seems to take the responsibility to \"pre-train\" the model.\n\nThe experiment section is a weak point. It is much less convincing with no comparison result of compression on large neural networks and large datasets. The only compression result on large neural network (VGG-11) comes with no baseline comparisons. But it already tells something: 1) what is the classification result for reference network without compression? 2) the compression ratio has significantly reduced comparing with those for MNIST. It is hard to say if the compression performance could generalize to large networks.\n\nAlso, it would be good to have an ablation test on different parts of the objective function and the two optimization stages to show the importance of each part, especially the removal of the soft-tying stage and the L1 regularizer versus a simple pruning technique after each iteration. This maybe a minor issue, but would be interesting to know: what would the compression performance be if the classification accuracy maintains the same level as that of the deep compression. As discussed in the paper, it is a trade-off between accuracy and compression. The network could be compressed to very small size but with significant accuracy loss.\n\nSome minor issues:\n- In Section 1, the authors discussed a bunch of pitfalls of existing compression techniques, such as large number of parameters, local minimum issues and layer-wise approaches. It would be clearer if the authors could explicitly and succinctly discuss which pitfalls are resolved and how by the proposed method towards the end of the Introduction section. \n- In Section 4.2, the authors discussed the insensitivity of the proposed method to switching frequency. But there is no quantitative results shown to support the claims.\n- What is the threshold for pruning zero weight used in Table 2?\n- There are many references and comparisons missing: Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations in NIPS 17 for instance. This paper also considers quantization for compression which is related to this work."
] | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2018_HkinqfbAb",
"Bky9cL_eG",
"Byk0Q2_xz",
"Hy-t_ztgG",
"iclr_2018_HkinqfbAb",
"iclr_2018_HkinqfbAb",
"iclr_2018_HkinqfbAb"
] |
iclr_2018_H15RufWAW | GraphGAN: Generating Graphs via Random Walks | We propose GraphGAN - the first implicit generative model for graphs that enables us to mimic real-world networks.
We pose the problem of graph generation as learning the distribution of biased random walks over a single input graph.
Our model is based on a stochastic neural network that generates discrete output samples, and is trained using the Wasserstein GAN objective. GraphGAN enables us to generate sibling graphs, which have similar properties yet are not exact replicas of the original graph. Moreover, GraphGAN learns a semantic mapping from the latent input space to the generated graph's properties. We discover that sampling from certain regions of the latent space leads to varying properties of the output graphs, with smooth transitions between them. Strong generalization properties of GraphGAN are highlighted by its competitive performance in link prediction as well as promising results on node classification, even though not specifically trained for these tasks. | rejected-papers | This paper proposes an implicit model of graphs, trained adversarially using the Gumbel-softmax trick. The main idea of feeding random walks to the discriminator is interesting and novel. However,
1) The task of generating 'sibling graphs', for some sort of bootstrap analysis, isn't well-motivated.
2) The method is complicated and presumably hard to tune, with two separate early-stopping thresholds that need to be set.
3) There is not even a mention of a large existing literature on generative models of graphs using variational autoencoders. | val | [
"BkCkJetef",
"SJhXxLYgz",
"rkKo_YeWG",
"HynfzR27M",
"SktZrEd-z",
"SkKn4NO-G",
"H1oI4EuZf",
"HyECmNOZz",
"B1CHXVu-M",
"rkG7qbQZz",
"SkRp15ZWM",
"rJa0pv5ef",
"B1M0aoceM",
"B1xzITYxf",
"SknlQLFef",
"B1wkj2BgM",
"SyDRyMBlf",
"SycFao-kf",
"Bk-RVrRAW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"I am overall positive about the work but I would like to see some questions addressed. \n\nQuality: The paper is good but does not address some important issues. The paper proposes a GAN model to generate graphs with non-trivial properties. This is possibly one of the best papers on graph generation using GANs currently in the literature. However, there are a number of statistical issues that should be addressed. I fear the paper is not ready yet, but I am not opposed to publication as long as there are warnings in the paper about the shortcomings.\n\nOriginality: This is an original approach. Random walks sometimes are overused in the graph literature, but they seem justified in this work. But it also requires extra work to ensure they are generating meaningful graphs.\n\nSignificance: The problem is important. Learn to generate graphs is a key task in drug discovery, relational learning, and knowledge discovery.\n\nEvaluation: The link prediction task is too easy, as links are missing at random. It would be more useful to predict links that are removed with an unknown bias. The graph (wedge, claw, etc) characteristics are good (but simple) metrics; however, it is unclear how a random graph with the same size and degree distribution (configuration model) would generate for the same metrics (it is not shown for comparison). \n\nIssues that I wish were addressed in the paper: \na)\tHow is the method learning a generator from a single graph? What are the conditions under which the method is likely to perform well? It seems to rely on some mixing RW conditions to model the distinct graph communities. What are these mixing conditions? These are important questions that should have at least an empirical exploration.\nb)\tWhat is the spatial independence assumption needed for such a generator? \nc)\tWould this approach be able to generate a lattice? Would it be able to generate an expander graph? What about a graph with poorly connect communities? Is there any difficulties with power law graphs? \nd)\tHow is the RW statistically addressing the generation of high-order (subgraph) features?\ne)\tCan this approach be used with multiple i.i.d. graphs? \nf)\tIsn’t learning the random walk sample path a much harder / higher-dimensional task than it is necessary? Again, the short walk may be capturing the communities but the high-dimensional random walk sample path seems like a high price to pay to learn community structure.\ng)\tClearly, with a large T (number of RW steps), the RW is not modeling just a single community. Is there a way to choose T? How larger values of T to better model inter-community links? Would different communities have different choices of T? \nh)\tAnd a related question, how well can the method generate the inter-community links?\ni)\tThe RW model is actually similar to an HMM. Would learning a mixture of HMMs (one per community) have similar performance?\n",
"The authors proposed a generative model of random walks on graphs. Using GAN, the architecture allows for model-agnostic learning, controllable fitting, ensemble graph generation. It also produces meaningful node embeddings with semi-interpretable latent spaces. The overall framework could be relevant to multiple areas in graph analytics, including graph comparison, graph sampling, graph embedding and relational feature selection. The draft is well written with convincing experiments. I support the acceptances of this paper.\n\nI do have a few questions that might help further improve the draft. More baseline besides DC-SBM could better illustrate the power of GAN in learning longer random walk trajectories. DC-SBM, while a generative model, inherently can only capture first order random walks with target degree biases, and generally over-fits into degree sequences. Are there existing generative models based on walk paths?\n\nThe choice of early stopping is a very interesting problem especially for the EO-creitenrion. In Fig3 (b), it seems assortativity is over-fitted beyond 40k iterations. It might be helpful to discuss more about the over-fitting of different graph properties.\n\nThe node classification experiment could use a bit more refinement. The curves in Fig. 5(a) are not well explained. What is the \"combined\"? The claim of competitive performance needs better justification according to the presentation of the F1 scores.\n\nThe Latent variable interpolation experiment could also use more explanations. How is the 2d subspace chosen? What is the intuition behind the random walks and graphs of Fig 6? Can you provide visualizations of the communities of the interpolated graphs in Fig 7? ",
"This paper proposes a WGAN formulation for generating graphs based on random walks. The proposed generator model combines node embeddings, with an LSTM architecture for modeling the sequence of nodes visited in a random walk; the discriminator distinguishes real from fake walks.\n\nThe model is learned from a single large input graph (for three real-world networks) and evaluated against one baseline generative graph model: degree-corrected stochastic block models. \n\nThe primary claims of the paper are as follows:\ni) The proposed approach is a generative model of graphs, specifically producing \"sibling\" graphs\nii) The learned latent representation provides an interpretation of generated graph properties\niii) The model generalizes well in terms of link and node classification\n\nThe proposed method is novel and the incorporated ideas are quite interesting (e.g., discriminating real from fake random walks, generating random walks from node embeddings and LSTMs). However, from a graph generation perspective, the problem formulation and evaluation do not sufficiently demonstrate the utility of proposed method. \n\nFirst, wrt claim (i) the problem of generating \"sibling\" graphs is ill-posed. Statistical graph models are typically designed to generate a probability distribution over all graphs with N nodes and, as such, are evaluated based on how well they model that distribution. The notion of a \"sibling\" graph used in this paper is not clearly defined, but it seems to only be useful if the sibling graphs are likely under the distribution. Unfortunately, the likelihood of the sampled graphs is not explicitly evaluated. On the other hand, since many of the edges are shared the \"siblings\" may be nearly isomorphic to the input graph, which is not useful from a graph modeling perspective. \n\nFor claim (i), the comparison to related work is far from sufficient to demonstrate its utility as a graph generation model. There are many graph models that are superior to DC-SBM, including KPGMs, BETR, ERGMs, hierarchical random graph models and latent space models. Moreover, a very simple baseline to assess the LSTM component of the model, would be to produce a graph by sampling links repeatedly from the latent space of node embeddings. \n\nNext, the evaluation wrt to claim (ii) is novel and may help developers understand the model characteristics. However, since the properties are measured based on a set of random walks it is still difficult to interpret the impact on the generated graphs (since an arbitrary node in the final graph will have some structure determined from each of the regions). Do the various regions generate different parts of the final graph structure (i.e., focusing on only a subset of the nodes)? \n\nLastly, the authors evaluate the learned model on link and node prediction tasks and state that the model's so-so performance supports the claim that the model can generalize. This is the weakest claim of the paper. The learned node embeddings appear to do significantly worse than node2vec, and the full model is worse than DC-SBM. Given that the proposed model is transductive (when there is significant edge overlap) it should do far better than DC-SBM which is inductive. \n\nOverall, while the paper includes a wide range of experimental evaluation, they are aimed too broadly (and the results are too weak) to support any specific claim of the work. 
If the goal is to generate transductively (with many similar edges), then it would be better to compare more extensively to alternative node embedding and matrix factorization approaches, and assess the utility of the various modeling choices (e.g., LSTM, in/out embedding). If the goal is to generate inductively, over the full distribution of graphs, then it would be better to (i) assess whether the sampled graphs are isomorphic, and (ii) compare more extensively to alternative graph models (many of which have been published since 2010). \n",
"Based on the reviewers' comments we have made the following improvements to our paper:\n* Added more details on the experimental setup (Section 4.4).\n* Clarified the role of the embedding-like matrices W_up and W_down (Section 4.3).\n* Added comparisons with more baselines (Sections 2 & 4.1).\n* Extended the discussion of the model's limitations & future work (Section 5).\n* Fixed several typos and improved wording in a few places.\n\nOn the request of the program chairs, we would like to provide pointers to related papers that are also under submission to ICLR2018. While multiple deep generative models for graphs are proposed (e.g., https://openreview.net/forum?id=Hy1d-ebAb, https://openreview.net/forum?id=SJlhPMWAW, https://openreview.net/forum?id=BJcAWaeCW), our work is the only one that focuses on the single large real-world graph setting.",
"Thank you for comments. Based on your comments, we have a uploaded a revised version of our paper. See the details below.\n\n1) Baselines\nSince our main goal is to develop an implicit model for graph generation the focus of the experimental evaluation was to show that the implicitly generated graphs are useful, rather than outperforming existing explicit generators (most of which are designed with specific graph patterns in mind). Thus, we chose the well established DC-SBM as a baseline. In the revised version we have added new baselines (on the suggestions of the reviewers) such as the configuration model and ERGM. You can find the results for these in Table 2. We are open to suggestions about further graph generation models we could compare to in order to highlight the properties and limitations of GraphGAN.\n\n2) Generative models based on RWs\nThe only remotely related model we are aware of that involves random walks is the Butterfly model (McGlohon et al., “Weighted graphs and disconnected components: patterns and a generator\", KDD’08). However, it is a variant of the preferential attachment principle, and focuses on networks that grow over time. It uses first-order random walks, and thus cannot capture higher-order interactions. Moreover, it is not applicable to our task since it generates the random walks on the fly (i.e., is not based on a set of random walks as input).\n\n3) EO-criterion\nThis is an interesting point, and we briefly talk about it in Section 3.2. We can view the VAL and EO criteria as a trade-off between better generalization (in terms of link prediction) and more accurate reconstruction of the graph. Thus, as the EO score increases, our model approximates the distribution of random walks in the original graph more closely, which leads to constructing graphs more similar to the input, up to the point of overfitting the input graph. The EO criterion gives us control over this.\n\n4) Node \"embeddings\" (Figure 5)\nThank you for pointing this out. We have updated this section and will briefly summarize the changes in the following. Using the term ‘embedding’ was unfortunate in the figure. We were referring to W_down, i.e. the weight matrix projecting down the one-hot node vectors into low-dimensional space. The term ‘context’, on the other hand, was referring to W_up, which projects up the low-dimensional LSTM output. ‘Combined’ was the concatenation of the W_up and W_down, which gave an additional small improvement over only using W_up. When using the term ‘competitive’ we were referring to the link prediction performance, and not node classification; we have made this distinction more clear in the revised version (see, e.g., the updated abstract).\n\n5) Latent space interpolation (Figures 6, 7)\nWe improved the wording in the revised paper. To summarize: The generator takes as input a noise vector z, drawn from a d-dimensional standard normal distribution. For the latent space interpolation experiment we set d=2. That is, the generator receives samples drawn from a bivariate Gaussian distribution and transforms these into random walks. By using the inverse of the cumulative distribution of this 2D Gaussian distribution we can divide the input domain (i.e. R^2) into bins of equal probability mass, and group the generated random walks into their respective bins based on the noise samples that were used to generate them. Based on these random walks, we construct the score matrix and assemble a graph using the procedure described in Section 3.3. 
We can now measure the properties of the random walks coming from each of the latent space bins (e.g. the average degree of the first node in the random walks: Figure 6a) as well as the graphs constructed from the random walks (e.g. Gini coefficient: Figure 6c) and observe how these properties smoothly change when interpolating in the latent space.\n\nTo better visualize the latent space interpolation performed in Figure 7, we have compiled a short animation (https://figshare.com/articles/GraphGAN_Latent_Space_Interpolation/5684137). In the bottom-right section, you can observe how the generated graphs’ structure changes when interpolating along a trajectory in the latent space.\n",
"d)\nGiven our architecture we can capture high-order (subgraph) features, despite not explicitly modeling them. This is due to two different reasons, both involving memory.\n\nFirst, since our generator is based on an LSTM unit -- which has memory -- it can capture high-order interactions and utilize them during generation. Note that when generating the n-th node in a RW the LSTM has access to the entire history of nodes generated before and can utilize this history/memory to encode high-order features.\n\nSecondly, the input we feed to GraphGAN are second-order random walks as introduced in node2vec [2]. The memory factor records each previous step and influences the walking direction, leading to a biased random walk, essentially having a trade-off between breadth-first search (BFS) and depth-first search (DFS).\n\ne)\nThis depends on what exactly is meant by the i.i.d. property. In case all graphs have the same number / ordering of nodes, it is straightforward to apply GraphGAN to them. However, in this work we wanted to focus on the single graph setting -- which is common in many fields.\n\nWe agree that applying our model to collections of smaller i.i.d. graphs (such as chemical compounds or molecules) is an interesting and important research question. However, this setting will require very different evaluation protocols, (and potentially, appropriate alterations in the model architecture), which is why we leave it to follow-up work.\n\nf)\nLearning to generate RWs is a conscious decision, motivated by our intention to solve two key challenges: permutation invariance and learning from a single graph. First, by learning to generate RWs (compared to say learning to generate the full adjacency matrix) our model is invariant to arbitrary permutations of the nodes. Second, the class of implicit models we are considering requires multiple samples for training. Thus, we turn to using RWs which naturally represent the input graph. In short, by using RWs we were able to solve 2 out of 3 key challenges in learning implicit graph models and they are thus crucial to the success of our model. This also relates to question (d) where we talk about why the RWs are able to capture such properties.\n\nRegarding the “high price”: We want to highlight (see also answers to (g), (h), (i)) that communities are *not* explicitly modeled by GraphGAN (nor they are available during training) and our goal is not to learn the community structure. The focus is on showing that implicit graph generation models are able to capture important graph properties (whatever those might be, possibly including communities) without manually specifying them beforehand.\n\ng & h) \nPlease note that we have already performed an experiment that shows how we can choose the number of RW steps T. Figure 4 shows the link prediction performance as the length of the random walks increases. We observe that while the performance for RWs of small length (T <=4) is not satisfactory, having RWs of length 16 already yields competitive link prediction performance. Furthermore, we observe that the performance for RW length 20 over 16 is marginal and does not outweigh the additional computational cost (note that the number of model parameters does not increase with T because of the recurrent architecture). 
In short, we can choose T empirically by looking that link prediction performance.\n\nAs described in Section 4.2, in our experimental setup for link prediction we hold out 10% / 5 % of edges for the validation / test set respectively, along with the same number of non-edges. Since these edges/non-edges are selected completely at random, the validation/test set contains both inter- and intra-community links. This coupled with the fact that our link prediction performance is competitive (see Table 3) clearly shows that our method is able to model/generate the inter-community edges very well. \n\nAgain, please note that communities are not explicitly modeled or even available to GraphGAN during training. The focus is on implicit graph generation. The implicit model does end up learning about the community structure, since it turns out to be useful for graph generation, but this was not manually incorporated/modeled from our side.\n\ni) \nAs previously mentioned our model is learned in a completely unsupervised fashion, i.e., the community information is not available to GraphGAN during training. More importantly, we do not even want to explicitly model the communities, since the main goal of our work was to learn an implicit graph generator. Your suggestion relies on apriori community information, which is not available in our case.\n\nReferences\n---------------\n[1] Grover, Aditya, and Jure Leskovec. \"node2vec: Scalable feature learning for networks.\" KDD’16.\n[2] Wang, Daixin, Peng Cui, and Wenwu Zhu. \"Structural deep network embedding.\" KDD’16.\n[3] Kipf, Thomas N., and Max Welling. \"Variational Graph Auto-Encoders.\" arXiv:1611.07308 (2016).\n",
"Thank you for your review and constructive feedback.\n\nWe noticed that several of your questions (a, c, f, g, h, i) revolve around community structure. We would like to highlight that our model does not have access to community information at any point and more importantly does not have the goal of explicitly modeling communities. We address all your concerns below and additionally as you requested, we extended the discussion section to clearly highlight the model limitations (see Section 5 in the revised manuscript).\n\n1) Link prediction\nWe replicated the experimental setup for link prediction that is standard in recent works ([1, 2, 3]). We agree that different test set sampling strategies could provide a more in-depth analysis of our link prediction performance. However, our main goal is to demonstrate the feasibility and utility of implicit generative modeling for graphs, and not to develop a new state-of-the-art method for link prediction. This experiment mainly serves the purpose of demonstrating the generalization properties of the proposed method.\n\nOn a related note, since implicit models for graph generation have not been studied so far, effective methods for their evaluation are yet to be developed. Therefore, we use the link prediction task as one possible way to evaluate our implicit model. This, together with the other experiments, gives us insight into the graph properties that our model is able to capture.\n\n2) Configuration model\nThank you for this suggestion. We have added results for the configuration model to Table 2 in the revised version, as well as to Table 6 in the appendix. Some properties of the graphs generated by the configuration model (e.g., degree distribution) are identical to the input graph statistics by the definition of the configuration model. However, the random edge shuffling performed by the configuration model completely destroys the community structure, which makes the resulting graph very different from the original.\n\nAdditionally, we have performed experiments, where only a fraction of edges are rewired by the configuration model, such that the edge overlap (EO) score of the rewired graph matches the EO score of GraphGAN. Still, even in such a scenario, the configuration model significantly alters the community structure. You can see the quantitative results in Table 2 of the revised version of the paper.\n\nRegarding your questions (a) - (c) (questions (d)-(i) are in pt. 2)\na)\nIndeed, learning graph generators from a single graph is one of the key challenges tackled in our paper. In fact, part of our motivation for using RWs was precisely to solve this challenge (see also (f)). Given the nature of the GAN framework we required multiple samples to train the generator. Thus we turn to using RWs since they naturally represent our single input graph with multiple samples. \n\nIn this first foundational work we explored connected graphs (by extracting the largest connected component as a preprocessing step). We did not investigate the behaviour of GraphGAN when we have e.g. many disconnected components. This could be considered as one condition for GraphGAN to perform well. \n\nFurthermore, the focus of this paper was to show that implicit graph generators are able to capture properties of the graph without manually specifying them. 
While our goal is not to determine the stationary distribution of RWs (for which mixing conditions are relevant), we agree that drawing theoretical connections between GraphGAN and the established results is an exciting direction for future work. Please note that some empirical exploration of this aspect is already included (Figure 4, where we analyze the effect of RW length on link prediction performance). See also our answer to (g).\n\nb)\nOur model does not make *any* spatial dependence assumptions about the adjacency matrix, assuming you are referring to our discussion in paragraph 2 of Section 2. The main point in the paper is that one should not naively treat the adjacency matrix as a binary image and apply standard CNN-based GAN architectures to it. Such architectures for images contain the built-in assumption that pixels located closely (within the same receptive field) are in some way correlated. Clearly, when talking about an adjacency matrix such assumption is not sensible, as permutations of rows/columns correspond to the exact same graph but very different receptive fields. Our model addresses this issue by operating on the random walks. \n\nc)\nWe are indeed able to generate graphs with power-law degree distributions and sparse connectivity. We can conclude this since our model was evaluated and shows good performance on real-world graphs, that all exhibit exactly those patterns (see Table 6). Since our focus is on complex real-world networks, we felt that experiments on toy graphs (lattice, expander) would distract from the main story.",
"1) Generalization\nThe problem of detecting (near-)isomorphism between two graphs is extremely challenging in general (when the nodes may be permuted). In our case, since the ordering in both the original and sibling graphs is identical, having low edge overlap directly implies that they are not (nearly) isomorphic, (note that the model is still invariant to node permutations). Additionally, given the strong link prediction performance, we can surely claim that the model does not simply \"memorize\" the original graph, and that the \"sibling\" graphs contain edges that are plausible but not present in the input graph.\n\n2) Link prediction\nGiven the results in Table 3, the claim that our model achieves “so-so” link prediction is unjustified. Despite not being designed specifically for this task, we still outperform the competing methods in 4 out of 6 cases on smaller graphs. The less dominant performance on large graphs (>15k nodes) is clearly indicated in the paper, and the possible causes and solutions are mentioned in Section 5.\n\n3) Node \"embeddings\"\nAs mentioned above, GraphGAN is neither designed to learn embeddings, nor is using established embedding approaches during the graph generation process. Using the term “embedding” was unfortunate and might have been a source of confusion. The so-called “embeddings” W_down & W_up are projection matrices between the low-dimensional LSTM space and the high-dimensional node space. Due to the lack of established evaluation techniques for implicit generative graph models, we decided to discuss the properties of the these matrices to give the reader a better insight about the behavior of the model. We made this point more clear in the revised version of the paper.\n\n4) Baselines\nAs per your suggestion, we now additionally compare against further baselines:\n\n4.1) Sampling graphs from node embeddings\nOn your suggestion, we repeatedly sampled links from the latent space of node embeddings, and included the results in Table 2 the revised version (“naive node2vec”). As we can see, such procedure leads to a dramatically worse reconstruction, which justifies the use of an LSTM. Poor performance of embedding-based graph generation makes sense, as that is not what the embeddings are designed to do.\n\n4.2) ERGM\nWhile ERGMs [2] do well at reconstructing the graph statistics that are explicitly modeled by the s(G) term (such as degree distribution, assortativity, average edge density), they perform significantly worse when it comes to other metrics (e.g., community structure, LCC size), as can be seen in Table 6.\n\nThis fundamental limitation is exactly the reason we turn to implicit models in the first place: We want to have a model that automatically detects patterns in graph structure and generates new networks that follow them, without having to manually specify them.\n\n4.3) Configuration model\nBased on another reviewer’s comment we have added a comparison with the configuration model. Similarly to ERGM, it preserves only those characteristics, for which it was explicitly designed for.\n\n5) Latent space interpolation\nOne of your concerns was that the impact of the latent space on the generated graphs is not clear. 
In fact, Figure 6c&d, Figure 7 and Figures 10&11 in the appendix (namely subfigures, c, d, e, f, g, h, i, j, o, p) specifically measure properties of the entire graphs, not the random walks.\n\nFurthermore, the various regions of the latent space are clearly responsible for generating graphs with noticeably different structure as you can see in Figure 7 as well as in this animation that interpolates in the latent space (https://figshare.com/articles/GraphGAN_Latent_Space_Interpolation/5684137).\n\nReferences\n[1] Yuxiao Dong et al., “Structural diversity and homophily: A study across more than one hundred big networks”, KDD’17.\n[2] Hunter, David R., et al. \"ergm: A package to fit, simulate and diagnose exponential-family models for networks.\" Journal of statistical software 24.3 (2008): nihpa54860.\n[3] Hamilton, William L., Rex Ying, and Jure Leskovec. \"Inductive Representation Learning on Large Graphs.\" arXiv preprint arXiv:1706.02216 (2017).\n",
"Thank you for your review.\n\nWe would like to clarify some important points that might have been a source of misunderstanding. As this is the first work of its kind (implicit generative model for graphs), we neither expect nor claim that GraphGAN in its current form is superior to every existing explicit model in every possible regard. Rather, our goal is to lay a foundation for the study of implicit models for graph generation. Such models will let us capture important properties of real-world graphs, without having to manually specify them in our models.\n\nWe are convinced that the results already confirm this statement, and show the feasibility and utility of implicit models. Still, based on your suggestions, we added a comparison with more baselines. As expected, and reinforcing our previous point, all properties which these approaches explicitly model are preserved, while the rest deviate significantly from the input graph. This highlights the need for implicit models, such as the one proposed in our paper. Furthermore, there exist properties not captured by any of the existing models as shown in [1], which again emphasizes the need for implicit models.\n\nWe have already analyzed the reconstructive (graph statistics) and generalization (link prediction) properties of our model. As you mentioned, because of the likelihood-free nature of the model, we cannot evaluate the likelihood of the held-out edges or an entire sampled graph. We are not aware of other experimental protocols applicable to this novel problem setting. If you think that some important aspects are not evaluated, please let us know.\n\nUsing your terminology GraphGAN would be considered inductive. In the revised version we include comparison with more baselines. However, we do not completely agree that “high edge overlap” => “transductive” (above which EO threshold does a model qualify?). This definition would mean that ERGM, which has high edge overlap (see Table 2), should be considered transductive, which it isn’t. \n\nWe would also like to clarify, that our model is *not* generating random walks from node embeddings, and is not yet another method for learning node embeddings. Please, see our discussion on embeddings below.\n\nIn the following comment we address your other concerns.\n",
"As stated in our reply to an earlier comment (see below), we are aware of this; while we believe that both works are very distinct, we are considering alternative names to avoid confusion.",
"Neat Work,\nBut I found that there was another paper named \"GraphGAN\" on ArXiv: https://arxiv.org/abs/1711.08267, which has been accepted by AAAI 2018.\nIt might be confusing for readers to distinguish these two models.",
"Thanks for your clear reply! And one more question:\n\nIn Section 3.1, the next sample is generated as v_{t} = onehot(argmax v_{t}^{*}). How is this step differentiable? As argmax is a hard assignment, the gradients cannot be passed to v_{t}^{*} during backward as you claimed. Maybe I misunderstand somewhere?",
"We use the Straight-Through Gumbel-Softmax estimator that is described in [1]. In a nutshell, this allows us to approximate sampling from a categorical distribution in a differentiable way.\n\n[1] Jang, Eric, Shixiang Gu, and Ben Poole. \"Categorical reparameterization with Gumbel-softmax.\" ICLR 2017",
"Thank you for your comment and interest in our work!\n\n1) For the EO criterion, S is constructed the same as in Val. In both cases S is constructed based on the last 1k iterations (not 1k random walks), which typically amounts to around 750k random walks (depending on the specific setting). Thus, we expect the approximation to be reasonably good. Note, that we update the list of the RWs generated in the last 1k iterations incrementally (in a sliding window / queue fashion). This means that at every iteration we only need to subtract the 750 oldest entries from S, and add the 750 newest, which is highly efficient.\n\n2) We are not sure whether we correctly understood your definition of a generative model. We base our notion of a generative model for graphs on [1]. In this context, such a model is used to generate entire graphs that exhibit desired properties (e.g., matching the properties of a given input graph). Some examples include the Configuration Model [2] and Barabási–Albert Model [3]. While the purpose of GraphGAN is to generate entire graphs, we can also use it for node-level tasks such as link prediction, as shown in the experimental section.\n\nAs for the concrete application scenarios, we again refer to [1]. One case where our model is readily applicable is simulation studies (using the language of [1]). Imagine that we are developing a new algorithm for some graph-related problem, e.g. community detection. Often, we don't have access to much labeled data that all comes from the same distribution. However, GraphGAN still lets us estimate how our new algorithm will behave in the wild. For this, we can create sibling graphs using GraphGAN and evaluate performance of the new algorithm on them.\n\nThere are surely many other tasks that GraphGAN could be applied to, that we leave for follow-up work, such as anomaly detection, graph compression, data anonymization, etc.\n\nWe hope this answer clarifies the uncertainties you had about our work. Please do not hesitate to post follow-up questions.\n\nReferences:\n[1] Deepayan Chakrabarti and Christos Faloutsos. Graph mining: Laws, generators, and algorithms.\nACM computing surveys (CSUR), 38(1):2, 2006.\n[2] http://homepage.divms.uiowa.edu/~sriram/196/spring12/lectureNotes/Lecture11.pdf\n[3] Albert-Laszlo Barabasi and Reka Albert. Emergence of scaling in random networks. Science, 286 (5439):509–512, 1999.\n",
"It's a very interesting work! There are two parts that I'm confused after reading the paper:\n\n1. In Section 3.2, while training with EO-Criterion early stopping strategy, you construct a score matrix S at every validation step. How is S constructed in EO? In Val-Criterion, S is constructed through 1k recently generated random walks. While in EO, is S still constructed the same as in Val, or generated through 500k generated random walks as you described in Section 3.3? It's time-consuming if you construct S from a large corpus of random walks at every validation iteration, and if you use the small corpus as used in Val, how to guarantee the approximation error of edge overlap ratio is bounded, i.e., won't be too large to damage the performance?\n\n2. This work generates sibling graphs of the original graph. In what applications can we utilize this method? Normal graph generative models generate a relevant node given a prior node, but this paper generates a new graph given a prior graph, thus seems cannot be directly used in node-level graph applications. Any references will be better :)\n\nLooking forward to your reply! Thanks!",
"While both models coincidentally have the same acronym and use the GAN framework, they are very distinct in their nature and have different goals.\n\nThe model proposed in the paper you referenced is an explicit (prescribed) probabilistic model whose goal is to learn node embeddings. The explicitly specified probability distribution G(v | v_c) can be computed directly given the embedding \\theta_G (Equation 5). Such model could also be learned by other means (e.g., by directly minimizing cross entropy + negative sampling for non-edges), and the use of GANs in this setting is rather unconventional.\n\nIn contrast, our approach defines an implicit generative model for random walks in the graph. Its main goal is to generate new graphs that have similar properties to original (but are not exact replicas). As is the case for implicit models, samples can be drawn from it, but direct computation of the probabilities is not possible. In such a scenario, GAN training is one of the few available options. Our implicit model is not restricted to pairwise interactions and can capture higher-order properties of the graph.\n\nNote, that link prediction is the optimization objective in the work you mentioned. Thus, it is not surprising that the obtained node embeddings achieve high scores in the related tasks. Meanwhile, our model is not trained for link prediction, and the embeddings are just a byproduct of the learning process.\n\nIn addition to pointing out these fundamental differences, we would also like to highlight that the above-mentioned work was just made public on arXiv two days ago (Nov 22nd); which is why it could not be included in the Related Work section of our paper at the time of submission almost a month ago.\n\nTL;DR: While at first glance the approaches appear to be related (both are called GraphGAN), after carefully reading the papers, it becomes clear that the two models are fundamentally different and have orthogonal goals. The other work: explicit model + pairwise interactions for learning node embeddings. Our work: implicit model + higher-order interactions, with the goal of generating new graphs.\n",
"An AAAI18 paper released recently also propose a graph GAN framework https://arxiv.org/abs/1711.08267, what's the difference between this paper and their paper? It seems that their results is more dominant in link prediction than this paper.",
"Thank you very much for your interest in our paper and your comment. \n\nIt seems that the source of confusion is that on the one hand, we show that graphs generated using random sampling from the latent space are similar to the input graph (Table 2, Fig. 3a), while on the other hand, in the latent space interpolation (Fig. 6 and 7), the generated graphs have very different properties compared to the input graph.\n \nLet's recap how the latent space interpolation is performed for clarity. Remember that a single noise vector does not produce a complete graph, but rather one random walk. We therefore sample a large number of random walks from the latent space and use the method described in Sec. 3.3 to assemble a graph from these random walks.\n\nIf we now restrict the sampling to specific subregions of the latent space, intuitively, we obtain random walks that have some specific properties, which in turn makes the graphs assembled from them have specific properties. However, if we sample from the entire latent space, we are in a way \"averaging\" over all of these properties and the sampled random walks (and the resulting graph) have similar properties as the original, e.g. as you have noticed in Table 2 and Figure 3a.\n\nWe hope that this answer helps you to better understand our experiment. Please do not hesitate to comment again if you have any other or follow-up questions.\n\ntl;dr: Specific regions of the latent space encode specific properties, the \"average\" over all regions has properties similar to the original. ",
"It's a great paper to read. I have a question on the latent space interpolation experiment of the paper. \nIf I got it right, you do random sampling in hidden space to produce the results in Table 2, and the statistics seem to be pretty stable. However, when you do latent space interpolation, the statistics seem to vary a lot. Why does this happen?\nYou mention that \"certain regions of z correspond to generated graphs with very different degree distributions\", however it Figure 3(a), by random generating a graph, the degree distribution matches the ground truth well. I'm confused about why the latent space has such kind of property.\nMaybe I made some mistakes when trying to understand the experiment. Looking forward to your reply! Thanks!"
] | [
6,
7,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H15RufWAW",
"iclr_2018_H15RufWAW",
"iclr_2018_H15RufWAW",
"iclr_2018_H15RufWAW",
"SJhXxLYgz",
"H1oI4EuZf",
"BkCkJetef",
"B1CHXVu-M",
"rkKo_YeWG",
"SkRp15ZWM",
"iclr_2018_H15RufWAW",
"B1xzITYxf",
"rJa0pv5ef",
"SknlQLFef",
"iclr_2018_H15RufWAW",
"SyDRyMBlf",
"iclr_2018_H15RufWAW",
"Bk-RVrRAW",
"iclr_2018_H15RufWAW"
] |
iclr_2018_rJiaRbk0- | Towards Binary-Valued Gates for Robust LSTM Training | Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. Its goal is to use gates to control the information flow (e.g., whether to skip some information/transformation or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal and is easy to overfit. In this paper, we propose a new way for LSTM training, which pushes the values of the gates towards 0 or 1. By doing so, we can (1) better control the information flow: the gates are mostly open or closed, instead of in a middle state; and (2) avoid overfitting to certain extent: the gates operate at their flat regions, which is shown to correspond to better generalization ability. However, learning towards discrete values of the gates is generally difficult. To tackle this challenge, we leverage the recently developed Gumbel-Softmax trick from the field of variational methods, and make the model trainable with standard backpropagation. Experimental results on language modeling and machine translation show that (1) the values of the gates generated by our method are more reasonable and intuitively interpretable, and (2) our proposed method generalizes better and achieves better accuracy on test sets in all tasks. Moreover, the learnt models are not sensitive to low-precision approximation and low-rank approximation of the gate parameters due to the flat loss surface. | rejected-papers | This paper proposes training binary-values LSTMs for NLP using the Gumbel-softmax reparameterization. The motivation is that this will generalize better, and this is demonstrated in a couple of instances.
However, it's not clear how cherry-picked the examples are, since the training loss wasn't reported for most experiments. And, if the motivation is better generalization, it's not clear why we would use this particular setup. | train | [
"HyA3jBqgG",
"S15OPlugz",
"Syo-smqgf",
"rJBQl4j7z",
"BJj7JzW7G",
"BJvcAWbQG",
"rJ4UxzbQz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper propose a new \"gate\" function for LSTM to enable the values of the gates towards 0 or 1. The motivation behind is a flat region of the loss surface is likely to generalize well. It shows the experimental results are comparable or better than vanilla LSTM and much more robust to low-precision approximation and low-rank approximation.\n\nIn section 3.2, the paper claimed using a smaller temperature cannot guarantee the outputs to be close to the boundary. Is there any experimental evidence to show it's not working? It also claimed pushing output gate to 0/1 will drop the performance. It actually quite interesting because there are bunch of paper claimed output gate is not important for language modeling, e.g. https://openreview.net/pdf?id=HJOQ7MgAW . \n\nIn the sensitive analysis, what if apply rounding / low-rank for all the parameters? \n\nHow was this approach compare to binarynet https://arxiv.org/abs/1602.02830 ? Applying the same idea, but only for forget gate/ input gate. Also, can we apply this idea to the binarynet? \n\nOverall, I think it's an interesting paper but I feel it should compare with some simple baseline to binarized the gate function. \n\nUpdates: Thanks a lot for all the clarification. It do improve the paper quality but I'm still thinking it's higher than \"6\" but lower than \"7\". To me, improve ppl from \"52.8\" to \"52.1\" isn't very significant. For WMT, it improve on DE->EN but not for EN->DE (although it improve both for the author's own baseline). So I'm not fully convinced this approach could improve the generalization. But I feel this work can have many other applications such as \"binarynet\". ",
"This paper aims to push the LSTM gates to be binary. To achieve this, the paper proposes to employ the recent Gumbel-Softmax trick to obtain end-to-end trainable categorical distribution (taking 0 or 1 value). The resulted G2-LSTM is applied for language model and machine translation in the experiments. \n\nThe novelty of this paper is limited. Just directly apply the Gumbel-Softmax trick. \n\nThe motivation is not explained clearly and convincingly. Why need to pursue binary gates? According to the paper, it may give better generalization performance. But there is no theoretical or experimental evidence provided by this paper to support this argument. \n\nThe results of the new G2-LSTM are not significantly better than baselines in the experiments.",
"The paper argues for pushing the input and forget gate’s output toward 0 or 1, i.e., the LSTM tends to reside in flat region of surface loss, which is likely to generalize well. To achieve that, the sigmoid function in the original LSTM is replaced by a function G that is continuous and differentiable with respect to the parameters (by applying the Gumbel-Softmax trick). As a result, the model is still differentiable while the output gate is approximately binarized. \n\nPros:\n-\tThe paper is clearly written\n-\tThe method is new and somehow theoretically guaranteed by the proof of the Proposition 1\n-\tThe experiments are clearly explained with detailed configurations\n-\tThe performance of the method in the model compression task is promising \n\nCons:\n-\tThe “simple deduction” which states that pushing the gate values toward 0 or 1 correspond to the region of the overall loss surface may need more theoretical analysis\n-\tIt is confusing whether the output of the gate is sampled based on or computed directly by the function G \n-\tThe experiments lack many recent baselines on the same dataset (Penn Treebank: Melis et al. (2017) – On the State of the Art of Evaluation in Neural Language Models; WMT: Ashish et.al. (2017) – Attention Is All You Need) \n-\tThe experiment’s result is only slightly better than the baseline’s\n-\tTo be more persuasive, the author should include in the baselines other method that can “binerize” the gate values such as the one sharpening the sigmoid function. \n\n\nIn short, this work is worth a read. Although the experimental results are not quite persuasive, the method is nice and promising. \n",
"Thanks all reviewers for their valuable comments, we updated a new version of the paper by including the following results:\n\n1. We make discussion about the sharpening sigmoid method proposed by the reviewers, and add the algorithm as one of the baselines in the experiments. The experimental results still show that our proposed method achieves the best performance in all tasks.\n\n2. We update the experimental results on language modelling task which achieves the best performance (52.1) as far as we know without using any hyperparameter search method.\n",
"\n[Regarding the computation of function G]\n\nDuring training, the output of the gate is computed directly by function G, while the function G contains some random noise U.\n\n[Regarding the sharpened sigmoid function experiment]\n\nThanks for figure this out. First, we want to point out that theoretically it doesn’t help: Simply consider function f_{W,b}(x) =sigmoid((Wx+b)/tau), where tau is the temperature, it is computationally equivalent to f_{W’,b’}(x) =sigmoid(W’x+b’) by setting W’=W/tau and b’ = b/tau. Then using a small temperature is equivalent to rescale the initial parameter as well as gradient to a larger range. Usually, setting an initial point in a larger range with a larger learning rate will harm the optimization process.\n\nWe also did a set of experiments and updated the paper to show it doesn’t help in practice.\n\n[Regarding the significance of experimental results]\n\nFor machine translation, we achieved the SOTA performance on German->English task and the improvement is significate (+ about 1 point) in the field of translation, not to mention that our model is much better than some other submissions https://openreview.net/forum?id=HktJec1RZ. For English->German task, we noticed that “Attention is all you need” is the state of the art but it is not LSTM-based; thus we didn’t list that result in the paper.\n\nFor language model, thanks for the reference, we have studied the papers. By leveraging several tricks in literature, we significantly improve the performance from 77.4 to 52.1 (the best number as far as we know) without using any hyperparameter search method, we reported the detail in the paper. \n",
"[Regarding the small temperature experiment]\n\nThanks for figure this out. First, we want to point out that theoretically it doesn’t help: Simply consider function f_{W,b}(x) =sigmoid((Wx+b)/tau), where tau is the temperature, it is computationally equivalent to f_{W’,b’}(x) =sigmoid(W’x+b’) by setting W’=W/tau and b’ = b/tau. Then using a small temperature is equivalent to rescale the initial parameter as well as gradient to a larger range. Usually, setting an initial point in a larger range with a larger learning rate will harm the optimization process.\n\nWe also did a set of experiments and updated the paper to show it doesn’t help in practice.\n\n[Regarding the binary net]\n\nDespite the different between the model structure (gate-based LSTM v.s. CNN), the main difference is that we regularize the output of the activation of the gates to binary value only, but not to regularize the weights. One should notice that the accuracy of Binary Net is usually much worse than the baseline model. However, we show that (1) Our models generalize well among different tasks. (2) The accuracy of the models after low-rank/low-precision compression using our method is competitive to (or even better than) the baseline. Besides, our techniques can also be applied to binarynet training.\n\n[Regarding apply rounding / low-rank for all the parameters]\n\nWe will do the experiment but as our proposed method is focusing on LSTM unit. We are not sure whether the performance will drop a lot when we apply rounding/low-rank to embedding and attention.",
"\n[Regarding the experiment]\n\nWe are afraid that the reviewer makes a wrong judgement to the performance results, our model is much better than the baseline on two tasks. \n\nFor machine translation, we achieved the SOTA performance on German->English task and the improvement is significate (+ about 1 point) in the field of translation, not to mention that our model is much better than some other submissions https://openreview.net/forum?id=HktJec1RZ. \n\nFor language model, by leveraging several tricks in literature, we significantly improve the performance from 77.4 to 52.1 (the best number as far as we know). This number is achieved without using any hyperparameter search method, we reported the detail in the paper. \n\n[Regarding the motivation]\n\nWe have discussed in section 2.1 that there are a bunch of work empirically and theoretically studying the relationship between flat loss surface and generalization, not to mention that there are some continuous study and verification in ICLR 2018 submissions, e.g., https://openreview.net/forum?id=HkmaTz-0W . Thus our method is well motivated: by pushing the softmax operator towards its flat region will lead to better generalization. \n\n[Regarding the novelty of the paper]\n\nWe are regretful to see the reviewer claims that there is little novelty in the paper. First, we are the first to apply Gumbel-softmax trick for robust training of LSTM by pushing the value of the gate to the boundary. We empirically show that our method achieves better accuracy even achieves the SOTA performance in some tasks. Second, we show that by different low-precision/low-rank compressions, our model is even still comparable to the baseline models before compressions. \n"
] | [
6,
4,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJiaRbk0-",
"iclr_2018_rJiaRbk0-",
"iclr_2018_rJiaRbk0-",
"iclr_2018_rJiaRbk0-",
"Syo-smqgf",
"HyA3jBqgG",
"S15OPlugz"
] |
iclr_2018_r1h2DllAW | Discrete-Valued Neural Networks Using Variational Inference | The increasing demand for neural networks (NNs) being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs. While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs. To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions. In our experiments, we show that our model achieves state of the art performance on several real world data sets. In addition, the resulting models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference. | rejected-papers | This paper presents a somewhat new approach to training neural nets with ternary or low-precision weights. However the Bayesian motivation doesn't translate into an elegant and self-tuning method, and ends up seeming kind of complicated and ad-hoc. The results also seem somewhat toy. The paper is fairly clearly written, however. | test | [
"Sy_eXHHfz",
"BkZJDo8-f",
"ry637oIWz",
"SJJ5dvJgM",
"HkshYX9xz",
"H1S_cEcxM"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We added a revision of our paper where we changed the following aspects.\n\n(1) We left the structure of the paper largely unchanged. We removed some details about our method from the introduction but we kept the related work there as it is needed to motivate our work and to highlight the gap in the literature we intend to fill.\n\n(2) We added results of experiments performed on a larger TIMIT data set for phoneme classification. Furthermore, the data sets are now described in more detail in the supplementary material. Our model (single: with sign activation function, 3 bit weights in the input layer, and ternary weights in the following layers) performs on par with NN (real) and it outperforms NN STE on the TIMIT data set and the more challenging variants of MNIST with different kinds of background artifacts.\n\n(3) We added the training time in the experiments section.\n\n(4) A few minor changes.",
"Thank you for your valuable comments.\n\n- A general comment:\nIt appears that this review is mainly concerned with our method being slightly off from the textbook Bayesian approach. One major motivation of obtaining discrete-valued NNs by first inferring a distribution is that the distribution parameters are real-valued and can therefore be optimized with gradient based optimization. Directly optimizing the discrete weights would result in an intractable combinatorial optimization problem.\n\n- Issue with Gaussian approximation:\nWe agree that, due to the Gaussian approximation, we are not maximizing a lower bound anymore (at least we did not investigate on this). However, our motivation was to come up with a principled scheme to obtain resource-efficient NNs with discrete weights that achieve good performance, and, therefore, we accept to slightly depart from the full Bayesian path. By \"more principled\" we refer to existing work on resource efficient NNs (rather than to work in the Bayesian literature) where mostly some quantization step is applied or, in the case of the straight through estimator, a gradient that is clearly zero is \"approximated\" by something non-zero. With these methods, it is often not even clear if the gradient update procedures are optimizing any objective. We believe that this direction of research requires more principled methods such as the presented one.\n\n- Likelihood weighting:\nWe believe that the prior-term dominating the likelihood-term is an artifact of variational inference by minimizing the KL-divergence between approximate and true posterior that especially manifests itself in case of NNs. In many hierarchical Bayesian models, there is a latent variable per data point that one aims to estimate and therefore the numbers of KL-terms and likelihood-terms are balanced at all times. For NNs, the number of KL-terms is fixed as soon as we fix the structure of the NN, and, as is commonly known, larger NNs tend to perform better. Hence, using vanilla KL-minimization results in a dilemma if we want to estimate the parameters of a NN whose number of parameters is orders of magnitudes larger than the number of data samples. Using a flat (constant) prior only partly solves this problem as an entropy-term, which itself dominates the likelihood-term, would still be present. This entropy-term would cause the approximate posterior to take on larger variances which would again severely degrade performance. We agree that parameter sharing could help since it would reduce the number of KL-terms, but this would result in a different model.\n\n- Performance:\nOur \"single\" model outperforms the NN STE [1] by ~2-3% on MNIST background and MNIST background random, respectively. On the other data sets we are on par. Furthermore, we achieve similar performance as NN (real) which is more computationally expensive to evaluate.\n\n- Advantages of our model (compared to other resource-efficient methods):\n - Well defined objective function\n - Probabilistic forward pass simultaneously handles both discrete distributions and the sign activation function\n - Flexible choice of discrete weight space; can be different in each layer (other methods are often very rigid in this regard)\n - Low precision weights in the first layer\n - Additional information available in the approximate posterior\n\n- Sparsity:\nRegarding other methods: Binary and real weights (e.g. as in [1]), respectively, do not exhibit any sparsity at all, i.e. each connection of the NN is present. 
We point out that our method introduces, at least on some data sets, a substantial amount of sparsity that can be utilized to reduce computational costs. This was not a design goal in itself and we do not claim that our method is competitive with other approaches that explicitly aim to achieve sparsity. We think that the way that sparsity arises in our model is compelling: The value zero is explicitly modeled and we do not prune weights after training by some means of post-processing.\n\n- Minor comment on the title:\nIt seems there is a misunderstanding. In our experiments, the \"single\" model refers to a single low-resource NN obtained as the most probable NN from the approximate posterior. In this NN, the activations and outputs are *not* continuous - given that the inputs are low-precision fixed-point values (as in images), the activations in the first hidden layer are obtained by low-precision fixed point operations (or equivalently integer operations), and the activations in the following layers are obtained by accumulating -1 and +1. The activation functions are sign functions that result in either -1 or +1. The output activations are also integer valued as they only accumulate -1 and +1 (the softmax is not needed at test time). Only for the \"pfp\" model and during optimization we have to deal with real-valued quantities.\n\n- Other minor comments:\nThank you, we will use your comments to improve the paper.\n\n[1] Hubara et al., Binarized neural networks, NIPS 2016",
"Thank you for your valuable comments.\n\n- Data sets:\nWe are currently running another experiment on a TIMIT data set (phoneme classification) which is larger (N~140k), has more classes (39), but has less features (92). Other papers on resource efficient NNs typically evaluate on larger image tasks like CIFAR-10, SVHN and ImageNet. However, we refrain from doing so as we have not yet considered convolutional NNs and it is known that plain fully-connected NNs are far too weak to come close to state-of-the-art performance.\n\n- Likelihood-weighting and annealing/variational tempering:\nWe assume you are referring to [1]. We agree that our weighting scheme is similar to these methods but they are used in different ways and for different purposes. We will comment on this in our revision.\n\n- Structure of the paper:\nThank you for pointing this out. We will consider this in our revision.\n\n- Training time and Bayesian optimization:\nTraining time naturally increases compared to the training time of plain NNs since computing the required first and second moments of the probabilistic forward pass is more time-consuming. On a Nvidia GTX 1080 graphics card, a training epoch on MNIST with a minibatch size of 100 (500 parameter updates) takes approximately 8.8 seconds for the general 3 bit distribution and 7.5 seconds for the discretized Gaussian distribution compared to 1.3 seconds for plain NNs. Especially the first layer is a bottleneck since here the moments require computing weighted sums over all discrete values. Of course we could have hand-tuned the hyperparameters, but we believe that Bayesian optimization is a useful tool that relieves us from putting too much effort into finding suitable hyperparameters. Furthermore, it allows for a fair comparison between models by evaluating them for the same number of iterations. We will include the training times in our revision.\n\n[1] S. Mandt et al., Variational Tempering, AISTATS 2016",
"Summary: \nThe paper considers a Bayesian approach in order to infer the distribution over a discrete weight space, from which they derive hardware-friendly low precision NNs. This is an alternative to a standard quantization step, often performed in cases such as emplying NNs on embedded devices.\nThe NN setting considered here contains sign activation functions.\nThe experiments conducted show that the proposed model achieves nice performance on several real world data Comments\n\nDue to an error in the openreview platform, I didn't have the chance to bid on time. This is not within my areas of expertise. Sorry for any inconvenience.",
"In this work, discrete-weight NNs are trained using the variational Bayesian framework, achieving similar results to other state-of-the-art models. Weights use 3 bits on the first layer and are ternary on the remaining layers.\n\n\n- Pros:\n\nThe paper is well-written and connections with the literature properly established.\n\nThe approach to training discrete-weights NNs, which is variational inference, is more principled than previous works (but see below).\n\n- Cons:\n\nThe authors depart from the original motivation when the central limit theorem is invoked. Once we approximate the activations with Gaussians, do we have any guarantee that the new approximate lower bound is actually a lower bound? This is not discussed. If it is not a lower bound, what is the rationale behind maximizing it? This seems to place this work very close to previous works, and not in the \"more principled\" regime the authors claim to seek.\n\nThe likelihood weighting seems hacky. The authors claim \"there are usually many more NN weights than there are data samples\". If that is the case, then it seems that the prior dominating is indeed the desired outcome. A different, more flat prior (or parameter sharing), can be used, but the described reweighting seems to be actually breaking a good property of Bayesian inference, which is defecting to the prior when evidence is lacking.\n\nIn terms of performance (Table 1), the proposed method seems to be on par with existing ones. It is unclear then what the advantage of this proposal is.\n\nSparsity figures are provided for the current approach, but those are not contrasted with existing approaches. Speedup is claimed with respect to an NN with real weights, but not with respect existing NNs with binary weights, which is the appropriate baseline.\n\n\n- Minor comments:\n\nPage 3: Subscript t and variable t is used for the targets, but I can't find where it is defined.\n\nOnly the names of the datasets used in the experiments are given, but they are not described, or even better, shown in pictures (maybe in a supplementary).\n\nThe title of the paper says \"discrete-valued NNs\". The weights are discrete, but the activations and outputs are continuous, so I find it confusing. As a contrast, I would be less surprised to hear a sigmoid belief network called a \"discrete-valued NN\", even though its weights are continuous.",
"The authors consider the problem of ultra-low precision neural networks motivated by \nlimited computation and bandwidth. Their approach first posits a Bayesian neural network\na discrete prior on the weights followed by central limit approximations to efficiently \napproximate the likelihood. The authors propose several tricks like normalization and cost \nrescaling to help performance. They compare their results on several versions of MNIST. The \npaper is promising, but I have several questions:\n\n1) One major concern is that the experimental results are only on MNIST. It's important \nto have another (larger) dataset to understand how sensitive the approach is to \ncharacteristics of the data. It seems plausible that a more difficulty problem may \nrequire more precision.\n\n2) Likelihood weighting is related to annealing and variational tempering\n\n3) The structure of the paper could be improved:\n - The introduction contains way too many details about the method \n and related work without a clear boundary.\n - I would add the model up front at the start of section 2\n - Section 2.1 could be reversed or equations 2-5 could be broken with text \n explaining each choice \n\n4) What does training time look like? Is the Bayesian optimization necessary?"
] | [
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
1,
4,
4
] | [
"iclr_2018_r1h2DllAW",
"HkshYX9xz",
"H1S_cEcxM",
"iclr_2018_r1h2DllAW",
"iclr_2018_r1h2DllAW",
"iclr_2018_r1h2DllAW"
] |
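A minimal sketch of the integer-only test-time forward pass described in the author response above (3-bit weights in the first layer, ternary weights elsewhere, sign activations, and integer-valued outputs with no softmax). This is not the authors' implementation; the layer shapes, weight values, and input size below are placeholder assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code) of inference in the single most
# probable discrete network: fixed-point (integer) multiply-accumulate in the
# first layer, accumulation of -1/+1 afterwards, sign activations, and an
# argmax over integer output activations (no softmax needed at test time).
# All shapes and weight values are invented for the example.
import numpy as np

def sign(x):
    # Sign activation producing only -1 or +1 (ties broken toward +1).
    return np.where(x >= 0, 1, -1).astype(np.int64)

def discrete_forward(x_fixed_point, w_first, ternary_layers, w_out):
    """x_fixed_point: integer inputs (e.g. 8-bit pixel values).
    w_first: 3-bit integer weights of the first layer (values in -4..3).
    ternary_layers: weight matrices with entries in {-1, 0, +1}.
    w_out: ternary output-layer weights."""
    # First hidden layer: low-precision fixed-point / integer operations.
    h = sign(x_fixed_point @ w_first)
    # Remaining hidden layers only accumulate -1 and +1 contributions.
    for w in ternary_layers:
        h = sign(h @ w)
    # Output activations stay integer valued; pick the largest one.
    return np.argmax(h @ w_out)

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=784)            # 8-bit input (placeholder size)
w1 = rng.integers(-4, 4, size=(784, 128))     # 3-bit first-layer weights
w2 = rng.integers(-1, 2, size=(128, 128))     # ternary hidden layer
w_out = rng.integers(-1, 2, size=(128, 10))   # ternary output layer
print(discrete_forward(x, w1, [w2], w_out))
```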
iclr_2018_S1Y7OOlRZ | Massively Parallel Hyperparameter Tuning | Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs. For such models, we cannot afford to train candidate models sequentially and wait months before finding a suitable hyperparameter configuration. Hence, we introduce the large-scale regime for parallel hyperparameter tuning, where we need to evaluate orders of magnitude more configurations than available parallel workers in a small multiple of the wall-clock time needed to train a single model. We propose a novel hyperparameter tuning algorithm for this setting that exploits both parallelism and aggressive early-stopping techniques, building on the insights of the Hyperband algorithm. Finally, we conduct a thorough empirical study of our algorithm on several benchmarks, including large-scale experiments with up to 500 workers. Our results show that our proposed algorithm finds good hyperparameter settings nearly an order of magnitude faster than random search. | rejected-papers | This paper presents a simple tweak to Hyperband to allow it to be run asynchronously on a large cluster, and contains reasonably large-scale experiments.
The paper is written clearly enough, and will be of interest to anyone running large-scale ML experiments. However, it falls below the bar by:
1) Not exploring the space of related ideas more.
2) Not providing novel insights.
3) Not attempting to compare against model-based parallel approaches. | test | [
"ry6owJ9lM",
"ByZovF3eM",
"r1GP9AFeM",
"SJgMG1tmM",
"H1W9naEXG",
"SkRUhpVmz",
"Bk5no64Xf",
"Bk0Qc6EmG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper introduces a simple extension to parallelize Hyperband. \n\nPoints in favor of the paper:\n* Addresses an important problem\n\nPoints against:\n* Only 5-fold speedup by parallelization with 5 x 25 workers, and worse performance in the same budget than Google Vizier (even though that treats the problem as a black box)\n* Limited methodological contribution/novelty\n\n\nThe paper's methodological contribution is quite limited: it amounts to a straight-forward parallelization of successive halving (SHA). Specifically, whenever a worker frees up, do a new run on it, at the highest rung possible while making sure to not run too many runs for too high rungs. (I am pretty sure that is the idea, even though Algorithm 1, which is supposed to give the details, appears to have a bug in Procedure get_job -- it would always either pick the highest rung or the lowest!)\n\nEmpirically, the paper strangely does not actually evaluate a parallel version of Hyperband, but only evaluates the 5 parallel variants of SHA that Hyperband would run, each of them with all workers. The experiments in Section 4.2 show that, using 25 workers, the best of these 5 variants obtains a 5-fold speedup over sequential Hyperband on CIFAR and an 8-fold speedup on SVHN. I am confused: the *best* of 5 SHA variants only achieves a 5-fold speedup using 25 workers? I.e., parallel Hyperband, which would run the 5 SHA variants in parallel, would require 125 workers but only yield a 5-fold speedup? If I understand this correctly, I would clearly call this a negative result.\n\nLikewise, for the large-scale experiment, a single run of Vizier actually yields as good performance as the best of the 5 SHA variants, and it is unknown beforehand which SHA variant works best -- in this example, actually Bracket 0 (which is often the best) stagnates. Parallel Hyperband would run the 5 SHA variants in parallel, so its performance at a budget of 10R with a total of 500 workers can be evaluated by taking the minimum of the 5 SHA variants at a budget of 2R. This would obtain a perplexity of above 90, which is quite a bit worse than Vizier's result of about 82. In general, the performance of parallel Hyperband can be computed by taking the minimum of the SHA variants and multiplying the time taken by 5; this shows that at any time in the plot (Figure 3, left) Vizier dominates parallel Hyperband. Again, this is apparently a negative result. (For Figure 3, right, no results for Vizier are given yet.)\n\nIf I understand correctly, the experiment in Section 4.4 does not involve any run of Hyperband, but merely plots predictions of Qi et al.'s Paelo framework of how many models could be evaluated with a growing number of GPUs.\n\nTherefore, all empirical results for parallel Hyperband reported in the paper appear to be negative. This confuses me, especially since the authors seem to take them as positive results. \nBecause the original Hyperband paper argued that Bayesian optimization does not parallelize as well as random search / Hyperband, and because Hyperband has been reported to work much better than Bayesian optimization on a single node, I would have expected clear improvements of parallel Hyperband over parallel Bayesian optimization (=Vizier in the authors' setup). However, this is not what I see in the results. Am I mistaken somewhere? 
If not, based on these negative results the paper does not seem to quite clear the bar for ICLR.\n\n\nDetails, in order of appearance in the paper:\n\n- Vizier: why did the authors only use Vizier's default Bayesian optimization algorithm? The Vizier paper by Golovin et al (2017) states that for large budgets other optimizers often perform better, and the budget in the large scale experiments is as high as 5000 function evaluations. Also, isn't there an automatic choice built into Vizier to pick the optimizer expected to be best? I think using a suboptimal version of Vizier would be a problem for the experimental setup.\n- Algorithm 1: this needs some improvement; in particular fixing the bug I mentioned above.\n- Section 3.1: Li et al (2017) do not analyze any algorithm theoretically. They also do not discuss finite vs. infinite horizon. I believe the authors meant Li et al's arXiv paper (2016) in both of these cases.\n- Section 3.1, point 2: this is unclear to me, even though I know Hyperband very well. Can you please make this clearer?\n- \"A complete theoretical treatment of asynchronous SHA is out of the scope of this paper\" -> is some theoretical treatment in scope?\n- Section 4.1: It seems very useful to already recommend configurations in each rung of Hyperband, and I am surprised that the methods section does not mention this. From the text in this experiments section, it feels a little like that was always part of Hyperband; I didn't think it was, so I checked the original papers and blog posts, and both the ICLR 2017 and the arXiv 2016 paper state \"In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R.\" and Kevin Jamieson's blog post on Hyperband (https://people.eecs.berkeley.edu/~kjamieson/hyperband.html) explicitly states: \"While random and the Bayesian Optimization algorithms output their first recommendation after max_iter iterations, Hyperband does not output anything until about max_iter(logeta(max_iter)+1) iterations [...]\"\nTherefore, recommending after each rung seems to be a contribution of this paper, and I think it would be nice to read about this in the methods section. \n- Experiment 1 (SVM) used dataset size as a budget, which is what Fabolas (\"Fast Bayesian optimization on large datasets\") is designed for according to Klein et al (2017). On the other hand, Experiments (2) and (3) used the number of epochs as a budget, and Fabolas is not designed for that (one would want to use a different kernel, for epochs, e.g., like Freeze-Thaw Bayesian optimization (FTBO) by Swersky et al (2014), instead of a kernel made for dataset sizes). Therefore, it is not surprising that Fabolas does not work as well in those cases. The case of number of epochs as a budget would be the domain of FTBO. I know that there is no reference implementation of FTBO, so I am not asking for a comparison, but the comparison against Fabolas is misleading for Experiments (2) and (3). This doesn't really change anything for the paper: the authors could still make the case that Fabolas hasn't been designed for this case and that (to the best of my knowledge) there simply isn't an implementation of a BO algorithm that is. Fabolas is arguably the closest thing, so the results could still be reported, just not as an apples-to-apples comparison; probably best as \"Fabolas-like, with dataset size kernel\" in the figure. 
The justification to not compare against Fabolas in the parallel regime is clearly valid.\n- A clarification question: Section 4.4 does not report on any runs of actual neural networks, does it? And not on any runs of Hyperband, correct? Do I understand the reasoning correctly as pointing out that standard parallelization across multiple GPUs is not great, and that thus, in combination with parallel Hyperband, runs should be done mostly on one GPU only? How does this relate to the results in the cited paper \"Accurate, Large-batch SGD: Training ImageNet in 1 Hour\" (https://arxiv.org/abs/1706.02677)? Quoting from its abstract: \"Using commodity hardware, our implementation achieves ∼ 90% scaling efficiency when moving from 8 to 256 GPUs.\" That seems like a very good utilization of parallel computing power?\n- There is no conclusion / future work.\n\n----------\nEdit after author rebuttal:\nI thank the reviewers for their rebuttal. This cleared up some points, but some others are still open.\n(1) and (2) Unfortunately, I still do not agree that the need for 5*25 workers to get a 5-fold to 8-fold speedup is a positive result. Similarly, I would interpret the results in Figure 3 differently than the authors. For the comparison against Vizier the authors argue that they could just take the lowest 2 brackets of Hyperband; but running both of these two would still be 2x slower than Vizier. And we can't only run the best bracket because the information which one is the best is not available ahead of time. In fact, it is the entire point of Hyperband to hedge across multiple brackets including the one that is random search; one *could* just use the smallest bracket, but that is a heuristic and has no theoretical guarantees of being better (or at least not worse by more than a bounded factor) than random search. \nOrthogonally: the comparison to Vizier (or any other baseline) is still missing for the LSTM acoustic model.\n\n(3) Concerning SOTA results, I have to agree with AnonReviewer3: one way to demonstrate success is to show competitive performance on a dataset (e.g., CIFAR) on which other researchers can also evaluate their algorithms on. Getting 17% on CIFAR-10 does not fall into that category. Nevertheless, I agree with the authors that another way to demonstrate success is to show competitive performance on a *combination* of a dataset and a design space, but for that to be something that other researchers can compare to requires the authors making publicly available the implementations they have optimized; without that public availability, due to a host of possible confounding factors, it is impossible to judge whether state-of-the-art performance on such a combination of dataset and design space has been achieved. I therefore recommend that the authors make the entire code they used for training CIFAR available; I don't expect this to have anything new in there, but it's a useful benchmark.\nLikewise, for the LSTM on PTB, DeepMind used Google Vizier (https://arxiv.org/abs/1707.05589) to achieve *perplexities below 60* (compared to the results above 80 reported by the authors). Just as above, I therefore recommend that the authors make their pipeline for LSTB on PTB available. Likewise for the LSTM acoustic model.\n\n(4) I'm confused that Section 4.4 does relate to SHA/Hyperband. Of course, there are some diminishing returns of running an optimizer across multiple GPUs. But similarly, there are diminishing returns of parallelizing SHA (e.g., the 5-fold speedup on 125 workers above). 
So the natural question that would be nice to answer is which combination of the two will yield the best results. Relatedly, the paper by Goyal et al seems to show that the weak scaling regime leads to almost linear speedups; why do the authors then analyze the strong scaling regime that does not appear to work as well?\n\nOverall, the rebuttal did not change my evaluation and I kept my original score.",
"This paper adapts the sequential halving algorithm that underpins Hyperband to run across multiple workers in a compute cluster. This represents a very practical scenario where a user of this algorithm would like to trade off computational efficiency for a reduction in wall time. The paper's empirical results confirm that indeed significant reductions in wall time come with modest increases in overall computation, it's a practical improvement.\n\nThe paper is crisply written, the extension is a natural one, the experiment protocols and choice of baselines are appropriate.\n\nThe left panel of figure 3 is blurry, compared with the right one.",
"In this paper, the authors extend Hyperband--a recently proposed non model based hyperparameter tuning procedure--to better support parallel evaluation. Briefly, Hyperband builds on a \"successive halving\" algorithm. This algorithm allocates a budget of B total time to N configurations, trains for as long as possible until the budget is reached, and then recurses on the best N/2 configurations--called the next \"rung\" in the paper. Thus, as optimization proceeds, more promising configurations are allowed more time to train. This basic algorithm has the problem that different optimization tasks may require different amounts of time to become distinguishable; Hyperband solves this by running multiple rounds of succesive halving--called \"brackets\"--varying the initial conditions. That is, should successive halving start with more initial configurations (but therefore less budget for each configuration), or a small number of configurations. The authors further extend Hyperband by allowing the successive halving algorithm to be run in parallel. To accomplish this, when a worker looks for a job it prefers to run jobs on the next available rung; if none are currently outstanding, a new job is started on the lowest rung.\n\nOverall, I think this is a natural scheme for parallelzing Hyperband. It is extremely simple (a good thing), and neatly circumvents the obvious problem with parallelizing Hyperband, which is that successive halving naturally limits the number of jobs that can be done. I think the non-model based approach to hyperparameter tuning is compelling and is of interest to the AutoML community, as it raises an obvious question of how approaches that exploit the fact that training can be stopped any time (like Hyperband) can be combined with model-based optimization that attempt to avoid evaluating configurations that are likely to be bad.\n\nHowever, I do have a few comments and concerns for the for the authors to address that I detail below. I will be more than happy to modify my evaluation if these concerns are addressed by the authors.\n\nFirst and most importantly, can the authors discuss the final results achieved by their hyperparameter optimization compared to state-of-the-art results in the field? I am not sure what SOTA is on the Penn Treebank or acoustic modeling task, but obviously the small ConvNet getting 20% error on CIFAR10 is not state of the art. Do the authors think that their technique could improve SOTA on CIFAR10 or CIFAR100 if applied to a modern CNN architecture like a ResNet or DenseNet? \n\nObviously these models take a bit longer to train, but with the ability to train a large number of models in parallel, a week or two should be sufficient to finish a nontrivial number of iterations. The concern that I have is that we continue to see these hyperparameter tuning papers that discuss how important the task is, but--to the best of my knowledge--the last paper to actually improve SOTA using automated hyperparameter tuning was Snoek et al., 2012., and there they even achieved 9.5% error with data augmentation. Are hyperparameters just too well tuned on these tasks by humans, and the idea is that Hyperband will be better on new tasks where humans haven't been working on them for years? 
In BayesOpt papers, hyperparameter tuning has often been used simply as a task to compare optimization performance, but I don't think this argument applies to Hyperband because it isn't really applicable to blackbox functions outside of hyperparameter tuning because it explicitly relies on the fact that training can be cut short at any time.\n\nSecond (and this is more of a minor point), I am a little baffled by Figure 4. Not by the argument you are trying to make--it of course makes sense to me that additional GPUs would result in diminishing returns as you become unable to fully exploit the parallelism--but rather the plots themselves. To explain my confusion, consider the 8 days curve in the AlexNet figure. I read this as saying, with 1 GPU per model, in 8 days, I can consider 128 models (the first asterisk). With 2 GPUs per model, in 8 days, I can consider slightly less than 128 models (the second asterisk). By the time I am using 8 GPUs per model, in 8 days, I can only train a bit under 64 models (the fourth asterisk). The fact that these curves are monotonically decreasing suggests that I am just reading the plot wrong somehow -- surely going from 1 GPU per model to 2 should improve performance somewhere? Additionally, shouldn't the dashed lines be increasing, not horizontal (i.e., assuming perfect parallelism, if you increase the number of GPUs per model--the x axis--the number of models I can train in 8 days--the y axis--increases)?",
"A revised version of our paper, incorporating the feedback from reviewers, has been posted.",
"(1) Wall-Clock Problem:\n\n- (Reviewer 1) “The experiments in Section 4.2 show that, using 25 workers, the best of these 5 variants obtains a 5-fold speedup over sequential Hyperband on CIFAR and an 8-fold speedup on SVHN. I am confused: the *best* of 5 SHA variants only achieves a 5-fold speedup using 25 workers? I.e., parallel Hyperband, which would run the 5 SHA variants in parallel, would require 125 workers but only yield a 5-fold speedup? If I understand this correctly, I would clearly call this a negative result.”\n\nThe small scale experiment in Section 4.2 demonstrates that our algorithm succeeds in the wall-clock constrained setting, which is precisely the focus of this paper. We show that asynchronous SHA finds a good configuration for this benchmark in the time to train one model to completion. For these experiments, we observed only 5-8x speedup for 25 workers because a few hundred configurations were sufficient to find a good setting and exploring over 6k configurations with 25 workers did not offer significant benefit. Hence, most of the speedup can be attributed to reducing the time for training a single configuration to completion from 5R in the sequential setting to ~1R for asynchronous SHA (which again is the stated goal of this work). \n\n(2) Successive Halving versus Hyperband:\n\n- (Reviewer 1) “Likewise, for the large-scale experiment, a single run of Vizier actually yields as good performance as the best of the 5 SHA variants, and it is unknown beforehand which SHA variant works best -- in this example, actually Bracket 0 (which is often the best) stagnates. Parallel Hyperband would run the 5 SHA variants in parallel, so its performance at a budget of 10R with a total of 500 workers can be evaluated by taking the minimum of the 5 SHA variants at a budget of 2R. This would obtain a perplexity of above 90, which is quite a bit worse than Vizier's result of about 82. In general, the performance of parallel Hyperband can be computed by taking the minimum of the SHA variants and multiplying the time taken by 5; this shows that at any time in the plot (Figure 3, left) Vizier dominates parallel Hyperband. Again, this is apparently a negative result.”\n\nIn this work, we focused on generalizing Successive Halving (SHA) for the parallel setting. As stated in the end of Section 3, we believe that in practice, users often try one or two brackets of Successive Halving that perform aggressive early-stopping. This is supported by Li, et. al. (2017), which showed that brackets 0 and 1 were the top two performing brackets in all their experiments covering a wide array of hyperparameter tuning tasks. We also found this to be true in our experiments, and in our experience, aggressive early-stopping is highly effective, especially for neural network tasks. Moreover, for modern day hyperparameter tuning problems with high-dimensional search spaces and models that are very expensive to train, aggressive early-stopping is necessary for the problem to be tractable. \n\nIn light of these arguments, we believe it is reasonable to compare Vizier to just the most aggressive brackets of SHA. Our results show bracket 0 and bracket 1 are competitive with Vizier, despite being much simpler and easier to implement. Additionally, whereas the nonparametric regression early-stopping method used in Vizier (based on the work in Golovin, et. al. (2017)) is heuristic in nature, SHA offers a way to perform principled early-stopping. 
\n\nThat said, we agree with the reviewer that a comparison to Hyperband would be informative. Hence, we have performed additional experiments for “Vizier 5x (Early-Stop)”, which represents Vizier with early-stopping run with 5 times the resources (i.e. 2.5k workers, which is the same as that used for 5 brackets of SHA). We will add this to the chart with the PTB results in Figure 3. The results show that Vizier 5x (Early-Stop) performs comparably to brackets 0 and bracket 1, and hence to parallel Hyperband, which takes the minimum across all 5 brackets. Remarkably, Hyperband matches the performance of Vizier 5x without any of the optimization overhead associated with Bayesian methods, using simple random sampling and adaptive resource allocation. \n\n- (Reviewer 1) “worse performance in the same budget than Google Vizier (even though that treats the problem as a black box)”\n\nAs stated in our response to the previous comment, asynchronous SHA performs comparably to Vizier with and without early-stopping, and similarly, parallel Hyperband performs comparably to Vizier 5x with early-stopping. We note the early-stopping variant of Vizier does not treat the problem as “black-box.” \n",
"(3) State-of-the-art Results:\n\n- (Reviewer 3) “[C]an the authors discuss the final results achieved by their hyperparameter optimization compared to state-of-the-art results in the field?” ... “Do the authors think that their technique could improve SOTA on CIFAR10 or CIFAR100 if applied to a modern CNN architecture like a ResNet or DenseNet?” ... “Are hyperparameters just too well tuned on these tasks by humans, and the idea is that Hyperband will be better on new tasks where humans haven't been working on them for years?” \n\nLi, et. al. (2017) compared the performance of Hyperband to state-of-the-art hyperparameter tuning results on CIFAR-10 using the cudaconvnet architecture. When limited to this architecture, Hyperband was able to achieve state-of-the-art (SOTA) results, exceeding the accuracy of the hand-tuned model by ~1% point. We hypothesize that SHA/Hyperband can improve upon SOTA on more modern CNN architectures as well, since it would be able to explore the space much faster than popular methods like random search and hand-tuning. \n\nFor the PTB experiment in Section 4.3, our model family and search space encompasses the 2-layer LSTMs studied by Zaremba, et. al. (2015). Specifically, our range for # of hidden LSTM nodes of 10 to 1000 is closest to the medium model with 650 nodes considered by Zaremba, et. al. (2015). Their medium model achieved a test perplexity of 82.7 while the best model found by bracket 1 in our experiment achieved a test perplexity of 81.3. Although the search space we used is not directly comparable to the medium model, we believe this is an encouraging result in terms of achieving SOTA on this specific dataset and model family. We will add a paragraph to the revision discussing the performance of our results relative to other published works.\n\n- (Reviewer 1) “Vizier: why did the authors only use Vizier's default Bayesian optimization algorithm?”\n\nThe default Vizier algorithm that we used automatically selects the Bayesian optimization method that is expected to perform the best. Batched Gaussian Process Bandits is used when the number of trials is under 1k, otherwise Vizier uses a proprietary local-search algorithm (Golovin, et. al., 2017).\n\n(4) Training with Multiple GPUs (Section 4.4 / Figure 4):\n\n- (Reviewer 1) “A clarification question: Section 4.4 does not report on any runs of actual neural networks, does it? And not on any runs of Hyperband, correct?”\n\nYes, this is correct. All numbers reported in Section 4.4. are *predicted* results using the PALEO performance model.\n\n- (Reviewer 1) “Do I understand the reasoning correctly as pointing out that standard parallelization across multiple GPUs is not great, and that thus, in combination with parallel Hyperband, runs should be done mostly on one GPU only? How does this relate to the results in the cited paper ‘Accurate, Large-batch SGD: Training ImageNet in 1 Hour\" (https://arxiv.org/abs/1706.02677)? Quoting from its abstract: \"Using commodity hardware, our implementation achieves ∼ 90% scaling efficiency when moving from 8 to 256 GPUs.’ That seems like a very good utilization of parallel computing power?”\n\nWe measure speedups in terms of time per iteration under a *strong scaling regime*, where batch size stays the same as the # of GPUs used to train a single model increases. This results in a serial-equivalent execution. In contrast, the paper (Goyal, et al.) referenced by the reviewer focuses on the weak scaling regime, where batch size increases with # of GPUs per model. 
Weak scaling is of course advantageous from a computational perspective, however, it does not preserve serial-equivalency, and the resulting models can often underperform (in terms of accuracy) relative to models trained using smaller batches. Goyal, et. al. was impressively able to achieve high accuracy on ImageNet with Resnet50 while using a large batch sizes by adjusting the learning rate, which certainly makes the use of large batch sizes/weak scaling for distributed training more appealing. However, it is unclear whether these techniques generalize to other tasks and models. Furthermore, the result in Goyal et. al does not invalidate either the specific results we present or the more general point about the attractiveness of leveraging distributed resources in an embarrassingly parallel fashion in the context of problems requiring hyperparameter optimization.\n\n\n- (Reviewer 3) Confusion regarding Figure 4.\n\nWe will improve the chart as well as the caption in Figure 4 to simplify the presentation. The lines show how many different configurations/models can be evaluated for a fixed number of GPUs (128 = 2^7 Tesla K80s) and a given time budget. A higher number of GPUs per model is required when we want to train an individual model faster. However, since training speed does not scale linearly with # of GPUs per model, training a model faster comes at the expense of fewer total models trained. \n",
"- (Reviewer 1) “Limited methodological contribution/novelty”\n\nWe respectfully disagree with the reviewer’s comment. While our resulting algorithm is simple, we certainly did not start out with this approach. We initially tried to parallelize Successive Halving by parallelizing each bracket rung by rung. However, this approach suffers from a geometrically decreasing number of jobs as the algorithm progresses to higher rungs, and is highly susceptible to stragglers and dropped jobs. While the first issue can be addressed by starting more brackets as the number of jobs decrease, the second issue is more serious and harder to address. We also considered taking a divide and conquer approach, i.e. running a separate bracket of Successive Halving on each machine. However, this approach is poorly suited for the wallclock-time constrained setting, because it takes the same amount of time as the sequential algorithm to return a configuration trained to completion. In contrast, the algorithm we present addresses all of these issues, while retaining the benefits of Successive Halving. Moreover, as noted by Reviewer 3, we view the simplicity of our resulting algorithm as one of its core benefits.\n\n- (Reviewer 1) “Section 3.1, point 2: this is unclear to me, even though I know Hyperband very well. Can you please make this clearer?”\n\nThe algorithm analyzed by Li, et. al. (2016) doubles the budget for SHA in the outer loop, thereby increasing the starting number of configurations for each bracket of SHA. This is required by the theoretical analysis, which assumes that eventually the optimal bracket of SHA will be run with enough configurations so that an epsilon good configuration is in the starting set with high probability. In contrast, the actual algorithm that Li, et. al. (2016) recommend using in practice keeps the starting number of configurations constant to prevent doubling waiting times in between configurations trained to completion. Our algorithm addresses this discrepancy by naturally grows the number of configurations without blocking promotions to the top rung.\n\n- (Reviewer 1) “It seems very useful to already recommend configurations in each rung of Hyperband, and I am surprised that the methods section does not mention this.” ... “Therefore, recommending after each rung seems to be a contribution of this paper, and I think it would be nice to read about this in the methods section.”\n\nWe agree that intermediate losses observed by SHA in the lower rungs are already very useful for selecting a good configuration. While straightforward, this is not mentioned in the original Hyperband paper, and we will make note of this in our updated submission.\n\n- (Reviewer 1) Bug in Algorithm 1.\n\nWe apologize for the confusing wording used in Algorithm 1. The algorithm simply moves from the top rung to lower rungs in search of a promotable configuration. We will rephrase the wording in Algorithm 1 to reflect this.\n",
"We thank the reviewers for providing thoughtful comments and feedback for our paper. In this rebuttal, we focus on 4 main topics:\n\n(1) Wall-Clock Problem: As noted in the introduction, we are tackling the problem of evaluating “orders of magnitude more hyperparameter configurations than available parallel workers in a small multiple of the wall-clock time needed to train a single model.” As a result, in our experiments, we focus on the raw magnitude of the time taken to return an accurate model, rather than speedups versus the sequential setting. \n\n(2) Successive Halving versus Hyperband: For various reasons described below, we believe that comparing Vizier with Successive Halving is justified. That said, we also see the merit in comparing against Hyperband, and we have just completed running an additional experiment to this end.\n\n(3) State-of-the-art (SOTA) Results: Similar to the original Hyperband work by Li, et. al. (2017), we evaluate the relative performance of asynchronous SHA by comparing to published results on the same dataset (e.g., CIFAR10) *and* the same model family (e.g., cudaconvnet with a specific search space). Under this evaluation criterion, our proposed methods are SOTA on CIFAR10/cudaconvnet and are quite promising on PTB for our specified search space.\n\n(4) Training with Multiple GPUs (Section 4.4 / Figure 4): The goal of this section is to explore how to trade-off between distributed training and embarrassingly parallel hyperparameter optimization when working on GPU clusters. While our underlying arguments have not changed, we will update the text and the figure to clarify our arguments / improve the presentation.\n\n***Note:*** We are still working on the final touches of our updated draft, and will upload the new PDF in the next few days.\n"
] | [
5,
6,
5,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1Y7OOlRZ",
"iclr_2018_S1Y7OOlRZ",
"iclr_2018_S1Y7OOlRZ",
"iclr_2018_S1Y7OOlRZ",
"Bk0Qc6EmG",
"Bk0Qc6EmG",
"Bk0Qc6EmG",
"iclr_2018_S1Y7OOlRZ"
] |
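A minimal sketch of the asynchronous successive-halving promotion rule as described in the reviews and author responses above: a free worker scans rungs from the top down for a configuration that sits in the top 1/eta of its rung and has not yet been promoted; if none exists, it starts a fresh random configuration on the lowest rung. This is a reading of the record, not the paper's reference implementation; `sample_config`, the rung count, and the reduction factor eta = 3 are assumptions.

```python
# Sketch of asynchronous successive halving's worker logic (illustrative only).
import random

ETA = 3            # reduction factor (assumed)
MAX_RUNG = 4       # rungs 0..4 (assumed)

rungs = {k: [] for k in range(MAX_RUNG + 1)}     # rung -> list of (config, loss)
promoted = {k: set() for k in range(MAX_RUNG + 1)}

def sample_config():
    # Placeholder for drawing a random hyperparameter configuration.
    return {"lr": 10 ** random.uniform(-4, 0)}

def get_job():
    """Return (config, rung) for a free worker."""
    for k in range(MAX_RUNG - 1, -1, -1):        # from the top rung downward
        finished = sorted(rungs[k], key=lambda cl: cl[1])
        n_promotable = len(finished) // ETA      # top 1/eta of this rung
        for config, _ in finished[:n_promotable]:
            if id(config) not in promoted[k]:
                promoted[k].add(id(config))
                return config, k + 1             # promote to the next rung
    return sample_config(), 0                    # otherwise grow the bottom rung

def report(config, rung, loss):
    rungs[rung].append((config, loss))

# Toy usage: a worker repeatedly asks for a job, "trains", and reports a loss.
for _ in range(20):
    cfg, k = get_job()
    report(cfg, k, loss=random.random())
print({k: len(v) for k, v in rungs.items()})
```

Because promotions are made as soon as a configuration qualifies, no worker ever blocks waiting for a rung to fill, which is the property the record above emphasizes for the wall-clock-constrained setting.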
iclr_2018_SyAbZb-0Z | Transfer Learning to Learn with Multitask Neural Model Search | Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task. The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices. Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design. NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures. However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures. This procedure needs to be executed from scratch for each new task. The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks.
In this paper, we present the Multitask Neural Model Search (MNMS) controller. Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks. We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task. We then show that pre-trained MNMS controllers can transfer learning to new tasks. By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models. | rejected-papers | This paper presents a sensible, but somewhat incremental, generalization of neural architecture search. However, the experiments are only done in a single artificial setting (albeit composed of real, large-scale subtasks). It's also not clear that such an expensive meta-learning based approach is even necessary, compared to more traditional approaches.
If this paper were less about proposing a single new extension and more about putting that extension in a larger context (either conceptually or experimentally), it would be above the bar.
| train | [
"HkKJIALgM",
"HyU-Egtef",
"S1dRXMqxG",
"BkrXu907M",
"B1_TYtaQM",
"rycuQ6nmz",
"S1J62Ih7f",
"Bkls2InmM",
"HJDO3Un7z",
"SkeU2L3Qz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper proposes an extension of the Neural Architecture Search approach, in which a single RNN controller is trained with RL to select hyperparameters for child networks that must perform different tasks. The architecture includes the notion of a \"task embedding\", that helps the controller keeping track of similarity between tasks, to facilitate transfer across related tasks.\n\nThe paper is very well written, and based on a simple but interesting idea. It also deals with core issues in current machine learning.\n\nOn the negative side, there is just one experiment, and it is somewhat limited. In the experiment, the proposed model is trained on two very different tasks (English sentiment analysis and Spanish language detection), and then asked to generalize to another English sentiment analysis task and to a Spanish sentiment analysis task. The models converge faster to high accuracy in the proposed transfer learning setup than when trained one a single task with the same architecture search strategy. Moreover, the task embedding for the new English task is closer to that of the training English task, and the same for the training/test Spanish tasks.\n\nMy main concern with the experiment is that the approach is only tested in a setup in which there is a huge difference between two classes of tasks (English vs Spanish), so the model doesn't need to learn very sophisticated task embeddings to group the tasks correctly for transfer. It would be good to see other experiments where there is less of a trivial structure distinguishing tasks, to check if transfer helps.\n\nAlso, I find it surprising that the Corpus Cine sentiment task embedding is not correlated at all with the SST sentiment task. If the controller is really learning something interesting about the nature of the tasks, I would have expected a differential effect, such that IMDB is only correlated with SST, but Corpus Cine is correlated to both the Spanish language identification task and SST. Perhaps, this is worth some discussion.\n\nFinally, it's not clear to me why the multitask architecture was used in the experiment even when no multi-task pre-training was conducted: shouldn't the simple neural architecture search method be used in this case?\n\nMinor points:\n\n\"diffferentiated\": different?\n\n\"outputted actions\": output actions\n\n\"the aim of increase the training stability\": the aim of increasing training stability\n\nInsert references for Polyak averaging and Savitzky-Golay filtering.\n\nFigure 3: specify that the Socher 2013 result is for SST\n\nFigure 4: does LSS stand for SST?\n\nI'm confused by Fig. 6: why aren't the diagonal values 100%?\n\nMNMS is referred to as MNAS in Figure 5.\n\nFor architecture search, the neuroevolution literature should also be cited (https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning).\n",
"Summary\nThis paper extends Neural Architecture Search (NAS) to the multi-task learning problem. A task conditioned model search controller is learned to handle multiple tasks simultaneously. The experiments are conducted on text data sets to evaluate the proposed method.\n\nPros\n1.\tThe problem of neural architecture design is important and interesting.\n2.\tThe motivation is strong. NAS (Zoph & Le, 2017) needs to train a model for a new task from scratch, which is inefficient. It is reasonable to introduce task embeddings into NAS to obtain a generalization model for multiple tasks.\n\nCons\n1.\tSome important technical details are missing, especially for the details regarding task embeddings.\n2.\tThe experiments are not sufficient.\n\nDetailed Comments\n1.\tThe paper does not provide the method of how to obtain task embeddings. In addition, if task embeddings are obtained by an auxiliary network, is it feasible to update task embeddings by updating the weights of this auxiliary network?\n2.\tThe discussion of off-policy training is questionable. There is no experiment to demonstrate the advantage of off-policy training compared to on-policy training.\n3.\tIn order to demonstrate the effectiveness of the idea of multi-task learning and task conditioning in MNMS, some architecture search methods for single-task should be conducted for comparison. For instance, NAS on SST or the Spanish language identification task should be compared.\n4.\tIn order to demonstrate the efficiency of MNMS, running time results of MNMS and NAS should be reported.\n5.\tIn my opinion, the title is not appropriate. The most important contribution of this paper is to search neural models for multiple tasks simultaneously using task conditioning. Only when this target is achieved, is it possible to transfer a pre-trained controller to new tasks with new task embeddings. Therefore, the title should highlight multitask neural model search rather than transfer learning.\n6.\tIn Figure 5, \"MNAS\" should be \"MNMS\".\n",
"In this paper authors are summarizing their work on building a framework for automated neural network (NN) construction across multiple tasks simultaneously. \n\nThey present initial results on the performance of their framework called Multitask Neural Model Search (MNMS) controller. The idea behind building such a framework is motivated by the successes of recently proposed reinforcement based approaches for finding the best NN architecture across the space of all possible architectures. Authors cite the Neural Architecture Search (NAS) framework as an example of such a framework that yields better results compared to NN architectures configured by humans. \n\nOverall I think that the idea is interesting and the work presented in this paper is very promising. Given the depth of the empirical analysis presented the work still feels that it’s in its early stages. In its current state and format the major issue with this work is the lack of more in-depth performance analysis which would help the reader draw more solid conclusions about the generalization of the approach.\n\nAuthors use two text classification tasks from the NLP domain to showcase the benefits of their proposed architecture. It would be good if they could expand and analyze how well does their framework generalizes across other non-binary tasks, tasks in other domains and different NNs. This is especially the case for the transfer learning task. \n\nIn the NAS overview section, readers would benefit more if authors spend more time in outlining the RL detail used in the original NAS framework instead of Figure 1 which looks like a space filler. \n\nAcross the two NLP tasks authors show that MNMS models trained simultaneously give better performance than hand tuned architectures. In addition, on the transfer learning evaluation approach they showcase the benefit of using the proposed framework in terms of the initially retrieved architecture and the number of iterations required to obtain the best performing one. \nFor better clarity figures 3 and 5 should be made bigger. \nWhat is LSS in figure 4?",
"Thanks for the revision, that clarifies some important points and puts the results in perspective. While I find the general direction of your work very promising, I stand by my initial point of view that more extensive experiments should be added for a long paper in a major conference.",
"Apologies - the revision is now uploaded. ",
"Thanks for your reply. Unfortunately, I do not find it as helpful as it could be, because it refers to a revision of the paper that, as far as I can see, you have not uploaded on the OpenReview site. Consequently, you're pointing to arguments and references I cannot access :(\n\nConcerning the other differences in 4.2, perhaps some ablation would help to see how much they matter?",
"Again, thank you for the thoughtful and detailed review. In addition to our responses above to the other two reviews, to first clarify the nature of the experiments:\n \t1. Section 4.2 describes the results of training MNMS models jointly on the SST and Spanish Language Identification tasks.\n \t2. Section 4.3 uses these multi-task-trained MNMS models as the pre-trained models. The transfer learning results shown are the result of subsequently training the MNMS models initialized to the weights from part 4.2 further on each of the transfer learning tasks (CorpusCine and IMDB). During transfer learning, we initialize a new task embedding vector for the new task that is again trained jointly with the MNMS model. While we transfer learn to a single new task, multi-task pretraining has occurred. Further, after transfer learning, the learned task embedding for the new task can now be directly and meaningfully compared to the existing task embeddings, as in Figure 6.\n \n1. “My main concern with the experiment is that the approach is only tested in a setup in which there is a huge difference between two classes of tasks (English vs Spanish), so the model doesn't need to learn very sophisticated task embeddings to group the tasks correctly for transfer. It would be good to see other experiments where there is less of a trivial structure distinguishing tasks, to check if transfer helps.”\n \tWe specifically chose the two initial multitask tasks to be different enough that a single set of hyperparameters would not be optimal for both. However, as seen in 4.2, there are other significant differences in the parameters learned for each task beyond the English vs Spanish word embeddings.\n \tWe have revised the draft to include a further discussion within the conclusion section of the limitations of these experiments and necessary future tasks to demonstrate generalization.\n \n2. “Also, I find it surprising that the Corpus Cine sentiment task embedding is not correlated at all with the SST sentiment task. If the controller is really learning something interesting about the nature of the tasks, I would have expected a differential effect, such that IMDB is only correlated with SST, but Corpus Cine is correlated to both the Spanish language identification task and SST. Perhaps, this is worth some discussion.”\nThis is a good point, and we have updated the discussion to touch on this.\n \n3. “Finally, it's not clear to me why the multitask architecture was used in the experiment even when no multi-task pre-training was conducted: shouldn't the simple neural architecture search method be used in this case?”\n \tAs clarified above, the transfer learning experiments show the results after multi-task pre-training. Let us know if further clarification can be made.\n \n4. Minor points: thank you for catching these. We have updated the grammatical fixes where we believed appropriate and the figures accordingly, and added a reference to the neuroevolution literature. \n",
"Thank you for your thoughtful and detailed review! In addition to our response above, to address the other detailed comments within your review:\n1. “The paper does not provide the method of how to obtain task embeddings. In addition, if task embeddings are obtained by an auxiliary network, is it feasible to update task embeddings by updating the weights of this auxiliary network?”\n \tAs described in 3.2, the task embeddings are randomly initialized vectors that are trained jointly with the controller; these embeddings are therefore learned automatically as part of the training process. During transfer learning to a new task, a new, randomly-initialized vector representation is added to the embedding table for the new task, and the task embedding for the new task is again learned automatically during transfer learning.\n \tAs with other embedding tables, it is possible to continue updating all of the existing task embeddings along with other network weights during subsequent transfer learning. In this work, the same pre-trained model is separately transfer learned to each of the IMDB and CorpusCine tasks. We therefore do not continue to update the initial pre-training task embeddings here to allow between comparison between the transfer learned tasks.\n \n2. “The discussion of off-policy training is questionable. There is no experiment to demonstrate the advantage of off-policy training compared to on-policy training.”\n \n3. “In order to demonstrate the effectiveness of the idea of multi-task learning and task conditioning in MNMS, some architecture search methods for single-task should be conducted for comparison. For instance, NAS on SST or the Spanish language identification task should be compared.”\n \tFigure 5 shows the performance of an MNMS model trained on a single task (Corpus Cine and IMDB) as a baseline for comparison with the transfer learned models.\nWe present the task conditioning for simultaneous task training as a stepping stone towards more generalized training for transfer learning to new tasks, rather than as a method for run-time improvements in itself.\n \n4. “In order to demonstrate the efficiency of MNMS, running time results of MNMS and NAS should be reported.”\n \tMNMS as presented is a direct generalization of NAS, and in the single-task case (as with the single-task, non-pre-trained baselines compared with transfer learning) frameworks are identical. Training with a single, randomly initialized task embedding is equivalent to simply using a standard RNN embedding in the vanilla NAS framework.\n \n5. “In my opinion, the title is not appropriate. The most important contribution of this paper is to search neural models for multiple tasks simultaneously using task conditioning. Only when this target is achieved, is it possible to transfer a pre-trained controller to new tasks with new task embeddings. Therefore, the title should highlight multitask neural model search rather than transfer learning.”\n \tWhile we do believe that multitask transfer learning is an important contribution, multitask transfer learning is presented as a stepping stone specifically towards enabling transfer learning. We present multitask training and the concept of task embeddings for task conditioning as a method to enable generalization for automated architecture design that can extend to new tasks.\n \n6. “In Figure 5, \"MNAS\" should be \"MNMS\".”\n \tThis has been updated in the revised draft; thank you for catching this!",
"To further address the points made in this review specifically:\n \n1. “Given the depth of the empirical analysis presented the work still feels that it’s in its early stages. In its current state and format the major issue with this work is the lack of more in-depth performance analysis which would help the reader draw more solid conclusions about the generalization of the approach.”\n \tAs discussed above, this paper was intended to propose a generalized framework and demonstrate that both multitask training and transfer learning are possible, within these proof-of-concept domains. However, please let us know if there are suggestions for specific further analyses about the current experiments.\n \n2. “It would be good if they could expand and analyze how well does their framework generalizes across other non-binary tasks, tasks in other domains and different NNs. This is especially the case for the transfer learning task.”\n \tWe have updated the revised draft conclusion to include a more detailed discussion of the limitations of this current study, and to include further discussion of ongoing work and future work to evaluate the framework on additional tasks and task sets, based on this feedback.\n3. “In the NAS overview section, readers would benefit more if authors spend more time in outlining the RL detail used in the original NAS framework instead of Figure 1 which looks like a space filler.”\n\tWhile this section was intended as a minimal overview of the original NAS framework (with the understanding that readers could reference the original works for greater detail), we have updated the revised draft to include some additional details, and reduced the size of figure 1.\n .\n \n4 and 5: “For better clarity figures 3 and 5 should be made bigger.” and “What is LSS in figure 4?”\n\tThe revised draft corrects the typo (LSS is now SST) as well as a typo in Figure 3.\n \tWe have updated the revised draft to enlarge figures 3 and 5, and correct the typo (LSS is now SST.) Thank you! ",
"Thank you for the thoughtful and detailed review. We are actively continuing to evaluate the MNMS framework on additional search spaces, non-binary tasks, and tasks outside of the NLP domain. However, we believe that the presented experiments are sufficient to demonstrate two important contributions of the proposed generalized framework, each of which addresses key concerns about previous RL-based automated NN design frameworks such as NAS:\n1. Simultaneous task training is possible.\n \tThe ability to handle multitask training of any kind addresses key issues regarding\n \tthe generalizability and feasibility of RL-based automated search frameworks. In\n \tparticular, learning task representations that allow a single controller model to\n \tdifferentiate between tasks is necessary for any kind of task generalization using\n \tthis form of meta-learning framework.\n \tHowever, especially in RL-based environments, prior work has\n \tdemonstrated that handling multitask learning is empirically challenging,\n \teven on relatively simple tasks; as the two papers cited in the related work section\n \talso discuss, multitask RL training even across two tasks often causes negative\n \tinterference between the tasks, including cases where gradients from one task\n \tcompletely dominate the other. Therefore, it is not obvious that a\n \tNAS-like framework should be able to handle multitask training, even on\nrelatively simple domains, rather than simply collapsing to single, undifferentiated \nparameter choices that are suboptimal for each task. Indeed, as we describe in 3.2, \nmultitask replay is necessary even in this relatively simple domain to ensure adequate \ndifferentiation.\n \tTherefore, while preliminary, we believe that it is important to show that an RL-\n \tbased metalearning framework can indeed discover differentiated architectures for\n \ttwo tasks, which were specifically chosen so that no single, optimal parameter\n \tsolution existed. Further, the ability to automatically learn vector\n task representations sufficient to encode this differentiation during training, even\nin this relatively simple task domain, offers a necessary step towards further work\nin simultaneous multitask training across more challenging tasks in the future.\n2. Pre-training NAS-like frameworks for future transfer learning to new tasks is possible, and speeds up convergence.\n \tA primary criticism of RL-based metalearning architectures, such is NAS, is that these methods are extremely time and computationally intensive, rendering them infeasible without computational resources that are not accessible to many researchers. Therefore, the possibility of using pre-trained models for transfer learning to any new task to reduce search time is a necessary step towards making this approach broadly feasible, both for additional research and more challenging tasks.\n \tHowever, as with multitask training, it is not obvious that this would be possible, even when designing architectures for relatively simple task. Again, especially in RL models, attempting to transfer learn could lead to either 1. premature convergence to suboptimal parameters biased by the pre-training, or 2. no convergence speedup, or even additional convergence time, in which the controller first unlearns its pre-training and then learns the new task. Figure 5 in the results shows that transfer learning in this domain allows the controller to 1. 
start from a better place in the search space, indicating that NAS-like frameworks can learn knowledge that generalizes to another task, and 2. converge more quickly overall, indicating that this transfer learning can speed up convergence and therefore opening transfer learning as a grounds for further research.\nTherefore, while these empirical results are preliminary, we believe that demonstrating that both of these points are possible are important, nonobvious generalizations on the NAS architecture that offer routes for future study."
] | [
5,
7,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyAbZb-0Z",
"iclr_2018_SyAbZb-0Z",
"iclr_2018_SyAbZb-0Z",
"B1_TYtaQM",
"rycuQ6nmz",
"S1J62Ih7f",
"HkKJIALgM",
"HyU-Egtef",
"SkeU2L3Qz",
"S1dRXMqxG"
] |
iclr_2018_SyrGJYlRZ | YellowFin and the Art of Momentum Tuning | Hyperparameter tuning is one of the most time-consuming workloads in deep learning. State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable. Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better results. Motivated by this trend, we ask: can simple adaptive methods based on SGD perform as well or better? We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam. We then analyze its robustness to learning rate misspecification and objective curvature variation. Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD. YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly. We empirically show YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to 3.28x in synchronous and up to 2.69x in asynchronous settings. | rejected-papers | This paper asks when SGD+M can beat adaptive methods such as Adam, and then suggests a variant of SGD+M with an adaptive controller for a single learning rate and momentum parameter. There are comparisons with some popular alternatives. However, the bulk of the paper is concerned with a motivation that didn't convince any of the reviewers. | train | [
"ryUK51INM",
"HyuhIWYez",
"B1RZJ1cxG",
"SJ0CHgbbM",
"SkdSe3zNz",
"BJsMnO6mG",
"B1jF7rpQz",
"SJ6oqITQG",
"Bkf6euOGM",
"ryz7Z_OzM",
"rJ9XCPOMG",
"BkE6RP_fG",
"HJWNy__GM",
"Hy4hvwuff"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"public",
"public",
"public",
"public",
"public",
"public",
"public"
] | [
"Dear Carlos,\n\nThanks for the clarification. Rescaling the learning rate as per your suggestion, on multiple experiments, has the following consequences:\n\n-- As expected, training is sped-up compared to using YF’s LR un-adjusted and just setting momentum to 0. In some cases, like the one you conduct experiment on, YF is only marginally better (or the same) than YF-minus-momentum-plus-rescaling. \n\n-- Signs of instability start showing. Essentially, trading momentum for higher learning rate when we do the suggested rescaling, makes the performance curves behave more unstable (figures below).\n\n\nOverall, in a number of examples, the original, momentum based YF tuner demonstrates better validation metrics and better stability than the suggested rescaling rule with zero momentum. \n\nE.g. YellowFin rule with non-zero momentum can demonstrate better validation perplexity in the constituency parsing model. (https://github.com/AnonRepository/YellowFin_Pytorch/blob/master/plots/parsing_test_perp.pdf).\n\nIn the following example of ResNext on CIFAR10, YellowFin rule with non-zero momentum can also demonstrate more stable validation accuracy (https://github.com/AnonRepository/YellowFin_Pytorch/blob/master/plots/cifar_smooth_test_acc.pdf). \n\nAs the third example, using the momentum-based tuner, we are able to boost the learning rate (as described in Appendix J4) to get even better performance. In this example, we use learning rate factor 3.0 to increase the learning rate on ResNext for CIFAR10 (as what we did in the experiments in appendix J4), YellowFin rule with non-zero momentum gives *both* observably higher and more stable validation accuracy than the suggested rescaling rule (https://github.com/AnonRepository/YellowFin_Pytorch/blob/master/plots/cifar_smooth_test_acc_lr_fac_3.pdf ).\n\nWe appreciate the feedback on this important point! We will be adding this discussion to our manuscript and, would also be happy to add an acknowledgement for your suggestion.\n\nBest regards,\nThe authors\n",
"This paper proposes a method to automatically tuning the momentum parameter in momentum SGD methods, which achieves better results and fast convergence speed than state-of-the-art Adam algorithm.\n\nAlthough the results are promising, I found the presentation of this paper almost inaccessible to me.\n\nFirst, though a minor point, but where does the name *YellowFin* come from?\n\nFor the presentation, the motivation in introduction is fine, but the following section about momentum operator is hard to follow. There are a lot of undefined notation. For example, what does the *convergence rate* mean (what is the measurement for convergence)? And is the *optimal accelerated rate* the same as *convergence rate* mentioned above? Also, what do you mean by *all directions* in the sentence below eq.2?\n\nThen the paper talks about robustness properties of the momentum operator. But: first, I am not sure why the derivative of f(x) is defined as in eq.3, how is that related to the original definition of derivative?\n\nIn the following paragraph, what is *contraction*? Does it have anything to do with the paper as I didn't see it in the remaining text?\n\nLemma 2 seems to use the spectral radius of the momentum operator as the *robustness*. But how can it describe the robustness? More details are needed to understand this.\n\nWhat it comes to Section 3, it seems to me that the authors try to use a local quadratic approximation for the original function f(x), and use the results in last section to find the optimal momentum parameter. I got confused in this section because eq.9 defines f(x) as a quadratic function. Is this f(x) the original function (non quadratic) or just the local quadratic approximation? If it is the local quadratic approximation, how is it correlated to the original function? It seems to me that the authors try to say if h and C are calculated from the original function, then this f(x) is a local quadratic approximation? If what I think is correct, I think it would be important to show this.\n\nAlso, the objective function in SingleStep algorithm seems to come from eq.13, but I failed to get the exact reasoning.\n\nOverall, I think this is an interesting paper, but the presentation is too fuzzy to get it evaluated.",
"The paper explores momentum SGD and an adaptive version of momentum SGD which the authors name YF (Yellow Fin). They compare YF to hand tuned momentumSGD and to Adam in several deep learning applications.\n\n\nI found the first part which discusses the theoretical motivation behind YF to be very confusing and misleading:\nBased on the analysis of 1-dimensional problems, the authors design a framework and an algorithm that supposedly ensures accelerated convergence. There are two major problems with this approach:\n\n-First: Exploring 1-dim functions is indeed a nice way to get some intuition. Yet, algorithms that work in the 1-dim case do not trivially generalize to high dimensions, and such reasoning might lead to very bad solutions.\n\n-Second: Accelerated GD does not benefit over GD in the 1-dim case. And therefore, this is not an appropriate setting to explore acceleration.\nConcretely, the definition of the generalized condition number $\\nu$, and relating it to the standard definition of the condition number $\\kappa$, is very misleading. This is since $\\kappa =1$ for 1-dim problems, and therefore accelerated GD does not have any benefits over non accelerated GD in this case.\nHowever, $\\nu$ might be much larger than 1 even in the 1-dim case.\n\n\nRegarding the algorithm itself: there are too many hyper-parameters (which depend on each other) that are tuned (per-dimension).\nAnd as I have mentioned, the design of the algorithm is inspired by the analysis of 1-dim quadratic functions.\nThus, it is very hard for me to believe that this algorithm works in practice unless very careful fine tuning is employed.\nThe authors mention that their experiments were done without tuning or with very little tuning, which is very mysterious for me.\n\nIn contrast to the theoretical part, the experiments seems very encouraging. Showing YF to perform very well on several deep learning tasks without (or with very little) tuning. Again, this seems a bit magical or even too good to be truth. I suggest the authors to perform a experiment with say a qaudratic high dimensional function, which is not aligned with the axes in order to illustrate how their method behaves and try to give intuition.\n",
"[Apologies for short review, I got called in late. Marking my review as \"educated guess\" since i didn't have time for a detailed review]\n\nThe paper proposes an algorithm to tune the momentum and learning rate for SGD. While the algorithm does not have a theory for general non-quadratic functions, experimental validation is extensive, making it a worthy contribution in my opinion. I have personally tried the algorithm when the paper came out and can vouch for the empirical results presented here.",
"Dear readers and reviewers,\n\nwe have updated our manuscript during the rebuttal period in addition to our response to the official reviews. Specifically, we:\n\n1. performed a significant rewrite of Sections 2 and 3 to make exposition of our ideas much more clear.\n\n2. we added discussion in section 3.1 on how our tuning rule in equ(8) intuitively generalizes to multiple dimensions.\n\n3. we addressed a number of reviewers comments and suggestions on clarification. \n\nBest regards,\nThe authors",
"Notice that I turn momentum off, but also rescale the learning rate in order to have the same effective learning rate. In this case the learning rate is not constant anymore, but there is no momentum. In my code, the important change is\n\nself._optimizer = tf.train.GradientDescentOptimizer(1.0 * self._lr_var * self.lr_factor / (1.0 - self._mu_var))\n\nin https://github.com/cstein06/YellowFin/blob/no_momentum/tuner_utils/yellowfin.py\n\nIn this case there is no momentum, but the results should be very similar in general. The figure I posted is for the CIFAR + Resnet model in your paper.",
"I read the paper carefully and I don't think the results are related with momentum at all. In Figure 5 one sees that the momentum value does not go above 0.8, which shouldn't make much difference (common values are 0.99 for example).\nI confirmed this by turning off momentum and rescaling the learning rate appropriately, getting the same results. Plot:\nhttps://github.com/cstein06/YellowFin/blob/no_momentum/cifar/results/compare_losses.pdf\n\nIt means that it is a learning rate scheduler in disguise.",
"Dear Carlos,\n\nthank you for the interest in our method! You pose a salient question and we have the experimental results to answer it in the appendix of our manuscript.\n\nIt is absolutely true that positive momentum is *not always* necessary to achieve the best performance possible in the SGD+Momentum framework. Your plot is a good example of that (we’d like to point out that momentum tuning does not hurt performance in this case, if anything it marginally improves things).\n\nHowever, most importantly, *in many cases, adaptively tuning momentum strictly improves performance over the prescribed constant momentum values (such as 0.0 or 0.9).*\n\nOur manuscript includes experiments in Figure 10 (Appendix J.2) in support of this point. Specifically, we performed the following experiment: fix the momentum value to either 0.0 or 0.9 and just use the tuned learning rate from YellowFin. The results show an example where momentum tuning makes a big difference throughout training (the CharRNN on the left) and another example (the CIFAR100 Resnet on the right) where fixed 0.0 momentum seems optimal in the beginning, but eventually loses out to the adaptive momentum curve (the red YF curve).\n\nIn summary, adaptively tuning momentum:\ndoes not seem to be hurting performance in your example (or in the problems we considered)\nstrictly improves performance in many cases we’ve seen (and included in our manuscript)\n\nBest regards,\nThe authors\n",
"Q: What it comes to Section 3, it seems to me that the authors try to use a local quadratic approximation for the original function f(x), and use the results in last section to find the optimal momentum parameter. I got confused in this section because eq.9 defines f(x) as a quadratic function. Is this f(x) the original function (non quadratic) or just the local quadratic approximation? If it is the local quadratic approximation, how is it correlated to the original function? It seems to me that the authors try to say if h and C are calculated from the original function, then this f(x) is a local quadratic approximation? If what I think is correct, I think it would be important to show this.\n\nf(x) is the quadratic approximation of the original function. As AnonReviewer1 pointed out, we measure h and C from the original function. The h and C measurements are used to construct the local quadratic approximation and fed into the tuning rule. We would rephrase the statement on f(x) to connect it to the original function.\n\n\nQ: The objective function in SingleStep algorithm seems to come from eq.13, but I failed to get the exact reasoning.\n\nThe SingleStep objective is a generalization of Equ (13) from 1D quadratics to multidimensional local quadratic approximations. Specifically, the SingleStep objective is the expected squared distance to the optimum of multiple dimensional local quadratic approximation after a single iterative step. For a multidimensional quadratic aligned with the axes, as we use *a single global learning rate and a single global momentum for the whole model*, the objective can be decomposed into sum of expected squared distance along different axes (i.e. on 1d quadratic), which is the left-hand side of Equ (13) (with t = 1 in Equ (13)). Note if the quadratic function is not axes-aligned, we can still decompose along the eigendirections of the Hessian instead of the axes. \n",
"We appreciate AnonReviewer1's helpful comments about improving clarity. *We will upload a new manuscripts, with the suggestions and comments on clarification incorporated, in the next couple of days.* We answer the questions below inline. If there are any further ones, we would be happy to discuss and use them to keep improving the manuscript's clarity. \n\nQ: Where does the name *YellowFin* come from?\n\nWe wanted a mnemonic for our ‘tuner’. Yellowfin happens to be one of the fastest species of tuna. \n\n\nQ: What does the *convergence rate* mean (what is the measurement for convergence)? And is the *optimal accelerated rate* the same as *convergence rate* mentioned above? Also, what do you mean by *all directions* in the sentence below eq.2?\n\n\nThe convergence rate is with respect to the distance to the optimum. Specifically, the rate is \\beta if and only if \\|x_t - x* \\| <= \\beta^t \\| x_0 - x* \\|, where x* is the optimum and x_t is the point after t steps in an iterative optimization process. The *optimal accelerated rate* is also with respect to distance towards the optimum, i.e. the optimal \\beta in the above definition of convergence rate.\n\nOur statement on “all directions” is in the context of multidimensional quadratics with different curvature on different axes (i.e. \\kappa > 1). Specifically, with large enough momentum \\mu and proper learning rate, momentum gradient descent has the same convergence rate \\sqrt{\\mu} along all the axes, i.e. | x_{i, t} - x* | <= \\sqrt{\\mu}^t | x_{i, 0} - x*| where x_{i, t} is the coordinate on axis i after t steps. This holds even if the eigendirection of the quadratics are not axes-aligned. We will rephrase in the updated version to clarify.\n\n\nQ: Then the paper talks about robustness properties of the momentum operator. But: first, I am not sure why the derivative of f(x) is defined as in eq.3, how is that related to the original definition of derivative?\n\nDefinition 1 is not to re-define derivative. Instead, we define generalized curvature h for 1d functions and rewrite the derivative in terms of h. We will rephrase to clarify in the manuscript.\n\n\nQ: In the following paragraph, what is *contraction*? Does it have anything to do with the paper as I didn't see it in the remaining text?\n\nThe contraction actually refers to the multiplicative factor that describes how fast the distance to optimum decays. E.g. for 1d quadratics, gradient descent gives x_{t + 1} - x* = (1 - \\alpha h(x_t) ) (x_{t} - x*) with |1 - \\alpha h(x_t) | as the contraction. In the appendix of our upcoming new manuscript, this concept will be helpful in demonstrating examples on the motivation of generalized curvature.\n\n\nQ: Lemma 2 seems to use the spectral radius of the momentum operator as the *robustness*. But how can it describe the robustness? More details are needed to understand this.\n\n*Robustness* of momentum operator means that momentum GD can achieve asymptotic linear convergence rate, which is *robust* to 1) the variation of generalized curvature of the landscape. 2) a range of different learning rate. Specifically, from Equ (4), we see constant spectral radius \\sqrt{\\mu} of operator A_t can imply asymptotic linear convergence rate \\sqrt{\\mu}. Lemma 2 gives the *condition* to achieve this rate. 
As discussed in the two paragraphs following Lemma 2, given momentum \\mu is properly set based on the dynamic range of generalized curvature, the *condition* can be robustly satisfied 1) regardless of the variation of generalized curvature; 2) with a range of different value for learning rate.\n",
"Q: Regarding the algorithm itself: there are too many hyper-parameters (which depend on each other) that are tuned (per-dimension).\n\nAs mentioned in the abstract, YellowFin only auto-tunes two hyperparameters for the entire model: a single global momentum and a single global learning rate (i.e. YellowFin does not use per-dimension learning rates and per-dimension momentums). The main question in the intro, which motivates our paper, is “can we produce an adaptive optimizer that does not depend on per-variable learning rate adaptation?”. The Yellowfin tuning rule uses only one SingleStep instance in section 3.1 for the whole model (instead of using one instance for each variable). It operates on the high dimensional local quadratic approximation and uses estimates of extremal curvatures h_min and h_max over all possible directions. It solves for a single momentum and a single learning rate. Note the SingleStep problem is a direct generalization of our 1D analysis at the beginning of Section 3, by decomposing along the eigendirections of the high dimensional quadratic.\n\nWe will make Section 3 more precise in an upcoming revision and emphasize that we run a single instance of the SingleStep optimizer for the entire model with the estimators for h_min and h_max providing rough estimates of extremal curvatures along all directions.\n\n\nQ: And as I have mentioned, the design of the algorithm is inspired by the analysis of 1-dim quadratic functions. Thus, it is very hard for me to believe that this algorithm works in practice unless very careful fine tuning is employed. The authors mention that their experiments were done without tuning or with very little tuning, which is very mysterious for me.\n In contrast to the theoretical part, the experiments seems very encouraging. Showing YF to perform very well on several deep learning tasks without (or with very little) tuning. Again, this seems a bit magical or even too good to be truth. I suggest the authors to perform a experiment with say a quadratic high dimensional function, which is not aligned with the axes in order to illustrate how their method behaves and try to give intuition.\n\nFollowing up on AnonReviewer3’s suggestion, we demonstrate the convergence behavior on a high-dimensional quadratic which is not aligned with the axes. More specifically, we first generated a 1000d quadratic, with curvature 1, 2, 3, …, 1000 on the axes. Then we rotate the quadratic for 45 degrees on the planes defined by the axes with curvature i and 1001 - i (for i = 1, 2,3, …, 500) in a pairwise fashion. We assume SingleStep problem has access to the oracle to exactly measure the curvature range and the distance to optimum. As shown in Figure (https://github.com/AnonRepository/YellowFin_Pytorch/blob/master/plots/1000d_quadratics.pdf), YellowFin’s tuning rule can demonstrate linear convergence rate on this landscape using the full gradient.\n\nWe have also set up anonymized YellowFin repositories with PyTorch and TensorFlow implementations. 
Our experiments can be easily replicated with the code in the repository.\n[PyTorch repo] https://github.com/AnonRepository/YellowFin_Pytorch\n[Tensorflow repo] https://github.com/AnonRepository/YellowFin\n\n\nIn summary, we \n\n-- showed how 1D analysis can generalize to multidimensional cases on quadratics.\n\n-- showed that there exists a meaningful definition of a generalized condition number (GCN) for 1D functions; and gave simple 1D examples where acceleration is possible.\n\n-- provided requested demonstration on high dimensional quadratics, as well as anonymized code repositories to replicate results in our manuscript.\n\nWe appreciate AnonReviewer3’s time and detailed comments/suggestions to further clarify our contributions, and merit of our optimizer. That said, we are happy to provide further analysis, clarifications and experiments to resolve further questions. \n",
"Q: Second, accelerated GD does not benefit over GD in the 1-dim case. And therefore, this is not an appropriate setting to explore acceleration. Concretely, the definition of the generalized condition number $\\nu$, and relating it to the standard definition of the condition number $\\kappa$, is very misleading. This is since $\\kappa =1$ for 1-dim problems, and therefore accelerated GD does not have any benefits over non accelerated GD in this case. However, $\\nu$ might be much larger than 1 even in the 1-dim case.\n\nIn this response, we give a detailed explanation and examples about why:\n(i) our definition of a generalized condition number (GCN) is meaningful on 1D functions and;\n(ii) even on 1D, we can use the GCN to tune non-zero values of momentum and achieve faster convergence that with 0 momentum.\n\nRegarding the motivation of generalized curvature, our main idea is that the condition number, which is defined based on a local definition of curvature, can be generalized to incorporate longer-range, non-local variations of curvature. Specifically, classic curvature at a given point is described by the eigenvalues of the local Hessian. On quadratic problems, this curvature plays a crucial role on the rate of convergence: if we use learning rate α, and no momentum, on a 1D quadratic of curvature h, the contraction (i.e. the multiplicative factor describing the stepwise shrinkage of the distance towards optimum) at every step is |1-αh|. The intuition is that when h is high, the gradient output exerts a strong ‘pull’ towards the optimum. Unfortunately, when we move to non-quadratic problems, this tight connection between curvature and convergence rate is lost.\n\nOur definition of ‘generalized curvature’ tackles this issue and maintains this tight connection for non-quadratic problems. We define it with respect to a specific local minimum and it describes how strong the ‘pull’ towards that minimum is. If h’(x) describes the generalized curvature at point x, the contraction for gradient descent with no momentum becomes |1-αh’(x)|. In this we regain the tight connection between our new definition of curvature and the convergence rate.\n\nAs the first simple example to show acceleration in 1D, let's consider the 1D function f(x)=|x|. Curvature for all x \\neq 0 is 0. Generalized curvature on the other hand is h’(x) = 1/|x|. Now if we restrict ourselves to x \\in [ε, 1], the generalized (i.e. long-range) condition number (GCN) (with optimum x* = 0) is 1/ε. That is, as we get closer to the optimum, the relative pull towards it grows and the contraction factor becomes |1-α/|x||. Assume we are aiming for an accuracy of ε, then in the absence of momentum, we need to set the learning rate as α=Ο(ε). This means that starting from x_0=1, our first steps are going to converge at a rate of ~|1-ε|, which can be extremely slow. On the other hand, if we use the GCN to tune our momentum and learning rate, we get a momentum μ = 1-2(sqrt(ε)/(1+sqrt(ε)) ~= 1-2sqrt(ε), and experience a constant rate of sqrt(μ)=sqrt(1-2sqrt(ε)). As shown in the following plot (https://github.com/AnonRepository/YellowFin_Pytorch/blob/master/plots/contraction_abs_func.pdf), momentum is able to achieve stronger contraction, i.e. faster reduction of the distance to the optimum, during the first steps of optimization. 
\n\nFurthermore, the next figure (https://github.com/AnonRepository/YellowFin_Pytorch/blob/master/plots/convergence_abs_func.pdf) uses the analytical results above over time and shows that for f(x)=|x| using momentum tuned according to our GCN, can yield acceleration over gradient descent without momentum on 1D functions. \n\nAs a second example to show acceleration in 1D case, we would like to point to the non-convex example of Figure 3(a) of our manuscript. In that case again, GCN=1000 which suggest a value of momentum of about 0.9, even though this is a 1D function. Plot 3(b) already shows that using this momentum allows for a constant linear convergence rate on this non-convex function. Assume in Figure 3(a), that the curvature of the top, flatter quadratic is 1 and the curvature of the bottom, steeper quadratic is 1000. The learning rate for gradient descent without momentum cannot exceed 1/500, otherwise it would always escape from the steep quadratic. Again, a similar analysis to the one we presented in the previous examples, as well an simple experiments, show that the optimal value for momentum again is not zero, even though this is a 1D function. \n\nThe reason we are able to achieve this acceleration, is because we are taking into account long-range variations in curvature.\n",
"We appreciate AnonReviewer3’s helpful and detailed comments. *We will upload a new manuscript with the clarification incorporated in the next couple of days*. In the following, we address AnonReviewer3’s questions in detail. In summary, we: \n\n-- elaborate that the goal of the present paper is not to provide theoretical guarantees for general convex functions, but to use our simple analysis from quadratics to design an optimizer that works well empirically.\n\n-- explain how 1D analysis can generalize to multidimensional analysis in our case.\n\n-- explain why there exists a meaningful definition of a generalized condition number (GCN) for 1D functions; we give simple examples where accelerated linear rate is possible for 1D cases when we use the GCN (instead of conventional condition number) to tune momentum. \n\n-- provide the requested demonstration of convergence behaviors on high dimensional quadratics, as well as anonymous repo for experiment replication.\n\n\nQ: I found the first part which discusses the theoretical motivation behind YF to be very confusing and misleading:\nBased on the analysis of 1-dimensional problems, the authors design a framework and an algorithm that supposedly ensures accelerated convergence. \n\nWe would like to clarify that our algorithm does not ensure convergence outside the class of quadratic functions for multiple dimensional cases. This difficulty to guarantee general results for (Polyak’s) momentum gradient descent has been documented by the existence of specific counter-examples (Lessard et. al, 2016). We cite this paper in section 2 in order to make it clear that we do not give general guarantees. \n\nInstead we focus on ideas inspired from quadratics; these drive the design of our tuner, our main contribution. We conducted extensive experiments on 8 different popular deep learning models, empirically demonstrating our tuner’s merit on non-quadratics. We will make this point more prominent in the manuscript.\n\n\nQ: There are two major problems with this approach: First: Exploring 1-dim functions is indeed a nice way to get some intuition. Yet, algorithms that work in the 1-dim case do not trivially generalize to high dimensions, and such reasoning might lead to very bad solutions.\n\nWe agree with the reviewer that this is not an obvious point and elaborate here. We are updating our manuscript with the following clarifying discussion. We start by discussing a blueprint for generalization that is exact for quadratics (by extending discussions in the paragraph above Lemma 4). In our answer to the next question, we extend some of the quantities and ideas---like (generalized) curvature and (generalized) condition number---to the non-quadratic case. Then we follow the same generalization blueprint to go from 1D to multidimensional. As noted, we do not aim to provide analytical guarantees for non-quadratics, but rather to design an adaptive optimization method that works empirically well.\n\nThe analysis of momentum dynamics on quadratic objectives decomposes exactly into independent scalar problems along the eigenvectors of the Hessian. For each scalar quadratic problem, the curvature is constant everywhere; we can think of this as the degenerate case where the extremal curvatures are equal, that is h_min=h_max. 
As the reviewer points out, the condition number for individual slices is 1, hence the optimal momentum value is 0.\n\nAll scalar components of the multi-dimensional quadratic can be tuned jointly using a single instance of SingleStep to yield a single learning rate and single momentum value. In that case extremal curvatures h_min and h_max are taken over all directions and their ratio yields the condition number; individual 1D gradient variances sum up to the total multi-dimensional gradient variance, C, and the squared distance from optimum, D^2, can be either calculated as the sum of squared distances on the scalar problems, or approximately estimated directly (the latter is what we do in our implementation of the Distance() function). In this rebuttal we include a sanity-check experiment on a synthetic quadratic. It shows that when using exact oracles as input to SingleStep, we achieve the optimal convergence rate for quadratics.\n\nNow that we have established a blueprint for going from 1D to multidimensional analysis, in our next answer, we give a detailed explanation and examples about why:\n(i) our definition of a generalized condition number is meaningful on 1D functions and;\n(ii) even on 1D, we can use it to tune non-zero values of momentum and achieve faster convergence that with 0 momentum.\n",
"We thank AnonReviewer2 for providing independent support on the accuracy of our experiments. The kind support on the validity of our experiments is a great encouragement to us. "
] | [
-1,
4,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
5,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"BJsMnO6mG",
"iclr_2018_SyrGJYlRZ",
"iclr_2018_SyrGJYlRZ",
"iclr_2018_SyrGJYlRZ",
"iclr_2018_SyrGJYlRZ",
"SJ6oqITQG",
"iclr_2018_SyrGJYlRZ",
"B1jF7rpQz",
"HyuhIWYez",
"HyuhIWYez",
"B1RZJ1cxG",
"B1RZJ1cxG",
"B1RZJ1cxG",
"SJ0CHgbbM"
] |
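A minimal sketch, not the authors' YellowFin code: the responses above repeatedly invoke the quadratic-optimal single learning rate and single momentum built from the extremal curvatures h_min and h_max, and the accelerated rate sqrt(mu) it yields. The NumPy script below illustrates that rule on an axis-aligned 1000-dimensional quadratic, a simplified stand-in for the rotated 1000-d example linked in the rebuttal; the variable names and iteration count are assumptions made for the example.

import numpy as np

# Heavy-ball (Polyak) momentum vs. plain gradient descent on f(x) = 0.5 * sum_i h_i * x_i^2,
# using the quadratic-optimal single (learning rate, momentum) pair derived from h_min and h_max.
h = np.linspace(1.0, 1000.0, 1000)                   # per-axis curvatures, condition number 1000
h_min, h_max = h.min(), h.max()
mu = ((np.sqrt(h_max) - np.sqrt(h_min)) / (np.sqrt(h_max) + np.sqrt(h_min))) ** 2
lr = (2.0 / (np.sqrt(h_max) + np.sqrt(h_min))) ** 2  # learning rate paired with the momentum
lr_gd = 2.0 / (h_min + h_max)                        # optimal step size for momentum-free GD

x_mom = np.ones_like(h)                              # optimum is x* = 0
x_prev = x_mom.copy()
x_gd = np.ones_like(h)
for _ in range(300):
    x_mom, x_prev = x_mom - lr * h * x_mom + mu * (x_mom - x_prev), x_mom
    x_gd = x_gd - lr_gd * h * x_gd

print("tuned momentum mu = %.4f, asymptotic rate sqrt(mu) = %.4f" % (mu, np.sqrt(mu)))
print("distance to optimum after 300 steps: momentum %.2e vs plain GD %.2e"
      % (np.linalg.norm(x_mom), np.linalg.norm(x_gd)))

With condition number 1000, the momentum run contracts at roughly sqrt(mu) ~= 0.94 per step while plain GD contracts at about 0.998, which is the gap the rebuttal's 1000-d quadratic example refers to.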
iclr_2018_H1bM1fZCW | GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks | Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts. Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks. We present a novel gradient normalization (GradNorm) technique which automatically balances the multitask loss function by directly tuning the gradients to equalize task training rates. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting over single networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process which incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning. | rejected-papers | This paper proposes a way to automatically weight different tasks in a multi-task setting. The problem is a bit niche, and the paper had a lot of problems with clarity, as well as with the motivation for the experimental setup and evaluation. | train | [
"H10ZQaugz",
"Bycjn6tef",
"Sy3BPIMbM",
"ryLKvyUzM",
"rJQmwyUzM",
"r1NpL1UGz",
"SkRYF0DZG",
"SJkNF0PbM",
"S1Ia_CP-G",
"S1Sqw0P-f",
"H1kyHRslM",
"rkHMM4KeM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The paper proposes a method to train deep multi-task networks using gradient normalization. The key idea is to enforce the gradients from multi tasks balanced so that no tasks are ignored in the training. The authors also demonstrated that the technique can improve test errors over single task learning and uncertainty weighting on a large real-world dataset.\n\nIt is an interesting paper with a novel approach to multi-task learning. To improve the paper, it would be helpful to evaluate the method under various settings. My detailed comments are below.\n\n1. Multi-task learning can have various settings. For example, we may have multiple groups of tasks, where tasks are correlated within groups but tasks in different groups are not much correlated. Also, tasks may have hierarchical correlation structures. These patterns often appear in biological datasets. I am wondering how a variety of multi-task settings can be handled by the proposed approach. It would be helpful to discuss the conditions where we can benefit from the proposed method.\n\n2. One intuitive approach to task balancing would be to weight each task objective based on the variance of each task. It would be helpful to add a few simple and intuitive baselines in the experiments. \n\n3. In Section 4, it would be great to have more in-depth simulations (e.g., multi-task learning in various settings). Also, in the bottom right panel in Figure 2, GrandNorm and equal weighting decrease test errors effectively even after 15000 steps but uncertainty weighting seems to reach a plateau. Discussions on this would be useful.\n\n4. It would be useful to discuss the implementation of the method as well. \n\n\n\n\n\n\n\n\n\n",
"Paper summary:\nExisting works on multi-task neural networks typically use hand-tuned weights for weighing losses across different tasks. This work proposes a dynamic weight update scheme that updates weights for different task losses during training time by making use of the loss ratios of different tasks. Experiments on two different network indicate that the proposed scheme is better than using hand-tuned weights for multi-task neural networks.\n\n\nPaper Strengths:\n- The proposed technique seems simple yet effective for multi-task learning.\n- Experiments on two different network architectures showcasing the generality of the proposed method.\n\n\nMajor Weaknesses:\n- The main weakness of this work is the unclear exposition of the proposed technique. Entire technique is explained in a short section-3.1 with many important details missing. There is no clear basis for the main equations 1 and 2. How does equation-2 follow from equation-1? Where is the expectation coming from? What exactly does ‘F’ refer to? There is dependency of ‘F’ on only one of sides in equations 1 and 2? More importantly, how does the gradient normalization relate to loss weight update? It is very difficult to decipher these details from the short descriptions given in the paper.\n- Also, several details are missing in toy experiments. What is the task here? What are input and output distributions and what is the relation between input and output? Are they just random noises? If so, is the network learning to overfit to the data as there is no relationship between input and output? \n\n\nMinor Weaknesses:\n- There are no training time comparisons between the proposed technique and the standard fixed loss learning.\n- Authors claim that they operate directly on the gradients inside the network. But, as far as I understood, the authors only update loss weights in this paper. Did authors also experiment with gradient normalization in the intermediate CNN layers?\n- No comparison with state-of-the-art techniques on the experimented tasks and datasets.\n\n\nClarifications:\n- See the above mentioned issues with the exposition of the technique.\n- In the experiments, why are the input images downsampled to 320x320?\n- What does it mean by ‘unofficial dataset’ (page-4). Any references here?\n- Why is 'task normalized' test-time loss as good measure for comparison between models in the toy example (Section 4)? The loss ratios depend on initial loss, which is not important for the final performance of the system.\n\n\nSuggestions:\n- I strongly suggest the authors to clearly explain the proposed technique to get this into a publishable state. \n- The term ’GradNorm’ seem to be not defined anywhere in the paper.\n\n\nReview Summary:\nDespite promising results, the proposed technique is quite unclear from the paper. With its poor exposition of the technique, it is difficult to recommend this paper for publication.",
"The paper addresses an important problem in multitask learning. But its current form has several serious issues. \n\nAlthough I get the high-level goal of the paper, I find Sec. 3.1, which describes the technical approach, nearly incomprehensible. There are many things unclear. For example:\n\n- it starts with talking about multiple tasks, and then immediately talks about a \"filter F\", without defining what the kind of network is being addressed. \n\n- Also it is not clear what L_grad is. It looks like a loss, but Equation 2 seems to define it to be the difference between the gradient norm of a task and the average over all tasks. It is not clear how it is used. In particular, it is not clear how it is used to \"update the task weights\"\n\n- Equation 2 seems sloppy. “j” appears as a free index on the right side, but it doesn’t appear on the left side. \n\nAs a result, I am unable to understand how the method works exactly, and unable to judge its quality and originality.\n\nThe toy experiment is not convincing. \n\n- the evaluation metric is the sum of the relative losses, that is, the sum of the original losses weighted by the inverse of the initial loss of each task. This is different from the sum of the original losses, which seems to be the one used to train the “equal weight” baseline. A more fair baseline is to directly use the evaluation metric as the training loss. \n- the curves seem to have not converged.\n\nThe experiments on NYUv2 involves non-standard settings, without a good justification. So it is not clear if the proposed method can make a real difference on state of the art systems. \n\nAnd the reason that the proposed method outperforms the equal weight baseline seems to be that the method prevents overfitting on some tasks (e.g. depth). However, the method works by normalizing the norms of the gradients, which does not necessarily prevent overfitting — it can in fact magnify gradients of certain tasks and cause over-training and over-fitting. So the performance gain is likely dataset dependent, and what happens on NYU depth can be a fluke and does not necessarily generalize to other datasets. ",
"Hello,\n\nThanks again for your feedback. We'd like to inform you that we have uploaded a paper revision which we feel provides a much clearer exposition of our technique. Time permitting, we invite you to take a look, in the hopes that this newer version clarifies any outstanding questions you may have had.",
"Hello,\n\nThanks again for your comments. We wanted to let you know that we have uploaded a paper revision with significant rewrites for clarity, and have rewritten Section 3 entirely. We hope that this newer version presents GradNorm in a much more clear way and motivates why it works so well in a multitask setting. ",
"Hello,\n\nThanks again for your review. We wanted to inform you that we have uploaded a paper revision with significant rewrites for clarity, especially in Section 3 (which has essentially been rewritten). We hope that this newer version presents a much clearer case for why GradNorm is an intuitive and powerful way to improve multitask learning. ",
"General comments: Thank you very much for your review. You raise a very important point on task groupings and possible extensions to this method. Hopefully we can help clarify a few of these points below, along with the other comments/clarification requests you made. \n\n(((It is an interesting paper with a novel approach to multi-task learning. To improve the paper, it would be helpful to evaluate the method under various settings. My detailed comments are below.\n\n1. Multi-task learning can have various settings. For example, we may have multiple groups of tasks, where tasks are correlated within groups but tasks in different groups are not much correlated. Also, tasks may have hierarchical correlation structures. These patterns often appear in biological datasets. I am wondering how a variety of multi-task settings can be handled by the proposed approach. It would be helpful to discuss the conditions where we can benefit from the proposed method.)))\n\nRESPONSE: We completely agree that this is a very important question. However, this type of task grouping is best handled through network architecture search, not through tuning the loss function. Network branch structure should begin to mimic correlations amongst network tasks (see for example Lu et al 2016, Farley et al 2015). There are certainly some exciting possibilities in the direction of using gradients for architecture search, but they’re rather out of the scope to the methods proposed in our manuscript, but are directions we are considering for future study.\n\nHowever, although GradNorm isn’t the most direct solution to correlation structures in the labels, we still do well in the presence of these correlations. In our NYUv2 experiments, there is actually a strong grouping amongst tasks: depth and normals are strongly correlated (in fact the latter is calculated from the former), while segmentation or room layout are less related and dependent on rather different semantics. Despite this, GradNorm still converges to optimal weights and improves performance on all tasks, due to the asymmetry $\\alpha$: higher $\\alpha$ informs our network to expect more complicated relationships between tasks, including complicated correlation structures. \n\n(((2. One intuitive approach to task balancing would be to weight each task objective based on the variance of each task. It would be helpful to add a few simple and intuitive baselines in the experiments. )))\n\nRESPONSE: Kendall’s et al.’s methodology actually does precisely this. The Kendall et al. method (uncertainty weighting) is essentially a sophisticated variant of variance weighting - it uses a Bayesian framework to model intrinsic task variance, and then picks the loss weights w_i(t) based directly on these variance estimates. It is clear, however, that GradNorm outperforms this type of variance weighting from our experiments. \n\n(((3. In Section 4, it would be great to have more in-depth simulations (e.g., multi-task learning in various settings). Also, in the bottom right panel in Figure 2, GrandNorm and equal weighting decrease test errors effectively even after 15000 steps but uncertainty weighting seems to reach a plateau. Discussions on this would be useful.)))\n\nRESPONSE: In terms of more in-depth simulations, we did present results for a variety of different tasks: regression, classification, and synthetic/simulated. There are certainly more tasks we could try (although often we are limited by not having good multitask labels for various scenarios). 
We believe that experiments we performed show that our methodology is robust to many standard factors (architecture, single-task loss function choices, etc.)\n\nThe pitfall of the uncertainty weighting technique comes from its tendency to overly boost gradients through training. As L_i decreases, uncertainty weighting aggressively tries to compensate with higher task weights, and a higher global learning rate as a result. In our case, this improved training initially but then training reached a plateau when the global learning rate grew too large. We will make this clear in the revision.\n\n(((4. It would be useful to discuss the implementation of the method as well. )))\n\nRESPONSE: We welcome any specific detail requests, and we are planning to add many more details of the implementation in the revision (soon to come). We have also summarized the implementation of Gradnorm in RE: IMPLEMENTATION OF GRADNORM above in “Rebuttal: General Comments.”\n",
"General comments: Thank you very much for your comments. We will upload a revised version of the manuscript with a reworked section 3.1 and a more detailed exposition: we hope this will make the methodology and motivations clearer. We also hope to clarify a few things regarding your other points below:\n\n(((Although I get the high-level goal of the paper, I find Sec. 3.1, which describes the technical approach, nearly incomprehensible. There are many things unclear. For example:\n\n- it starts with talking about multiple tasks, and then immediately talks about a \"filter F\", without defining what the kind of network is being addressed.\n\n- Also it is not clear what L_grad is. It looks like a loss, but Equation 2 seems to define it to be the difference between the gradient norm of a task and the average over all tasks. It is not clear how it is used. In particular, it is not clear how it is used to \"update the task weights\")))\n\nRESPONSE: Please see “GENERAL COMMENTS ON IMPLEMENTATION” above in “Rebuttal: General Comments.”\n\n(((- Equation 2 seems sloppy. “j” appears as a free index on the right side, but it doesn’t appear on the left side. )))\n\nRESPONSE: The revision will have cleaner notation.\n\n(((As a result, I am unable to understand how the method works exactly, and unable to judge its quality and originality.\n\nThe toy experiment is not convincing. \n\n- the evaluation metric is the sum of the relative losses, that is, the sum of the original losses weighted by the inverse of the initial loss of each task. This is different from the sum of the original losses, which seems to be the one used to train the “equal weight” baseline. A more fair baseline is to directly use the evaluation metric as the training loss. )))\n\nRESPONSE: Please see “RE: TOY EXAMPLE AND THE SUM-OF-LOSS-RATIO METRIC” above in “Rebuttal: General Comments.”\n\n(((- the curves seem to have not converged.)))\n\n\nRESPONSE: For ease of visualization we only show the first 25k steps of training, but the trend is consistent beyond that point as well.\n\n(((The experiments on NYUv2 involves non-standard settings, without a good justification. So it is not clear if the proposed method can make a real difference on state of the art systems. )))\n\n\nRESPONSE: Regarding the non-standard settings, please see “RE: DATASET/SETTING USED” above in “Rebuttal: General Comments.”\n\nIn addition, we’ve already shown that GradNorm is optimal in some important ways (being able to find optimal grid search weights, for example) for task weight tuning, so any system which stands to benefit from properly tuned task weights should benefit from GradNorm, regardless of how complex or how many parallel components are in the model otherwise. Note that many of the architectures we used (VGG-SegNet and ResNet-FCN) are popular state-of-the-art architectures and we showed significant improvement in both. \n\n(((And the reason that the proposed method outperforms the equal weight baseline seems to be that the method prevents overfitting on some tasks (e.g. depth). However, the method works by normalizing the norms of the gradients, which does not necessarily prevent overfitting — it can in fact magnify gradients of certain tasks and cause over-training and over-fitting. So the performance gain is likely dataset dependent, and what happens on NYU depth can be a fluke and does not necessarily generalize to other datasets. )))\n\nRESPONSE: GradNorm should not magnify gradients on overfitting tasks. 
Overfitting will lead to artificially high training rates, and GradNorm will curtail gradients for tasks with high training rates. For NYUv2 this was very apparent in depth regression. \n\nIt is true that we focused on the NYUv2 dataset, but we feel there are a few very strong reasons to believe that the results are very statistically significant and not due to dataset bias: (1) We tried on different subsets of tasks: one with segmentation and one with room layout regression. GradNorm showed consistent performance on both. (2) We also tried on very different architectures with various connectivities, and different dataset sizes. (3) GradNorm quickly converged to optimal gridsearch task weights and beat 100 randomly initialized static networks. It is highly unlikely that GradNorm would have arrived at this set of weights through random chance. \n",
"General comments: Thank you very much for your review. We are working on a revision that will address clarifications you asked for, but please allow us to respond to each of your points below.\n\n(((- The main weakness of this work is the unclear exposition of the proposed technique. Entire technique is explained in a short section-3.1 with many important details missing. There is no clear basis for the main equations 1 and 2. How does equation-2 follow from equation-1? Where is the expectation coming from? What exactly does ‘F’ refer to? There is dependency of ‘F’ on only one of sides in equations 1 and 2? More importantly, how does the gradient normalization relate to loss weight update? It is very difficult to decipher these details from the short descriptions given in the paper.)))\n\nRESPONSE: Please see “RE: IMPLEMENTATION OF GRADNORM” above in “Rebuttal: General Comments”.\n\n(((Also, several details are missing in toy experiments. What is the task here? What are input and output distributions and what is the relation between input and output? Are they just random noises? If so, is the network learning to overfit to the data as there is no relationship between input and output? )))\n\nRESPONSE: Our toy example illustrates a simple but important scenario where standard methods fail: multiple related regression tasks whose ground truth is statistically IID *except* for a scaling $\\sigma_i$. The task was defined in equation (3) and described in the text directly following the equation. The target function is a multi-dimensional tanh function, and the inputs are $A_i = B + \\epsilon_i$, with B a common baseline and individual elements of all matrices generated from a random normal distribution centered at 0 (B has std 10, while A_i std 3.5).\n\n(((Minor Weaknesses:\n- There are no training time comparisons between the proposed technique and the standard fixed loss learning.)))\n\nRESPONSE: This was mentioned in the manuscript, but GradNorm adds around 5% compute time to our networks. This is because we apply GradNorm only at a very upstream set of kernel weights.\n\n(((Authors claim that they operate directly on the gradients inside the network. But, as far as I understood, the authors only update loss weights in this paper. Did authors also experiment with gradient normalization in the intermediate CNN layers?)))\n\nRESPONSE: By “operate directly” on the gradients, we mean that the gradients are explicitly a part of our loss function (which necessitates taking gradients of gradients). This is in contrast with the traditional methods that do not explicitly take the first-order gradients into account. We chose to apply GradNorm only to a very upstream CNN layer because it saved on overhead compute significantly.\n\n(((No comparison with state-of-the-art techniques on the experimented tasks and datasets.)))\n\nRESPONSE: We did make a crucial comparison to the state-of-the-art: we showed that our multi-task balancing method produces superior results to the Kendall et al dynamic weighting method, which is a state-of-the-art method for multitask learning. \n\nAfter the submission we performed some tests on the full-resolution NYUv2 task on the standard dataset: GradNorm in that case improves depth error by ~10%, segmentation mIoU by ~7%, and normals error by ~28% over an equal weights baseline, so the results are consistent across resolutions. 
\n\n\n(((Clarifications:\n- See the above mentioned issues with the exposition of the technique.\n- In the experiments, why are the input images downsampled to 320x320?)))\n\nRESPONSE: Please see “RE: DATASET/SETTING USED” above in “Rebuttal: General Comments.” \n\n(((What does it mean by ‘unofficial dataset’ (page-4). Any references here?)))\n\nThis description is confusing and will be removed from the revision, but in addition to running GradNorm on the standard NYUv2 dataset, we used an expanded version of NYUv2 with additional annotations that were labeled/calibrated in house (hence ‘unofficial’). This involved 40x increase in number of labels compared to NYUv2, and we would be happy to make this dataset available at the conference.\n\n(((Why is 'task normalized' test-time loss as good measure for comparison between models in the toy example (Section 4)? The loss ratios depend on initial loss, which is not important for the final performance of the system.)))\n\nPlease see “RE: TOY EXAMPLE AND THE SUM-OF-LOSS-RATIO METRIC” above in “Rebuttal: General Comments.”\n\n(((Suggestions:\n\n- The term ’GradNorm’ seem to be not defined anywhere in the paper.)))\n\nWe will make this more explicit in the revision.\n",
"Thanks to all the reviewers for your comments. We are in process of revising the manuscript to address all concerns but also would like to offer clarifications here to the questions/remarks we received.\n\n-----\n\nRE: IMPLEMENTATION OF GRADNORM\n\nWe received a few comments to clarify the implementation of GradNorm. We are in process of overhauling this explanation in the manuscript but give a short summary here of the method.\n\nFirst, by filter F we mean kernel weights W for some layer in the network, and we will switch to this notation (both here and in the revision) for clarity. To summarize GradNorm:\n\nWe identify the kernel weights W for a layer in any neural network architecture. Usually this layer is the last layer which couples to all tasks within the network. We will normalize the gradients of the loss at W. \nWe want the norms of the gradients on W to be rate balanced (i.e. no task trains very quickly relative to other tasks). Therefore, the derivative of the task i loss w.r.t W denoted $|\\nabla_W L_i|$ is made proportional to $[(r_i)^{-1}]^{\\alpha}$ for relative task training rate $r_i$ and hyperparameter $\\alpha$. We argued that the loss ratio $L’_i$ gives us the inverse task training rate, so $(r_i)^{-1} = L’_i/E_{task}[L’]$. (This is eq 1). \nSince GradNorm only reasons about relative quantities, we should keep the mean gradient norm unchanged. The constant of proportionality in point (2) above is thus most naturally the average gradient norm, $E_{task}[|nabla_W L_i|]$. Eq 1 thus defines a target value for each gradient norm $|nabla_W L_i|$, and our method pushes gradient norms towards this target via an L_1 loss between the value of $|nabla_W L_i|$ versus the desired value. (This loss is eq 2).\nWe backpropagate this loss like any normal loss function into the loss weights w_i(t) of the network. In principle we could also backpropagate this signal into all network parameters but this tends to degrade performance and speed.\n\n------\n\nRE: TOY EXAMPLE AND THE SUM-OF-LOSS-RATIO METRIC.\n\nThere was some concern that we used the sum of loss ratios, $L_i(t)/L_i(0)$, as our performance metric for the toy example. Please see future revision for more discussion, but from a multitask perspective, designing an appropriate statistic by which to judge overall performance is very difficult. The toy example, however, involves tasks which are statistically IID except for a scaling factor $\\sigma_i$ per task: thus, the sum of loss ratios is the natural choice for gauging the performance of the network.\n\nFor more complex real tasks, the sum of loss ratios may not be very meaningful. In NYUv2, the loss ratio weights are clearly not optimal (this is clear from our gridsearch experiments). If we knew how to pick the correct multitask evaluation metric in general, onerous methods like gridsearch would be obsolete, as the evaluation metric would automatically set the correct training loss. So in our toy example, using the sum of loss ratios as the training loss would be rather circular; the entire point is to start with equal weighting (note that GradNorm also initializes task weights to be equal), and then to evaluate how well methods perform based on a “true” aggregate performance metric. \n\n-----\n\nRE: DATASET/SETTING USED.\n\nWe received some comments that our training setting (image resolutions, etc.) seemed nonstandard. 
To clarify, we generally followed architectures and resolutions from Lee et al (2017) for room layout estimation, as it is the state-of-the-art for room layout estimation. This resolution also allowed for faster training without losing complexity in the inputs or outputs. As our results are for multitask learning and our baseline comparisons are to other multitask learning techniques which are agnostic to the dataset settings chosen, we emphasize that our results are not dependent on the resolution of the data or the specific dataset.\n",
"Hi there,\n\nThanks very much for the comment. You're absolutely right - GradNorm at an implementation level amounts to dynamically finding the right weights w_i(t) which then goes into a weighted average of each individual task loss. This is implemented via the equations in the manuscript. However, the core reason these equations are meaningful is because they set a common scale for our backpropped gradients by *normalizing* gradients to two additional pieces of data: (1) the average gradient norms amongst different tasks, and (2) the relative training rate of tasks. The influence of the latter is controlled by the asymmetry parameter alpha, as described in the manuscript. That's why we are normalizing our gradients: we discovered a meaningful common scale for these gradients which tells us how they relate to each other and we use these relationships to our advantage during training.\n\nThanks also for pointing out the other paper - I think it would be more well-defined to just refer to our method as GradNorm, which we also used to draw the analogy to BatchNorm. Normalizing gradients can mean many different things (depending on the objective, the scale/data you normalize to, etc.), and our proposed method focuses on one way to do this in the context of multitask learning. But depending on the application there can certainly be other methods where the term \"gradient normalization\" would also apply. \n",
"Hi,\n\nI am a bit confused why the method is called gradient NORMALIZATION? From my understanding, it is essentially dynamic weighted average according to eqn (1)(2). Am I correct?\n\nIn fact, the name \"gradient normalization\" was proposed earlier in the following paper: \nhttps://arxiv.org/pdf/1707.04822.pdf\n\nIt might be good to clarify this to avoid any confusion.\n\nBest."
] | [
6,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1bM1fZCW",
"iclr_2018_H1bM1fZCW",
"iclr_2018_H1bM1fZCW",
"H10ZQaugz",
"Bycjn6tef",
"Sy3BPIMbM",
"H10ZQaugz",
"Sy3BPIMbM",
"Bycjn6tef",
"iclr_2018_H1bM1fZCW",
"rkHMM4KeM",
"iclr_2018_H1bM1fZCW"
] |
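To make the GradNorm update summarized in the authors' rebuttal above more concrete, the following is a minimal numpy sketch of one loss-weight step: each task's target gradient norm is formed from the mean gradient norm and the loss ratios, and an L1 penalty between observed and target norms is differentiated with respect to the task weights only. The function name, the sign-based derivative, the weight renormalisation, and all constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradnorm_weight_update(grad_norms, loss_ratios, w, alpha=1.5, lr=0.025):
    """One GradNorm-style step on the task loss weights w, following the
    rebuttal's summary: each task's target gradient norm is the mean gradient
    norm scaled by its relative inverse training rate (loss ratio), and an L1
    penalty between observed and target norms is differentiated w.r.t. w only.

    grad_norms[i] ~ |grad_W (w_i * L_i)| at the shared layer W, and
    loss_ratios[i] = L_i(t) / L_i(0). The sign-based derivative, the
    renormalisation, and the constants are illustrative assumptions.
    """
    grad_norms = np.asarray(grad_norms, dtype=float)
    loss_ratios = np.asarray(loss_ratios, dtype=float)
    w = np.asarray(w, dtype=float)

    inv_rate = loss_ratios / loss_ratios.mean()         # (r_i)^{-1} in the rebuttal
    target = grad_norms.mean() * inv_rate ** alpha      # treated as a constant
    unweighted = grad_norms / np.maximum(w, 1e-12)      # |grad_W L_i|, since G_i = w_i * |grad_W L_i|
    w_grad = np.sign(grad_norms - target) * unweighted  # d/dw_i of sum_j |G_j - target_j|
    w_new = np.maximum(w - lr * w_grad, 1e-6)           # keep weights positive
    return w_new * len(w_new) / w_new.sum()             # renormalise so weights sum to the task count

# toy usage: task 0 is training much faster than task 1, so its weight is reduced
print(gradnorm_weight_update(grad_norms=[4.0, 1.0], loss_ratios=[0.2, 0.8], w=[1.0, 1.0]))
```

In a full multitask network the gradient norms would be taken at the last shared layer, as the rebuttal describes, and the updated weights would then scale each task loss in the next backward pass.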
iclr_2018_H1OQukZ0- | Online Hyper-Parameter Optimization | We propose an efficient online hyperparameter optimization method which uses a joint dynamical system to evaluate the gradient with respect to the hyperparameters. While similar methods are usually limited to hyperparameters with a smooth impact on the model, we show how to apply it to the probability of dropout in neural networks. Finally, we show its effectiveness on two distinct tasks. | rejected-papers | This paper presents an update to the method of Franceschi 2017 for optimizing regularization hyperparameters, aimed at improving stability. However, the theoretical story isn't so clear, and the results aren't much of an improvement. Overall, the presentation and development of the idea need work. | train | [
"S1Qv42tlz",
"HkNOs3g-z",
"SkFNKkUZf",
"rJmsNRjXz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Summary of the paper\n---------------------------\nThe paper addresses the issue of online optimization of hyper-parameters customary involved in deep architectures learning. The covered framework is limited to regularization parameters. These hyper-parameters, noted $\\lambda$, are updated along the training of model parameters $\\theta$ by relying on the generalization performance (validation error). The paper proposes a dynamical system including the dynamical update of $\\theta$ and the update of the gradient $y$, derivative of $\\theta$ w.r.t. to the hyper-parameters. The main contribution of the paper is to propose a way to re-initialize $y$ at each update of $\\lambda$ and a clipping procedure of $y$ in order to maintain the stability of the dynamical system. Experimental evaluations on synthetic or real datasets are conducted to show the effectiveness of the approach.\n\nComments\n-------------\n- The materials of the paper sometimes may be quite not easy to follow. Nevertheless the paper is quite well written.\n- The main contributions of the paper can be seen as an incremental version of (Franceschi et al, 2017) based on the proposal in (Luketina et al., 2016). As such the impact of the contributions appears rather limited even though the experimental results show a better stability of the method compared to competitors.\n- One motivation of the approach is to fix the slow convergence of the method in (Franceschi et al, 2017). The paper will gain in quality if a theoretical analysis of the speed-up brought by the proposed approach is discussed.\n- The goal of the paper is to address automatically the learning of regularization parameters. Unfortunately, Algorithm 1 involves several other hyper-parameters (namely clipping factor $r$, constant $c$ or $\\eta$) which choices are not clearly discussed. It turns that the paper trades a set of hyper-parameters for another one which tuning may be tedious. This fact weakens the scope of the online hyper-parameter optimization approach.\n- It may be helpful to indicate the standard deviations of the experimental results.",
"\n# Summary of paper\nThe paper proposes an algorithm for hyperparameter optimization that can be seen as an extension of Franceschi 2017 were some estimates are warm restarted to increase the stability of the method. \n\n# Summary of review\nI find the contribution to be incremental, and the validation weak. Furthermore, the paper discusses the algorithm using hand-waiving arguments and lacks the rigor that I would consider necessary on an optimization-based contribution. None of my comments are fatal, but together with the incremental contribution I'm inclined as of this revision towards marginal reject. \n\n# Detailed comments\n\n1. The distinction between parameters and hyperparameters (section 3) should be revised. First, the definition of parameters should not include the word parameters. Second, it is not clear what \"parameters of the regularization\" means. Typically, the regularization depends on both hyperparameters and parameters. The real distinction between parameters and parameters is how they are estimated: hyperparameters cannot be estimated from the same dataset as the parameters as this would lead to overfitting and so need to be estimated using a different criterion, but both are \"begin learnt\", just from different datasets.\n\n2. In Section 3.1, credit for the approach of computing the hypergradient by backpropagating through the training procedure is attributed to Maclaurin 2015. This is not correct. This approach was first proposed in Domke 2012 and refined by Maclaurin 2015 (as correctly mentioned in Maclaurin 2015).\n\n3. Some quantities are not correctly specified. I should not need to guess from the context or related literature what the quantities refer to. theta_K for example is undefined (although I could understand its meaning from the context) and sometimes used with arguments, sometimes without (i.e., both theta_K(lambda, theta_0) and theta_K are used).\n\n4. The hypothesis are not correctly specified. Many of the results used require smoothness of the second derivative (e.g., the implicit function theorem) but these are nowhere stated.\n\n5. The algorithm introduces too many hyper-hyperparameters, although the authors do acknowledge this. While I do believe that projecting into a compact domain is necessary (see Pedregosa 2016 assumption A3), the other parameters should ideally be relaxed or estimated from the evolution of the algorithm.\n\n# Minor\n\nmissing . after \"hypergradient exactly\".\n\n\"we could optimization the hyperparam-\" (typo)\n\nReferences:\n Justin Domke. Generic methods for optimization-based modeling. In\nInternational Conference on Artificial Intelligence and Statistics, 2012.\n",
"Summary of paper:\n\nThis work proposes an extension to an existing method (Franceschi 2017) to optimize regularization hyperparameters. Their method claims increased stability in contrast to the existing one.\n\nSummary of review:\n\nThis is an incremental change of an existing method. This is acceptable as long as the incremental change significantly improves results or the paper presents some convincing theoretical arguments. I did not find either to be the case. The theoretical arguments are interesting but lacking in rigor. The proposed method introduces hyper-hyperparameters which may be hard to tune. The experiments are small scale and it is unclear how much the method improves random grid search. For these reasons, I cannot recommend this paper for acceptance.\n\nComments:\n1. Paper should cite Domke 2012 in related work section.\n2. Should state and verify conditions for application of implicit function theorem on page 2.\n3. Fix notation on page 3. Dot is used on the right hand side to indicate an argument but not left hand side for equation after \"with respect to \\lambda\".\n4. I would like to see more explanation for the figure in Appendix A. What specific optimization is being depicted? This figure could be moved into the paper's main body with some additional clarification.\n5. I did not understand the paragraph beginning with \"This poor estimation\". Is this just a restatement of the previous paragraph, which concluded convergence will be slow if \\eta is too small?\n6. I do understand the notation used in equation (8) on page 4. Are <, > meant to denote less than/greater than or something else?\n7. Discussion of weight decay on page 5 seems tangential to main point of the paper. Could be reduced to a sentence or two.\n8. I would like to see some experimental verification that the proposed method significantly reduces the dropout gradient variance (page 6), if the authors claim that tuning dropout probabilities is an area they succeed where others don't.\n9. Experiments are unconvincing. First, only one hyperparameter is being optimized and random search/grid search are sufficient for this. Second, it is unclear how close the proposed method is to finding the optimal regularization parameter \\lambda. All one can conclude is that it performs slightly better than grid search with a small number of runs. I would have preferred to see an extensive grid search done to find the best possible \\lambda, then seen how well the proposed method does compared to this.\n10. I would have liked to see a plot of how the value of lambda changes throughout optimization. If one can initialize lambda arbitrarily and have this method find the optimal lambda, that is more impressive than a method that works simply because of a fortunate initialization.\n\n\nTypos:\n1. Optimization -> optimize (bottom of page 2)\n2. Should be a period after sentence starting \"Several algorithms\" on page 2.\n3. In algorithm box on page 5, enable_projection is never used. Seems like warmup_time should also be an input to the algorithm. \n",
"We would like to thank the reviewers for their feedback.\nWe agree that the paper would benefit from some stronger theoretical justifications and/or more extensive experiments.\nThis will be part of some future work and we accept the decision of the reviewers to reject the paper.\n\nSome answers to AnonReviewer4:\nThanks for your feedback.\n#4: Figure in appendix A is a simplified view of the behavior of the 2 dynamical systems (given by Eq 6 and 7) when the first one (Eq 6) has reached convergence. The figure shows the convergence of the derivative (Eq (7)) in \\lambda_0 can take some extra time after (6) has converged.\n#5: The sentence mitigates what was previously said. When using a fixed number of steps, the estimated hyper-gradient could be far from the true hyper-gradient. However, the value of the hyper-parameter is not going to be significantly altered since the norm of the estimated gradient is a O(K \\eta) (Summary: the direction of the estimated hypergradient might be wrong but its norm is small).\n#6: < x, y> in equation 8 denotes the inner product.\n#9: We agree that the goal would be to apply this method on more than 1 hyper-parameter. We started with 1 hyper-parameter to gain a better understanding of the dynamic of the training.\n#10: Appendix B.2 is showing some plots of the dropout parameter along the training with different initial values (from 0.1 to 0.9). As can be seen, for method \"Clip5\", all the points, regardless of the initialization of the hyper-parameter converge to the minimum of the validation loss.\n\nAnswer to AnonReviewer1:\nThanks for your comments. We will work to improve the paper to make it more rigorous. \n\nAnswer to AnonReviewer2:\nThanks for your comments.\nWe agree that the choice of the hyper-hyperparameter could be more extensively studied.\nWe have 2 hyper-hyperparameters that are specific to the estimation of the hyper-parameters: the clipping factor $r$ and the learning rate scale $c$. Table 3 tends to show that the algorithm is pretty robust w.r.t. the choice of the hyper-hyperparameters.\n"
] | [
4,
5,
4,
-1
] | [
3,
3,
3,
-1
] | [
"iclr_2018_H1OQukZ0-",
"iclr_2018_H1OQukZ0-",
"iclr_2018_H1OQukZ0-",
"iclr_2018_H1OQukZ0-"
] |
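The reviews above centre on how the hypergradient is obtained by evolving the sensitivity y = dθ/dλ jointly with the parameters, together with the paper's two stabilising tweaks: re-initialising y after each hyperparameter update and clipping it. The sketch below illustrates that mechanism on plain ridge regression in numpy; the learning rates, restart schedule, and clipping radius are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr, Xval = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
t_tr = Xtr @ w_true + 0.5 * rng.normal(size=80)
t_val = Xval @ w_true + 0.5 * rng.normal(size=40)

theta = np.zeros(5)          # model parameters
y = np.zeros(5)              # y_t = d theta_t / d lambda, evolved jointly with theta
lam, eta, hyper_lr, clip_r = 1.0, 0.05, 0.5, 10.0

for step in range(500):
    grad = Xtr.T @ (Xtr @ theta - t_tr) / len(t_tr) + lam * theta
    H = Xtr.T @ Xtr / len(t_tr) + lam * np.eye(5)     # Hessian of the training loss
    # joint dynamical system: sensitivity update, then the usual parameter step
    y = y - eta * (H @ y + theta)                     # d(grad)/d(lambda) = theta for L2 regularisation
    theta = theta - eta * grad
    norm_y = np.linalg.norm(y)
    if norm_y > clip_r:                               # stabilising clipping of the sensitivity
        y = y * clip_r / norm_y
    if (step + 1) % 20 == 0:                          # occasional online hyperparameter update
        val_grad = Xval.T @ (Xval @ theta - t_val) / len(t_val)
        hypergrad = float(val_grad @ y)               # chain rule: dL_val/dlambda = dL_val/dtheta . y
        lam = max(lam - hyper_lr * hypergrad, 0.0)
        y = np.zeros(5)                               # re-initialise y after each lambda update

print("learned regularisation strength lambda:", lam)
```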
iclr_2018_SyjjD1WRb | Evolutionary Expectation Maximization for Generative Models with Binary Latents | We establish a theoretical link between evolutionary algorithms and variational parameter optimization of probabilistic generative models with binary hidden variables.
While the novel approach is independent of the actual generative model, here we use two such models to investigate its applicability and scalability: a noisy-OR Bayes Net (as a standard example of binary data) and Binary Sparse Coding (as a model for continuous data).
Learning of probabilistic generative models is first formulated as approximate maximum likelihood optimization using variational expectation maximization (EM).
We choose truncated posteriors as variational distributions in which discrete latent states serve as variational parameters. In the variational E-step,
the latent states are then
optimized according to a tractable free-energy objective. Given a data point, we can show that evolutionary algorithms can be used for the variational optimization loop by (A) considering the bit-vectors of the latent states as genomes of individuals, and by (B) defining the fitness of the
individuals as the (log) joint probabilities given by the used generative model.
As a proof of concept, we apply the novel evolutionary EM approach to the optimization of the parameters of noisy-OR Bayes nets and binary sparse coding on artificial and real data (natural image patches). Using point mutations and single-point cross-over for the evolutionary algorithm, we find that scalable variational EM algorithms are obtained which efficiently improve the data likelihood. In general we believe that, with the link established here, standard as well as recent results in the field of evolutionary optimization can be leveraged to address the difficult problem of parameter optimization in generative models. | rejected-papers | This method makes a connection between evolutionary and variational methods in a particular model. This is a good contribution, but there has been little effort to position it in comparison to standard methods that do the same thing, showing relative strengths and weaknesses.
Also, please shorten the abstract. | train | [
"BykoTdLNG",
"B1HSi_8Vf",
"ryWjGsdef",
"SyHYDG5lf",
"S15xOyjgf",
"BJaIxMSGz",
"SkqzzmSzG",
"ByuaozrzM",
"ByFAWfSGz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Just to clarify my position, I would suggest that any \"proof of concept\" paper DOES need to position itself clearly with respect to other approaches. You want your reader to walk away with clear understand of when you use your approach and why.\n\nEven a \"conceptual\" positioning would be a step in the right direction, such as clearly explaining an instance where your method would have lower runtime than an alternative E-step, or avoid local optima better than an alternative.\n\nThis is not to say you need some exhaustive benchmark table. To me, it would be totally fine if you had a few results where your method was better, and some where it wasn't, as long as there was clear understanding of when your method improves on baselines and why.\n\nGood luck on future submissions!\n",
"Thanks to the authors for their reply. For this review cycle, I stand by my original rating of \"4\". I think the broad idea of using EA as a substep within a monotonically improving free energy algorithm could be interesting, but needs far more experimental justification than presented here as well as more insightful suggestions about how to select the best EA procedure (use crossover? use mutations?) for new problems.\n\nI'm glad to hear that clarifications about hyperparameters and the definition of logP are on the TODO list. These are definitely needed to help others understand and deploy this method effectively.",
"The paper presents a combination of evolutionary computation (EC) and variational EM for models with binary latent variables represented via a particle-based approximation.\n\nThe scope of the paper is quite narrow as the proposed method is only applicable to very specialised models. Furthermore, the authors do not seem to present any realistic modelling problems where the proposed approach would clearly advance the state of the art. There are no empirical comparisons with state of the art, only between different variants of the proposed method.\n\nBecause of these limitations, I do not think the paper can be considered for acceptance.\n\nDetailed comments:\n\n1. When revising the paper for next submission, please make the title more specific. Papers with very broad titles that only solve a very small part of the problem are very annoying.\n\n2. Your use of crossover operators seems quite unimaginative. Genomes have a linear order but in the case of 2D images you use it is not obvious how that should be mapped to 1D. Combining crossovers in different representations or 2D crossovers might fit your problem much better.\n\n3. Please present a real learning problem where your approach advances state of the art.\n\n4. For the results in Fig. 7, please run the algorithm until convergence or justify why that is not necessary.\n\n5. Please clarify the notation: what is the difference between y^n and y^(n)?\n",
"## Review summary\n\nOverall, the paper makes an interesting effort to tightly integrate\nexpectation-maximization (EM) training algorithms with evolutionary algorithms\n(EA). However, I found the technical description lacking key details and the\nexperimental comparisons inadequate. There were no comparisons to non-\nevolutionary EM algorithms, even though they exist for the models in question.\nFurthermore, the suggested approach lacks a principled way to select\nand tune key hyperparameters. I think the broad idea of using EA as a substep\nwithin a monotonically improving free energy algorithm could be interesting,\nbut needs far more experimental justification.\n\n\n## Pros / Stengths\n\n+ effort to study more than one model family\n\n+ maintaining monotonic improvement in free energy\n\n\n## Cons / Limitations\n\n- poor technical description and justification of the fitness function\n\n- lack of comparisons to other, non-EA algorithms\n\n- lack of study of hyperparameter sensitivity\n\n\n## Paper summary\n\nThe paper suggests a variant of the EM algorithm for binary hidden variable\nmodels, where the M-step proceeds as usual but the E-step is different in two\nways. First, following work by J. Lucke et al on Truncated Posteriors, the\ntrue posterior over the much larger space of all possible bit vectors is\napproximated by a more tractable small population of well-chosen bit vectors,\neach with some posterior weight. Second, this set of bit vectors is updated\nusing an evolutionary/genetic algorithm. This EA is the core contribution,\nsince the work on Trucated Posteriors has appeared before in the literature.\nThe overall EM algorithm still maintains monotonic improvement of a free\nenergy objective.\n\nTwo well-known generative models are considered: Noisy-Or models for discrete\ndatasets and Binary Sparse Coding for continuous datasets. Each has a\npreviously known, closed-form M-step (given in supplement). The focus is on\nthe E-step: how to select the H-dimensional bit vector for each data point.\n\nExperiments on artificial bars data and natural image patch datasets compare\nseveral variants of the proposed method, while varying a few EA method\nsubsteps such as selecting parents by fitness or randomly, including crossover\nor not, or using generic or specialized mutation rates.\n\n\n## Significance\n\nCombining evolutionary algorithms (EA) within EM has been done previously, as\nin Martinez and Vitria (Pattern Recog. Letters, 2000) or Pernkopf and\nBouchaffra (IEEE TPAMI, 2005) for mixture models. However, these efforts seem\nto use EA in an \"outer loop\" to refine different runs of EM, while the present\napproach uses EA in a substep of a single run of EM. I guess this is\ntechnically different, but it is already well known that any E-step method\nwhich monotonically improves the free energy is a valid algorithm. Thus, the\npaper's significance hinges on demonstrating that the particular E step chosen\nis better than alternatives. I don't think the paper succeeded very well at\nthis: there were no comparisons to non-EA algorithms, or to approaches that\nuse EA in the \"outer loop\" as above.\n\n\n## Clarity of Technical Approach\n\nWhat is \\tilde{log P} in Eq. 7? This seems a fundamental expression. Its\nplain-text definition is: \"the logarithm of the joint probability where\nsummands that do not depend on the state s have been elided\". To me, this\ndefinition is not precise enough for me to reproduce confidently... is it just\nlog p(s_n, y_n | theta)? 
I suggest revisions include a clear mathematical\ndefinition. This omission inhibits understanding of this paper's core\ncontributions.\n\nWhy does the fitness expression F defined in Eq. 7 satisfy the necessary\ncondition for fitness functions in Eq. 6? This choice of fitness function does\nnot seem intuitive to me. I think revisions are needed to *prove* this fitness\nfunction obeys the comparison property in Eq. 6.\n\nHow can we compute the minimization substep in Eq. 7 (min_s \\tilde{logP})? Is\nthis just done by exhaustive search over bit vectors? I think this needs\nclarification.\n\n\n## Quality of Experiments\n\nThe experiments are missing a crucial baseline: non-EA algorithms. Currently\nonly several varieties of EA are compared, so it is impossible to tell if the\nsuggested EA strategies even improve over non-EA baselines. As a specific\nexample, previous work already cited in this paper -- Henniges et al (2000) --\nhas developed a non-EA EM algorithm for Binary Sparse Coding, which already\nuses the truncated posterior formulation. Why not compare to this?\n\nThe proposed algorithm has many hyperparameters, including number of\ngenerations, number of parents, size of the latent space H, size of the\ntruncation, etc. The current paper offers little advice about selecting these\nvalues intelligently, but presumably performance is quite sensitive to these\nvalues. I'd like to see some more discussion of this and (ideally) more\nexperiments to help practitioners know which parameters matter most,\nespecially in the EA substep.\n\nRuntime analysis is missing as well: Is runtime dominated by the EA step? How\ndoes it compare to non-EA approaches? How big of datasets can the proposed\nmethod scale to?\n\nThe reader walks away from the current toy bars experiment somewhat confused.\nThe Noisy-Or experiment did not favor crossover and and favored specialized\nmutations, while the BSC experiment reached the opposite conclusions. How does\none design an EA for a new dataset, given this knowledge? Do we need to\nexhaustively try all different EA substeps, or are there smarter lessons to\nlearn?\n\n\n\n## Detailed comments\n\nBottom of page 1: I wouldn't say that \"variational EM\" is an approximation to\nEM. Sometimes moving from EM to variational EM can mean we estimate posteriors\n(not point estimates) for both local (example-specific) and global parameters.\nInstead, the *approximation* comes simply from restricting the solution space\nto gain tractability.\n\nSec. 2: Make clear earlier that hidden var \"s\" is assumed to be discrete, not\ncontinuous.\n\nAfter Mutation section: Remind readers that \"N_g\" is number of generations\n",
"This paper proposes an evolutionary algorithm for solving the variational E step in expectation-maximization algorithm for probabilistic models with binary latent variables. This is done by (i) considering the bit-vectors of the latent states as genomes of individuals, and by (ii) defining the fitness of the individuals as the log joint distribution of the parameters and the latent space.\n \nPros:\nThe paper is well written and the methodology presented is largely clear.\n\nCons:\nWhile the reviewer is essentially fine with the idea of the method, the reviewer is much less convinced of the empirical study. There is no comparison with other methods such as Monte carlo sampling.\nIt is not clear how computationally Evolutionary EM performs comparing to Variational EM algorithm and there is neither experimental results nor analysis for the computational complexity of the proposed model.\nThe datasets used in the experiments are quite old. The reviewer is concerned that these datasets may not be representative of real problems.\nThe applicability of the method is quite limited. The proposed model is only applicable for the probabilistic models with binary latent variables, hence it cannot be applied to more realistic complex model with real-valued latent variables.",
"We thank the reviewers for their comments. As several of the points raised appear to be similar for all reviewers, we will address them here together. Other more specific points will be clarified in their relevant comment thread.\nAs underlined by abstract and discussion, we see the main contribution of this paper to be the development of a variational EM learning algorithm that directly employs evolutionary optimization techniques to monotonically increase a variational free-energy. Experimental results are provided as a proof of concept to show 1) general viability of the approach and 2) scalability up to hundreds of latent variables (also for models with challenging posterior structure such as noisy-OR).\nThe authors hope the theoretical link established here will pave the way to a wider range of new techniques which can leverage results on both variational and evolutionary approaches. We stressed that the presented results are \"a proof of concept\" (see abstract), and they were as such not meant to compete with current benchmarks, nor did we focus our research on achieving such results at this stage. We are happy that all reviewers appreciated the general novel direction, i.e., novel combination of different research fields. At the same time, we are, of course, disappointed that the absence of numerical results that were competitive with recent benchmarks was rated so negatively.\nWe will follow the feedback of the reviewers to improve that shortcoming of the paper in future versions. Given the exceptional agreement of the reviewers on the current version, such efforts are presumably better targeted at a submission to another venue - and we thank the reviewers again for their feedback which, we believe, will improve such future submissions.",
"We agree that the title as-is (which only consists of the given name of the novel learning algorithm here developed) can be misleading in that this paper has smaller scope than a full replacement of standard variational EM techinques. It will therefore be amended to reflect the applicability of the method uniquely to models with binary latent variables.\n\nThe crossover is performed on one-dimensional bit-strings (the latent states), so we are not sure a 2D crossover would benefit the algorithm.\n\nWe posted another comment in reply to the other issues raised here.",
"Many thanks for the thorough review.\n\nRegarding hyperparameter sensitivity, our experience suggests that EEM is robust w.r.t. changes in all hyperparameters (within reasonable bounds), the most significant hyperparameter being the size of the sets of latent states K. Indeed, the paper would benefit from the addition of a systematic study of hyperparameter sensitivity. Future revisions will add such a section (most likely to the appendix).\n\nRegarding the choice of fitness function, as discussed in the paper the only requirement is that it satisfies (6) (and, pragmatically, that it is cheap to compute). Any such fitness function would guarantee that adopting states with higher fitness would also increase the free energy (our true objective function).\nNonetheless, we will add a more rigorous definition of \\tilde{log P} in future revisions, as well as amend equation (7) -- the minimum should be taken over all states in the set K^n, not over all possible states. That second factor on the right hand side of (7) is only there to guarantee positivity of the fitness function (necessary for fitness-proportional parent selection).\n\nWe also agree with the reviewer that a generic procedure to select the best EA for a given generative model is desirable. In fact, it would be best if this procedure was automatic and did not require user intervention. Work in this direction is underway.\nImproving our understanding of the difference in performance of the different EAs for different models will also be part of future work on this topic.\n\nWe posted another comment in reply to the issues raised here that are common with the other reviewers.",
"We agree that the title as-is (which only consists of the given name of the novel learning algorithm here developed) can be misleading in that this paper has smaller scope than a full replacement of standard variational EM techinques. It will therefore be amended to reflect the applicability of the method uniquely to models with binary latent variables.\n\nWe posted another comment in reply to the other issues raised here."
] | [
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"BJaIxMSGz",
"ByuaozrzM",
"iclr_2018_SyjjD1WRb",
"iclr_2018_SyjjD1WRb",
"iclr_2018_SyjjD1WRb",
"iclr_2018_SyjjD1WRb",
"ryWjGsdef",
"SyHYDG5lf",
"S15xOyjgf"
] |
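As a concrete reading of the evolutionary E-step debated in the reviews above, the following numpy sketch applies it to a binary-sparse-coding-style model: each data point keeps a small population of binary latent states, fitness is the log joint with s-independent constants dropped, children are proposed by single-point crossover and bit-flip mutations, and only the fittest K states are retained, so the truncated free energy cannot decrease. Population size, mutation rate, and fitness-proportional parent selection are illustrative assumptions, not the exact EA variants compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_joint_bsc(s, y, W, pi, sigma):
    """log p(s, y | theta) for binary sparse coding: Bernoulli(pi) prior on the
    binary latent vector s, Gaussian likelihood for y = W s + noise. Additive
    terms that do not depend on s are dropped, since only comparisons matter."""
    prior = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
    resid = y - W @ s
    return prior - 0.5 * np.sum(resid ** 2) / sigma ** 2

def evolutionary_estep(y, W, pi, sigma, K=8, n_generations=5, p_flip=0.1):
    """Evolve a population of K latent bit-vectors for one data point: fitness
    is the log joint, children come from single-point crossover plus bit-flip
    mutations, and only the fittest K states survive, so the truncated
    free energy cannot decrease. Selection scheme and rates are illustrative."""
    H = W.shape[1]
    pop = rng.integers(0, 2, size=(K, H))
    for _ in range(n_generations):
        fitness = np.array([log_joint_bsc(s, y, W, pi, sigma) for s in pop])
        probs = np.exp(fitness - fitness.max())
        probs /= probs.sum()
        parents = pop[rng.choice(len(pop), size=K, p=probs)]   # fitness-proportional selection
        children = parents.copy()
        for i in range(0, K - 1, 2):                           # single-point crossover
            cut = rng.integers(1, H)
            children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
        flips = rng.random(children.shape) < p_flip            # point mutations
        children = np.where(flips, 1 - children, children)
        combined = np.unique(np.vstack([pop, children]), axis=0)
        scores = np.array([log_joint_bsc(s, y, W, pi, sigma) for s in combined])
        pop = combined[np.argsort(scores)[-K:]]                # keep the fittest states
    return pop

# toy usage: recover which of 6 binary causes generated a noisy 10-dim observation
W = rng.normal(size=(10, 6))
s_true = (rng.random(6) < 0.3).astype(int)
y_obs = W @ s_true + 0.1 * rng.normal(size=10)
print("best state :", evolutionary_estep(y_obs, W, pi=0.3, sigma=0.1)[-1])
print("true state :", s_true)
```

The M-step (updating W, pi, sigma from the retained states) is omitted here; the point is only that any E-step that replaces states with fitter ones keeps the free energy monotonically non-decreasing.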
iclr_2018_r1kP7vlRb | Toward learning better metrics for sequence generation training with policy gradient | Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In such a situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed a framework for unsupervised sequence generation in which a metric is learned from data and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn a better metric than SeqGAN's: a partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for the completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate partial correctness as well as the coherence of a sequence as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN framework. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and a model trained with MLE. Specifically, whereas SeqGAN gains a 0.42 improvement in NLL over MLE on synthetic data, our best model gains a 3.02 improvement, and whereas SeqGAN gains a 0.029 improvement in BLEU over MLE, our best model gains a 0.250 improvement. | rejected-papers | The pros and cons of this paper can be summarized as follows:
Pros:
* It seems that the method has very good intuitions: consideration of partial rewards, estimation of rewards from modified sequences, etc.
Cons:
* The writing of the paper is scattered and not very well structured, which makes it difficult to follow exactly what the method is doing. If I were to give advice, I would flip the order of the sections to 4, 3, 2 (first describe the overall method, then describe the method for partial rewards, and finally describe the relationship with SeqGAN)
* It is strange that the proposed method does not consider subsequences that do not contain y_{t+1}. This seems to run contrary to the idea of using RL or similar methods to optimize the global coherence of the generated sequence.
* For some of the key elements of the paper, there are similar (widely used) methods that are not cited, and it is a bit difficult to understand the relationship between them:
** Partial rewards: this is similar to "reward shaping" which is widely used in RL, for example in the actor-critic method of Bahdanau et al.
** Making modifications of the reference into a modified reference: this is done in, for example, the scheduled sampling method of Bengio et al.
** Weighting modifications by their reward: A similar idea is presented in "Reward Augmented Maximum Likelihood for Neural Structured Prediction" by Norouzi et al.
The approach in this paper is potentially promising, as it contains a number of valuable insights, but the clarity issues and the fact that many of the key insights already exist in other approaches, with no empirical comparison to them provided, make the contribution of the paper at the current time feel a bit weak. I am not recommending acceptance at this time, but would certainly encourage the authors to clean up the exposition, perhaps add a comparison to other methods such as RL with reward shaping, scheduled sampling, and RAML, and re-submit to another venue. | train | [
"H104OpbgM",
"SkY1f6Hlf",
"HJ2pirpxG",
"SJBLEn27z",
"SJRHEvMfG",
"Sku0lvfzz",
"ByWVfK6bf",
"r13qwSzbf",
"ByDj_rG-z",
"S1clBHfZG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"This article is a follow-up from recent publications (especially the one on \"seqGAN\" by Yu et al. @ AAAI 2017) which tends to assimilate Generative Adversarial Networks as an Inverse Reinforcement Learning task in order to obtain a better stability.\nThe adversarial learning is replaced here by a combination of policy gradient and a learned reward function.\n\nIf we except the introduction which is tainted with a few typos and English mistakes, the paper is clear and well written. The experiments made on both synthetic and real text data seems solid.\nBeing not expert in GANs I found it pleasant to read and instructive.\n\n\n\n",
"This paper describes an approach to generating time sequences by learning state-action values, where the state is the sequence generated so far, and the action is the choice of the next value. Local and global reward functions are learned from existing data sequences and then the Q-function learned from a policy gradient.\n\nUnfortunately, this description is a little vague, because the paper's details are quite difficult to understand. Though the approach is interesting, and the experiments are promising, important explanation is missing or muddled. Perhaps most confusing is the loss function in equation 7, which is quite inadequately explained.\n\nThis paper could be interesting, but substantial editing is needed before it is sufficient for publication.",
"This paper considers the problem of improving sequence generation by learning better metrics. Specifically, it focuses on addressing the exposure bias problem, where traditional methods such as SeqGAN uses GAN framework and reinforcement learning. Different from these work, this paper does not use GAN framework. Instead, it proposed an expert-based reward function training, which trains the reward function (the discriminator) from data that are generated by randomly modifying parts of the expert trajectories. Furthermore, it also introduces partial reward function that measures the quality of the subsequences of different lengths in the generated data. This is similar to the idea of hierarchical RL, which divide the problem into potential subtasks, which could alleviate the difficulty of reinforcement learning from sparse rewards. The idea of the paper is novel. However, there are a few points to be clarified.\n\nIn Section 3.2 and in (4) and (5), the authors explains how the action value Q_{D_i} is modeled and estimated for the partial reward function D_i of length L_{D_i}. But the authors do not explain how the rewards (or action value functions) of different lengths are aggregated together to update the model using policy gradient. Is it a simple sum of all of them?\n\nIt is not clear why the future subsequences that do not contain y_{t+1} are ignored for estimating the action value function Q in (4) and (5). The authors stated that it is for reducing the computation complexity. But it is not clear why specifically dropping the sequences that do not contain y_{t+1}. Please clarify more on this point.\n",
"1. We fixed some typos and grammar mistakes\n\n4.1 The title of section is substituted to \"expert-based reward function training specification\" because previous seciton title does not suit\n\n4.2 moved some explanation of modified binary cross entropy to appendix because it was bit verbose\n\n5.2.2 We changed the generated examples in Table 4 to make it easy to see the comparison. All generated examples are started from the word \"according\".\n\n",
"Given the valuable reviews, we revised the following parts of our paper.\n\n2.1 We add the description of why the dynamics is known in the sequence generation setting.\n\n3.2 We add the description of the \\alpha_{D_i} that it adjusts the importance of a partial reward function with a certain length.\n\n3.2 We describe that Q is finally calculated by aggregating all Q_{D_i}.\n\n4 We divide this section into two, because 4 has two contents, the proposal of expert-based reward function, and the modification of the objective. By receiving the comment from reviewer2, we wrote that the modified BCE has no theoretical background and is a heuristic. The justification of this objective is done by experimental way.\n\n5.1.2 We state that PG_L_exp gets benefit when \\tau=1.5, indicating that the modified BCE is effective.\n\n6 We discuss the selection of \\alpha_D and its difficulty.\n\n",
"Thanks for the reply and giving the specific parts of the paper that are unclear.\nWe are giving answer to these questions.\nMoreover, we revised our paper to satisfy your request.\n\nQ, What does “dynamics” mean?\nA. This is where our explanation lacks. I give more specific explanation.\n“dynamics” means the transition probability of the next state given the current state and action, formally p(s_{t+1} | s_{t}, a_{t}).\nIn a lot of tasks in reinforcement learning, dynamics is usually unknown and difficult to learn.\nIn a sequence generation, however, s_{t} is the sequence that the generator has generated so far and a_{t} is the next token generation, and s_{t+1} is always [s_{t}, a_{t}], therefore p(s_{t+1} | s_{t}, a_{t}) is deterministic. So, the dynamics is known.\nThis nature is important when we generate fake sequence from expert, like our method. If we do not know the dynamics, we can not determine the next state when we change the certain action.\n\nWe revised the section 2.1 by adding those explanation.\n\nQ,W_e isn't mentioned again, making it unclear what space you're learning in.\nA. W_e is just the embedding matrix (it is learned together with other weights) and we specified the dimension of embedding layer in the description of the experiment section (In synthetic data, the dimension of embedding layer is 32, and in text data, it is 200).\nDoes it answer your question?\n\nQ, The selection of \\alpha.\nA. The selection of \\alpha is important when we use partial reward functions of different scales, because it balances the priorities of the partial correctness of different scale length. Our paper probably should argue it more specifically.\n\nUnfortunately, the selection of \\alpha_{D_i} is done by nothing but hyper-parameter tuning, and we are aware that it is the problem as we argued in the discussion section. In the text generation task, we prepare two partial reward functions (Long R and Short R), and empirically show the differences of BLEU score and generated sequence when \\alpha is changed. The fact that a true metric for sequence is usually not given (except for the special case, such as oracle test) makes difficult to even validate the goodness of selected \\alpha_{D_i}. This is the reason we only try \\alpha_s = 0.3. and \\alpha_s = 1.0 in the text generation experiment.\n\nI think this problem is not only in our case, but the fundamental problem of inverse reinforcement learning (IRL). IRL learns a reward from expert, but the goodness of learned reward function can be evaluated by the behavior of policy, and the evaluation is done by a human (with a bias), or a surrogate manually designed metric.\n\nAbove discussion is included in the discussion (and a little explanation is added in 3.2).\n\nQ, Some concerns about equation 7.\nA. We understand your main concerns.\nIn our paper, equation 7 comes from nowhere, and we do not clearly say that it is completely heuristics. This would confuse readers as you were so.\n\nWe, however, believe that even though the justification of equation 7 is not done in a theoretical way, the justification can also be done in an experimental way. 
If there is a proper experimental validation for a proposal, the proposal should be the important contribution to the community.\n\nWe revised our paper as below to make section 4 clear.\nWe divided the section 4 into the two subsections 4.1 and 4.2, the one for proposing the idea of expert-based reward function training, and the other one for proposing the modified objective function.\nIn the second subsection, we clearly wrote that\n\n- objective function comes by heuristics and there is no theoretical justification.\n- when \\tau ~= 0, this objective function becomes conventional binary cross entropy.\n- The effectivity of this objective function is validated in the experiment section.\n\nand more specific explanation for the objective as we discussed in the reply for your first review.\n\nPlease have a look at the revised version and give us a reply if you have any other concerns.\n\nBest,",
"Given the thorough response and the other reviews, I went back to re-read the paper to make sure I was being fair. I was a little harsh, but still don't believe this paper is ready for publication, as important paragraphs are quite difficult to read and parse. I have changed my review from a 3 to a 4.\n\nAs an example of points that are unclear:\n\n2.1: it's quite unclear what you mean by \"dynamics\" at the end of this section which are known in the sequence generation task, confusing this explanation.\n3.1: W_e isn't mentioned again, making it unclear what space you're learning in.\n3.2: selection of alpha_D_i isn't discussed, though discounted by the fact I haven't looked at REINFORCE in some time. It seems it would matter quite a lot.\n4: Your discussion above on equation 7 helps a lot, and would benefit the paper (though I still wouldn't quite advocate acceptance). This is particularly true since elements are \"heuristic,\" as you say, making it non-obvious where they came from. This is perhaps the core of my concerns with this paper: crucial equations we are to take on faith, without justification or explanation, should not be published. It is very confusing to try and re-derive equation 7 from the points made in the preceding parts of the paper; it just doesn't follow without much more explanation.\n\n",
"Thanks for the review.\n\nFrom the title and the first paragraph of your review, we assume that you might not get our paper, maybe due to our poor writing. We are not sure how you understand our paper, so we firstly try to correct your misunderstandings.\n\nThis paper is introducing the two techniques to learn better reward function, partial reward function and expert-based reward function training, rather than introducing new RL approach. From your review, it can be assumed that you think our paper argues about q-learning, but our paper uses policy-based RL approach (it has been firstly done by Ranzato et al. and it is not our novelty) and does not argue about q-learning at all. A policy (or a sequence generator) is learned by a policy gradient, and Q-function is NOT learned by a policy gradient. In REINFORCE, Q-value is estimated by Monte-Carlo samplings. I think the first paragraph of reviewer3 well summarizes our paper. We would appreciate if you could tell us which parts of our paper actually caused your misunderstandings so that we can revise these parts.\n\nQ. Explain about equation 7 specifically.\nA. The motivation of equation 7 is, when the produced fake sequence is not quite different from the true sequence (for example, only one token in the sequence of length 20 is changed), we thought it would be effective to decrease the weight of the objective function, binary cross entropy (BCE), because this fake sequence is actually not so bad sequence. The benefit of decreasing the weight for such sequence is that the learned reward function would become easier to be maximized by a policy gradient, because learned reward function would return some reward to a generated sequence that has some mistakes. In our paper, we describe it as “smooth\" reward function.\nThe parameter \\tau in quality function directly affects the weight of BCE. When \\tau is large, the fake sequence that is little edited from expert one get a large value of quality function, resulting in making (1 - q) / (1 + q) lower than 1, and it decreases the weight of the second term in the right hand side of equation (7). On the other hand, when \\tau is small, the fake sequence that is little edited from expert one gets a near 0 value of quality function, resulting in (1 - q) / (1 + q) ~= 1, and equation (7) becomes the conventional BCE.\nThe term (1 - q) / (1 + q) is heuristic and there is no theoretical background for it, but it enables to control the strictness of the learned reward function by changing the parameter \\tau (“strict” means that only realistic sequence gets the reward close to 1, and others get the reward close to 0. A strict reward function is accurate, but it is considered to be difficult to maximize by a policy gradient because this reward function might be binary-like peaky function). In the experiment, we show that when the partial reward function has long scale, easing the conventional BCE by using \\tau=1.5 is effective.\n\nPlease give us more specific parts that you are still confused, and we are willing to give answers.\n\nBest,",
"Thanks for the review.\nYour first paragraph of the review well summarizes our paper. Our paper is seemingly well understood by you.\n\nQ. How are the action-state values of different length aggregated?\nA. We simply add the Q values of different scales. To balance the importance of different scales, we also introduce hyper parameter alpha.\n\nQ. Why are the future subsequences that do not contain y_{t+1} ignored?\nA2. In some setting such as Go or Atari games, the final state of the agent is important (e.g. win or lose), and future states affect the Q-value a lot. So, it is important to see further future state after the certain action at t to estimate Q-value in those setting. In our setting, however, the importance of states (or subsequences) does not depend on the timesteps. The partial reward functions treat every subsequences at a time step equally. So, we think the subsequences that contain y_{t+1} are enough samples (and they should depend on q-value of y_{t+1} a lot because y_{t_1} itself is in the subsequences) to estimate q-value. \nIn equation (4), the subsequences that do not contain y_{t+1} are not ignored.",
"Thank you for the review. I am glad that you enjoyed reading our paper.\nAbout the mistakes of English in the introduction part, we will get native check and revise it."
] | [
7,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1kP7vlRb",
"iclr_2018_r1kP7vlRb",
"iclr_2018_r1kP7vlRb",
"SJRHEvMfG",
"iclr_2018_r1kP7vlRb",
"ByWVfK6bf",
"r13qwSzbf",
"SkY1f6Hlf",
"HJ2pirpxG",
"H104OpbgM"
] |
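To illustrate the two ingredients discussed above — fake sequences produced by editing expert sequences, and the modified binary cross-entropy whose negative term is scaled by (1 - q)/(1 + q) — here is a small numpy sketch. The discriminator outputs are stand-in values, and the exponential form of the quality function q is an assumption; the rebuttal only specifies how q should behave as the temperature tau and the amount of editing change.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_fake(expert_seq, n_edits, vocab_size):
    """Produce a fake sequence by randomly substituting n_edits tokens of an
    expert sequence (one way of 'editing an expert sequence' as in the abstract)."""
    fake = expert_seq.copy()
    positions = rng.choice(len(fake), size=n_edits, replace=False)
    fake[positions] = rng.integers(0, vocab_size, size=n_edits)
    return fake

def modified_bce(d_expert, d_fake, n_edits, seq_len, tau=1.5):
    """Reward-function training loss where the penalty on each fake sequence is
    scaled by (1 - q) / (1 + q), as in the authors' rebuttal. The quality
    function q = exp(-n_edits / (tau * seq_len)) is an assumed form, chosen so
    that lightly edited fakes get q near 1 for large tau and q near 0 as
    tau -> 0 (recovering the conventional binary cross-entropy)."""
    q = np.exp(-np.asarray(n_edits, dtype=float) / (tau * seq_len))
    weight = (1.0 - q) / (1.0 + q)
    positive_term = -np.mean(np.log(d_expert))                # expert sequences labelled real
    negative_term = -np.mean(weight * np.log(1.0 - d_fake))   # down-weighted penalty on fakes
    return positive_term + negative_term

# toy usage with stand-in discriminator outputs (a real partial reward function
# would score expert and fake subsequences of a fixed length)
experts = [rng.integers(0, 50, size=20) for _ in range(4)]
edit_counts = [1, 3, 5, 10]
fakes = [make_fake(s, k, vocab_size=50) for s, k in zip(experts, edit_counts)]
d_expert = rng.uniform(0.6, 0.9, size=4)   # stand-in for D(expert sequence)
d_fake = rng.uniform(0.1, 0.4, size=4)     # stand-in for D(fake sequence)
print("tokens changed in first fake:", int(np.sum(fakes[0] != experts[0])))
print("modified BCE:", modified_bce(d_expert, d_fake, edit_counts, seq_len=20))
```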
iclr_2018_rkdU7tCaZ | Dynamic Evaluation of Neural Sequence Models | We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively. | rejected-papers | The pros and cons of the paper are summarized below:
Pros:
* The proposed tweaks to the dynamic evaluation of Mikolov et al. 2010 are somewhat effective, and when added on top of already-strong baseline models improve them substantially
Cons:
* Novelty is limited. This is essentially a slightly better training scheme than the method proposed by Mikolov et al. 2010.
* The fair comparison against Mikolov et al. 2010 is only shown in Table 1, where a perplexity of 78.6 improves to 73.5. This is a decent gain, but the great majority of this is achieved by switching the optimizer from SGD to an adaptive method, which as of 2018 is a somewhat limited contribution. The remainder of the tables in the paper do not compare with the method of Mikolov et al.
* The paper title, abstract, and introduction do not mention previous work, and may give the false impression that this is the first paper to propose dynamic evaluation for neural sequence models, significantly overclaiming the paper's contribution and potentially misleading readers.
As a result, while I think that dynamic evaluation itself is useful, given the limited novelty of the proposed method and the lack of comparison to the real baseline (the simpler strategy of Mikolov et al.) in the majority of the experiments, I think this paper still falls short of the quality bar of ICLR.
Also, independent of this decision, a final note about perplexity as an evaluation measure, to elaborate on the comments of reviewer 1. In general, perplexity is useful for comparing language models of the same model class, but it tends not to correlate well with model performance (e.g. ASR accuracy) across very different types of models. For example, see "Evaluation Metrics for Language Models" by Chen et al. 1998. The method of dynamic evaluation is similar to the cache-based language models that existed in 1998 in that it encourages the model to choose vocabulary similar to what it has seen before. As that paper shows, the quality of perplexity as an evaluation measure falls when cache-based models are thrown into the mix, and one reason for this is that cache models, while helping perplexity greatly, tend to reinforce previous errors when errors do occur. | train | [
"Sk-EjzwxG",
"B1Z3O3HeG",
"BkGlZbceG",
"SywY2Zvmz",
"HJ-esWv7G",
"HyTDubw7z",
"S19dzZ4bM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors provide an improved implementation of the idea of dynamic evaluation, where the update of the parameters used in the last time step proposed in (Mikolov et al. 2010) is replaced with a back-propagation through the last few time steps, and uses RMSprop rather than vanilla SGD. The method is applied to word level and character level language modeling where it yields some gains in perplexity. The algorithm also appears able to perform domain adaptation, in a setting where a character-level language model trained mostly on English manages to quickly adapt to a Spanish test set. \n\nWhile the general idea is not novel, the implementation choices matter, and the authors provide one which appears to work well with recently proposed models. The character level experiments on the multiplicative LSTM make the most convincing point, providing a significant improvement over already good results on medium size data sets. Figure 2 also makes a strong case for the method's suitability for applications where domain adaptation is important.\n\nThe paper's weakest part is the word level language modeling section. Given the small size of the data sets considered, the results provided are of limited use, especially since the development set is used to fit the RMSprop hyper-parameters. How sensitive are the final results to this choice? Comparing dynamic evaluation to neural cache models is a good idea, given how both depend en medium-term history: (Grave et al. 2017) provide results on the larger text8 and wiki103, it would be useful to see results for dynamic evaluation at least on the former.\n\nAn indication of the actual additional evaluation time for word-level, char-level and sparse char-level dynamic evaluation would also be welcome.\n\nPros:\n- Good new implementation of an existing idea\n- Significant perplexity gains on character level language modeling\n- Good at domain adaptation\n\nCons:\n- Memory requirements of the method\n- Word-level language modeling experiments need to be run on larger data sets\n\n(Edit: the authors did respond satisfactorily to the original concern about the size of the word-level data set)",
"This paper takes AWD-LSTM, a recent, state of the art language model that was equipped with a Neural Cache, swaps the cache out for Dynamic Evaluation and improves the perplexities.\n\nDynamic Evaluation was the baseline that was most obviously missing from the original Neural Cache paper (Grave, 2016) and from the AWD-LSTM paper. In this sense, this work fills in a gap.\n\nLooking at the proposed update rule for Dynamic Evaluation though, the Global Prior seems to be an implementation of the Fast Weights idea. It would be great to explore that connection, or at least learn about how much the Global Prior helps.\n\nThe sparse update idea feels very much an afterthought and so do the experiments with Spanish.\n\nAll in all, this paper could be improved a lot but it is hard to argue with the strong results ...\n\nUpdate: I'm happy with how the authors have addressed these and other comments in revision 2 of the paper and I've bumped the rating from 6 to 7.\n",
"This paper proposes a dynamic evaluation of recurrent neural network language models by updating model parameters with certain segment lengths.\n\nPros.\n- Simple adaptation scheme seems to work, and the paper also shows (marginal) improvement from a conventional method (neural cache RNNLM) \nCons.\n- The paper is not well written due to undefined variables/indexes, confused explanations, not clear explanations of the proposed method in abstract and introduction (see the comments below)\n- Although the perplexity is an important measure, it’s better to show the effectiveness of the proposed method with more practical tasks including machine translation and speech recognition. \n\nComments:\n- Abstract: it is difficult to guess the characteristics of the proposed method only with a term “dynamic evaluation”. It’s better to explain it in more detail in the abstract.\n- Abstract: It’s better to provide relative performance (comparison) of the numbers (perplexity and bits/char) from conventional methods.\n- Section 2: Some variables are not explicitly introduced when they are appeared including i, n, g, and l\n- Section 3: same comment with the above for M. Also n is already used in Section 2 as a number of sequences.\n- Section 5. Why does the paper only provide examples for SGD and RMSprop? Can we apply it to other optimization methods including Adam and Adadelta?\n- Section 6, equation (9): is this new matrix introduced for every layer? Need some explanations.\n- Section 7.1: It’s better to provide the citation of Chainer.\n- Section 7.1 “AWD-LSTM”: The paper should provide the full name of AWD-LSTM when it is first appeared.\n\n",
"Thanks for your review.\n\n“Looking at the proposed update rule for Dynamic Evaluation though, the Global Prior seems to be an implementation of the Fast Weights idea. It would be great to explore that connection, or at least learn about how much the Global Prior helps.”\n\nThis reviewer makes an insightful comment about the relationship between dynamic evaluation and fast weights. Dynamic evaluation in general does relate to fast weights (and could even be considered a type of fast weights, although it differs from traditional fast weights in the update mechanism), and the global prior we use is similar to the decay sometimes used in fast weights. We added a paragraph about this in the related work section, and updated the paper to mention the relationship with the decay rule of fast weights when we introduce the global prior. \n\nWe did provide some experiments exploring how much the global prior helps in the submitted version of the paper in table 1; a simple L2 global prior helps slightly, and scaling decay rates by RMS gradient values helps a bit more. \n\n“The sparse update idea feels very much an afterthought and so do the experiments with Spanish.”\n\nWe included the sparse update idea to address the high memory cost of dynamic evaluation when mini-batching. We included the Spanish experiments to show how dynamic evaluation handles domain adaptation, and as an illustration of the features that dynamic evaluation can learn to model on the fly. \n",
"Thanks for your review. \n\nThis reviewer requested that we add a word-level experiment on a larger dataset, and we are pleased to say that we were able to include experiments on word-level text8 in section 7.2 of the updated paper. In summary, we achieved test perplexities of static eval: 87.5, neural cache: 75.1, dynamic eval: 70.3 .These results show that dynamic evaluation still provides a large improvement to word-level language modelling on a larger dataset. \n\nAnother point about the word-level experiments is that WikiText-103 and WikiText-2 use the same test set. Our result of 44.3 on the WikiText test set (using WikiText-2 for training) outperforms the static model on WikiText-103 from the neural cache paper (Grave et al. 2017) , which achieves a perplexity of 48.7. Our result also approaches the performance of LSTM+neural cache on WikiText-103 from (Grave et al. 2017), which achieved a perplexity of 40.8. So despite using 50 times less data, our results on WikiText-2 are competitive with previous approaches trained on WikiText-103.\n\nAs for the memory requirements of the method, we did present a result for character-level language modelling with our sparse dynamic evaluation that used 0.5% of the number of adaptation parameters of regular dynamic evaluation.\n",
"Thank you for your review. \n\nThis reviewer makes unreasonable claims that the improvements in the paper are “marginal”, and that language modelling is not a sufficient benchmark. As pointed out by AnonReviewer2 in the comments of this review, the results in this paper are quite strong, and language modelling is very sensible for evaluating new techniques. This reviewer also claims that the paper is not well-written, but provides very little evidence to support this. Overall, this reviewer has no valid scientific criticisms of the paper.\n \n\"the paper also shows (marginal) improvement from a conventional method (neural cache RNNLM)\"\n\nOur results on WikiText-2, where we demonstrate a 7.7 perplexity point improvement over the previous state-of-the-art, would be considered far more than a \"marginal\" improvement by almost any standard. For instance, we report much larger perplexity gains on WikiText-2 as compared with contemporary ICLR submissions such as [1,2,3], which all use similar baselines to our experiments. Our improvements to character-level language modelling were also far more than \"marginal\".\n\n\"Some variables are not explicitly introduced when they are appeared including i, n, g, and l\"\n\n g and l are not variables, they are subscripts used to denote \"global\" and \"local\". If the reviewer thought g and l were variables, we can understand why the reviewer may have been confused by our explanations of our method. However, we did explicitly introduce all variables that use g and l subscripts, so this really should have been clear to the reviewer.\n\nThe reviewer also mentions a few minor variables that are not “explicitly introduced”, however every one of these variables is defined implicitly in sequence/set notation. \n\n\" n is already used in Section 2 as a number of sequences.\"\n\nTo avoid confusion, we replaced n in section 2 with M, since we also use M in section 3 as the number of sequences in a slightly different context.\n\n“Abstract: it is difficult to guess the characteristics of the proposed method only with a term “dynamic evaluation”. It’s better to explain it in more detail in the abstract.”\n\n Our use of the term dynamic evaluation was described in the second sentence of the abstract. We elected not to describe the specific engineering details of our dynamic evaluation method because this is beyond the scope of the abstract.\n\n\"It’s better to provide relative performance (comparison) of the numbers (perplexity and bits/char) from conventional methods.\" \n\nWe elected not to provide relative performance comparisons, because the \"deltas\" to perplexity and bits/character are almost meaningless without knowledge of how strong the baseline numbers are. Providing the static evaluation numbers alongside the dynamic evaluation numbers would also be unreasonable as it is too many results for an abstract. Providing the overall results with dynamic evaluation is the most concise way to demonstrate the effectiveness of our method, especially since all of these results improve the state-of-the-art.\n\n\"Why does the paper only provide examples for SGD and RMSprop? Can we apply it to other optimization methods including Adam and Adadelta?\"\n\nThe goal of these experiments is to demonstrate the utility of the proposed modifications of dynamic evaluation as compared to past approaches (which used SGD). 
There are an infinite number of dynamic evaluation approaches, we don't claim anywhere that ours is the best possible-- just that all of our suggested modifications improve on past approaches.\n\nAs for the two approaches suggested by the reviewer, ADAM or ADAM derived methods could be reasonable to use for dynamic evaluation, but were found not to work as well in preliminary experiments. Adadelta likely would not be sensible for dynamic evaluation, because the learning rates of Adadelta decrease over training. If dynamic evaluation were applied with Adadelta, the rate of adaptation to recent history would decrease later in the test set, which would likely hurt performance. \n\n\"equation (9): is this new matrix introduced for every layer? Need some explanations.\"\n\nThe mLSTM we applied this to only used 1 recurrent layer, so this distinction was arbitrary in the context of our experiments. In a multilayer-RNN, the new matrix could be introduced for every layer or just 1 layer. We've added a sentence in the paper clarifying this point. \n\n\"It’s better to provide the citation of Chainer. \" \n\nwe added this to the current version.\n\n“AWD-LSTM: The paper should provide the full name of AWD-LSTM when it is first appeared.”\n\nWe now provide the full name of AWD-LSTM at first appearance.\n\n[1] Memory-based Parameter Adaptation. ICLR 2018 submission. https://openreview.net/pdf?id=SkFqf0lAZ\n\n[2] Breaking the Softmax Bottleneck: A High-Rank RNN Language Model. ICLR 2018 submission. https://openreview.net/pdf?id=HkwZSG-CZ \n\n[3] Fraternal Dropout. ICLR 2018 submission . https://openreview.net/pdf?id=SJyVzQ-C-\n\n",
"Vanilla neural machine translation can be viewed as conditional language modelling (most often trained for perplexity) with additional evaluation noise in the form of BLEU. Until we find a better loss, language modelling is a very good option to explore new models, optimization and evaluation techniques.\n\nThe paper has issues that can be fixed up but it also has great results. It is far from a clear rejection."
] | [
7,
7,
3,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkdU7tCaZ",
"iclr_2018_rkdU7tCaZ",
"iclr_2018_rkdU7tCaZ",
"B1Z3O3HeG",
"Sk-EjzwxG",
"BkGlZbceG",
"BkGlZbceG"
] |
iclr_2018_SkYibHlRb | SQLNet: Generating Structured Queries From Natural Language Without Reinforcement Learning | Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the "order-matters" problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited.
In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph, so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9% to 13% on the WikiSQL task. | rejected-papers | The pros and cons of the paper, as cited by the reviewers, can be summarized as follows:
Pros:
- good problem, NL2SQL is an important task given how dominant SQL is
- incorporating a grammar ("sketch") is a sensible improvement.
Cons:
- The dataset used makes very strong simplification assumptions (that every token is an SQL keyword or appears in the NL)
- The use of a grammar in the context of semantic parsing is not novel, and no empirical comparison is made against other reasonable recent baselines that do so (e.g. Rabinovich et al. 2017).
Overall, the paper seems to do some engineering for the task of generating SQL, but without an empirical comparison to other general-purpose architectures that incorporate grammars in a similar way, the results seem incomplete, and thus I cannot recommend that the paper be accepted at this time. | train | [
"BJnEgv8Nf",
"B1y7_3YgM",
"HksQE4cez",
"HkTzAHqxf",
"S1LubAnQz",
"HykdZyNmz",
"HyrNZJ4mM",
"ryjOuKfQG",
"B1SI_FfQz",
"rJP3AS0fM",
"SkkIarCfz",
"BkOfprAzG",
"ByNjir0Mf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I have looked at the latest revision and it has not addressed my main concern adequately, i.e. the positioning of the paper wrt the literature. It is argued that previous work on question answering over tables is not relevant to the paper, which is clearly not the case: even though SQL can express more complex queries than these, the dataset considered does exactly that by utilizing only a part of the expressivity of SQL. Given this, it doesn't acknowledge the fact that crowd sourced datasets already exist, which do not share tables in training/dev/test. In addition, it is argued that using a sketch (the well established notion of a grammar in previous work), doesn't hinder the generality of the approach while it clearly does, especially when comparing it against work agnostic to the task such as the one of Dong and Lapata (2016). Thus I keep my score unchanged.",
"This submission proposes a new seq2sel solution by adopting two new techniques, a sequence-to-set model and column attention mechanism. They show performance improve over existing studies on WikiSQL dataset.\n\nWhile the paper is written clearly, the contributions of the work heavily depends on the WikiSQL dataset. It is not sure if the approach is generally applicable to other sequence-to-sql workloads. Detailed comments are listed below:\n\n1. WikiSQL dataset contains only a small class of SQL queries, with aggregation over single table and various filtering conditions. It does not involve any complex operator in relational database system, e.g., join and groupby. Due to its simple structure, the problem of sequence-to-sql translation over WikiSQL is actually simplified as a parameter selection problem for a fixed template. This greatly limits the generalization of approaches only applicable to WikiSQL. The authors are encouraged to explore other datasets available in the literature.\n\n2. The \"order-matters\" motivation is not very convincing. It is straightforward to employ a global ordering approach to rank the columns and filtering conditions based on certain rules, e.g., alphabetical order. That could ensure the orders in the SQL results are always consistent.\n\n3. The experiments do not fully verify how the approaches bring performance improvements. In the current version, the authors only report superficial accuracy results on final outcomes, without any deep investigation into why and how their approach works. For instance, they could verify how much accuracy improvement is due to the insensitivity to order in filtering expressions.\n\n4. They do not compare against state-of-the-art solution on column and expression selection. While their attention mechanism over the columns could bring performance improvement, they should have included experiments over existing solutions designed for similar purpose. In (Yin, et al., IJCAI 2016), for example, representations over the columns are learned to generate better column selection.\n\nAs a conclusion, I find the submission contains certain interesting ideas but lacks serious research investigations. The quality of the paper could be much enhanced, if the authors deepen their studies on this direction.",
"The authors present a neural architecture for the WikiSQL task. The approach can be largely seen as graphical model tailored towards the constrained definition of SQL queries in WikiSQL. The model makes strong independence-assumptions, and only includes interactions between structures where necessary, which reduces the model complexity while alleviating the \"order matters\" problem. An attention mechanism over the columns is used to model the interaction between columns and the op or value in a soft differentiable manner. The results show impressive gains over the baseline, despite using a much simpler model. I appreciated the breakdown of accuracy over the various subtasks, which provides insights into where the challenges lie.",
"This paper proposes a neural network-based approach to converting natural language questions to SQL queries. The idea is to use a small grammar to facilitate the process, together making some independence assumptions. It is evaluated on a recently introduced dataset for natural language to SQL.\n\nPros:\n- good problem, NL2SQL is an important task given how dominant SQL is\n- incorporating a grammar (\"sketch\") is a sensible improvement.\n\nCons:\n- The dataset used makes very strong simplification assumptions. Not problem per se, but it is not the most challenging SQL dataset. The ATIS corpus is NL2SQL and much more challenging and realistic:\nDeborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: the ATIS-3 corpus. In Proceedings of the workshop on Human Language Technology (HLT '94). Association for Computational Linguistics, Stroudsburg, PA, USA, 43-48. DOI: https://doi.org/10.3115/1075812.1075823\n\n- In particular, the assumption that every token in the SQL statement is either an SQL keyword or appears in the natural language statement is rather atypical and unrealistic.\n\n- The use of a grammar in the context of semantic parsing is not novel; see this tutorial for many pointers:\nhttp://yoavartzi.com/tutorial/\n\n- As far as I can tell, the set prediction is essentially predicted each element independently, without taking into account any dependencies. Nothing wrong, but also nothing novel, that is what most semantic parsing/semantic role labeling baseline approaches do. The lack of ordering among the edges, doesn't mean they are independent.\n\n- Given the rather constrained type of questions and SQL statements, it would make sense to compare it against approaches for question answering over knowledge-bases:\nhttps://github.com/scottyih/Slides/blob/master/QA%20Tutorial.pdf\nWhile SQL can express much more complex queries, the ones supported by the grammar here are not very different.\n\n- Pasupat and Liang (2015) also split the data to make sure different tables appear only in training, dev, test and they developed their dataset using crowd sourcing.\n\n- The comparison against Dong and Lapata (2016) is not fair because their model is agnostic and thus applicable to 4 datasets while the one presented here is tailored to the dataset due the grammar/sketch used. Also, suggesting that previous methods might not generalize well sounds odd given that the method proposed seems to use much larger datasets.\n\n- Not sure I agree that mixing the same tables across training/dev/test is more realistic. If anything, it assumes more training data and manual annotation every time a new table is added.",
"We apologize for misunderstanding the comments. We are aware of the WikiTableQuestion task as well. However, it is a question-answering task, rather than a query-generation task. We have explained in previous responses why we prefer query-generation over touching the data directly. This is also why we thought the overnight dataset was referred to before, since overnight is a query-generation dataset.\n",
"Fourth, we acknowledge of the QA task studied in Sun et al 2016, and they are very important. However, as we mentioned, their approaches need to touch the data to propose candidate answers first and then rank the answers directly. However, the applications that we care most are in the enterprise setting dealing with user’s private data. One such scenario is described in the Uber study (Johnson et al. 2018). Therefore, we focus on query generation rather than question answering.\n\nHaving said this, we acknowledge the concerns from the reviewer that such an approach will have the limitation that the values must appear as a substring of the description, and we have stated it clearly in Section 2. However, we argue that touching the data directly is not ideal due to the privacy concern. In our agenda, we plan to study the propose-repair scheme to mitigate this issue. That is, we can generate a query based on the description, and query the database, and revise the query. Such an approach is similar to the PL-approach such as SQLizer. Note that in this process, querying the database may still leak private information. However, we can apply the approach proposed by Johnson et al. 2018 to automatically convert a SQL query into a differentially private one so that the information leakage can be mitigated. Such a paradigm is much better than QA directly with respect to the privacy concern.\n\nWith respect to the scalability issue, our target application scenarios may contain billions of records in a database (e.g., the Uber study in Johnson et al. 2018). A model handling such large-scale datasets typically special designs. In a SQL query synthesis setting, we do not think such a burden is necessary, and thus we prefer synthesizing SQL queries directly from description and schema, without touching the data. We want to also emphasize that WikiSQL does not reflect the scale of the target applications in our mind, but its problem setup automatically makes the database scale not an issue since, during query generation, the data in the table is not touched.\n",
"Thanks a lot for the further feedback! We have uploaded a further revision to reflect our responses. More feedbacks are welcome, and we would be happy to address as many of them as possible before the end of the discussion period.\n\nFirst, we have changed the example to the following one:\nQuestion: Which country is Jim Les from?\nSQL query: SELECT Nationality WHERE Player = Jim Les\n\nIn this example, the column name “nationality” is not a simple rename of any utterance in the description. There have been several others, for example:\nQuestion: What is the green house made of?\nSQL query: SELECT Composition WHERE Colours = green\n\nThese examples should justify our argument that the column names are not a substring of the original statement. Note, our model can correctly process these examples.\n\nSecond, “grammar based semantic parsing seems to be also important enough previous work to be acknowledged and cited.” We cite a few more papers:\n\nMaxim Rabinovich, Mitchell Stern, Dan Klein, Abstract Syntax Networks for Code Generation and Semantic Parsing, EMNLP 2017\nParisotto, Emilio, et al. Neuro-symbolic program synthesis, ICLR 2017\n\nWe think these are the most relevant approaches, which are quite similar to Seq2SQL baseline. In fact, we can view Seq2SQL as a modification of these works by incorporating the pointer network for value prediction. We are happy to cite more papers that the reviewer would suggest.\n\nThird, thanks for the pointer to the 2006 paper! However, we would like to argue that the SCFG used in (Wong 2006) is quite different than the sketch (or CFG) used in our work, and constructing an SCFG requires much more efforts. \n\nIn fact, an SCFG production rule is of the form NT -> (A, B), where A and B indicate rules in the source and target language respectively. In the semantic parsing setting, A is the natural language and B is SQL. This means that such a rule is essentially a translation rule to say that every natural sentence A should be translated into B. Therefore, constructing an SCFG is essentially not only constructing the grammar B, but also constructing a formal grammar for natural language A and the translation rule from A to B, which is highly non-trivial.\n\nIn our sketch-based approach, we only need the grammar B, but NEITHER the grammar for A NOR the translation rule from A to B. Further, grammar B is typically already available, since B is a programming language, i.e., SQL. Therefore, to apply our approach, we only need to construct a dependency graph for a SQL sub-grammar, which requires not many efforts. This makes our approach much more practical than SCFG-based approaches such as Wong (2006). We want to note that constructing an SCFG, which requires constructing a set of translation rules and a manually designed alignment approach (as in Wong 2006), is a non-trivial work, and we find it hard to justify why such an approach serves a more reasonable baseline than Zhong et al 2017.\n\nHaving said this, we acknowledge the concerns from Reviewer A that more baselines should be compared. We will examine Seq2seq, seq2tree, and abstract syntax network, which are likely the state-of-the-arts on parsing and semantic parsing. The time may not be sufficient before the rebuttal period, but we guarantee that our best results of these approaches will be provided in the final version. Based on our existing experience, these approaches are unlikely to outperform Zhong et al 2017.\n",
"- \"For the overnight dataset (Pasupat and Liang 2015), it is not true that the schemas from train/dev/test are non-overlapping. We quote the statement from (Pasupat and Liang 2015):\"\n\nI think there is a misunderstanding. The overnight dataset was poposed in this work:\nhttps://nlp.stanford.edu/pubs/wang-berant-liang-acl2015.pdf\n(Wang, Berant and Liang, ACL 2015)\nThe Pasupat and Liang (ACL 2015 too) is this one:\nhttps://cs.stanford.edu/~ppasupat/resource/ACL2015-paper.pdf\nIn this one, we read:\n\"The final dataset contains 22,033 examples on 2,108 tables. We set aside 20% of the tables and\ntheir associated questions as the test set and develop on the remaining examples\"\nThus the tables (and their schemas) in the test set do not appear in the training/dev set.\n\n",
"I appreciate the thoughtful response by the authors. Here are some comments/feedback:\n\n- WikiSQL/ATIS-3: I appreciate that WikiSQL is not your contribution. However it is your choice to work on this task and not others, and thus the choice and conclusions drawn from the experiments need to be argued with precision. In the version of the paper submitted for reviewing, it read that WikiSQL was considered more challenging than others previously considered. Now this statement has been revised appropriately. Furthermore, the fact that the methods developed in this paper require larger training datasets than those applied to ATIS is rather a disdavantage in the context of semantic parsing research. Finally it should be stated in the paper explicitly that the WikiSQL is not only a larger dataset but also a narrower task, as stated in the response.\n\n- \"The reviewer mentioned: “In particular, the assumption that every token in the SQL statement is either an SQL keyword or appears in the natural language statement is rather atypical and unrealistic.” We want to emphasize that this is NOT true.\"\n\nIn both the original and the revised version it reads: \"Second, any token in the output SQL query\nis either a SQL keyword or a sub-string of the natural language question.\" I assumed that column names are included in the tokens of the SQL statement. In the example of figure 1, it seems like the only token from the SQL statement that doesn't appear in the NL question is \"no.\", but that's only if we assume the system doesn't know that \"number\" mean \"no.\", which fairly trivial to learn given the large-scale training dataset. GeoQuery has the same kind of challenge and it was less than 1000 instances, but nevertheless performances reached 80% accuracy for Enlgish using just phrase-based machine translation, see: https://people.eecs.berkeley.edu/~jda/papers/avc_smt_semparse.pdf. Could you provide a more challenging example from WikiSQL to help clarify the challenge posed by it?\n\n- \"We agree with the reviewers, and we also didn’t claim using a grammar is our contribution.\": Indeed, but neither in the original paper nor in the revised you give credit to any previous work. You don't claim you invented seq2seq either, but you give credit to those who proposed it (as you should). Given that you mention \"sketch\" 39 times in your paper, grammar based semantic parsing seems to be also important enough previous work to be acknowledged and cited.\n\n- \"Clearly, predicting the value in one constraint in the WHERE clause depends on the column selected in a previous step. Also, it is mentioned that “nothing novel” and “this is most … baselines do”. We would highly appreciate it if the reviewer could provide some references. To the best of our knowledge, some typical baseline approaches such as Seq2tree has been demonstrated ineffective on this Wikisql dataset in Zhong et al. 2017.\"\n\nHere is a paper from 2006 that employs a grammar (a CFG in particular) that given one rule generating two intermediate nodes, each of them is explanded independently:\nhttp://www.mt-archive.info/HLT-NAACL-2006-Wong.pdf\nIf what you do is different, it would be good to compare against this paper, as well as others beyond Zhong et al. (2017).\n\n- \"First, since QA has been studied over decades and many QA tasks have been proposed (and mentioned in the slides), we would appreciate it if the reviewer can point out the particular one that is relevant. 
\"\n\nHere is one that seems to be handling the same kind of questions:\nhttp://cs.ucsb.edu/~ysu/papers/www16_table.pdf\nThey cite a fair amount of previous work that is also worth considering.\n\n- \"We have argued in our paper why we do not prefer such a problem: the KB can be too huge or contain privacy-sensitive information, and thus generating the query without touching the data itself is an important factor for practical usages.\"\n\nThe paper I cite above handles KBs with millions of tables, thus it definitely scales to WikiSQL size databases. I would argue that being able to handle large databases is desirable, and hence an advantage for the methods that can do it. Secondly, creating the query without \"touching the data itself\" is only possible when the NL question contains the values, an assumption made in the WikiSQL dataset but not necessarily realistic. When it doesn't hold, some form of (named) entity linking is necessary. I appreciate the privacy concerns and it would be worth stating precisely which aspects of the previous work such as the one mentioned above violates them, since they have different variants of their system utilizing different aspects of the data.",
"We thank the reviewers for the valuable comments. We would like to clarify some clear misunderstandings and highlight the differences in our revisions.\n\nWe agree with the reviewer that this work focus on the WikiSQL dataset. This is because this is the only largest scale dataset that is close to a practical application scenario to the best of our knowledge. In our revision, we cite one recent case study on 8.1 million real-world SQL queries written by uber data analysts (Johnson 2017). They show that almost 40% of all these queries (1) involve only table; and (2) each WHERE constraint involves only one column. They do not contain join at all. This is exactly the same as the queries proposed in WikiSQL. On the other hand, this problem is not trivial, since we can see even our new state-of-the-art’s performance is less than 70%. By showing these two points, we believe we are dealing with a meaningful problem which is not trivial to tackle. \n\nTo all other datasets, such as atis-3 mentioned by Reviewer A or the dataset used in (Yin, et al., IJCAI 2016), they suffer one or more problems discussed in Section 2 which render them not practical, and thus not an ideal target of our study.\n\nWe include one more section in our evaluation to document the order-matters phenomenon. In particular, we want to emphasize that WikiSQL is already employing a global ordering, but the columns may appear in the natural language statement in an arbitrary order. For example, we include the following example in our revision:\nNL: What are the seasons when twente came in third place and ajax was the winner?\nSQL: SELECT season WHERE winner = ajax AND third place = twente\n\nWe can observe that the statement “twente came in third place” and “ajax was the winner” appear in the reverse order of the global order of the two columns “winner” and “third place”. This is the ``order-matters” issue we discuss. However, this issue cannot be solved by changing the global order of the two columns, since the human users should also be allowed to state:\n“What are the seasons when ajax was the winner and twente came in third place?”\nNo matter what global order is used, one of these two statements will cause the ``order-matters” issue. As far as we can see, the only way to mitigate this issue is to restrict human users to state their goals following the global order. Again, doing so will render the dataset artificial and not practical.\n\nWe are very confused about the reviewer’s comment “The experiments do not fully verify how the approaches bring performance improvements”. In fact, the entire Section 4.3 is devoted to an ablation study to show the improvements brought by each component. In particular, the SQLNet (seq2set) shows the accuracy improvement due to the insensitivity to order. In our revision, we add Sec 4.4 to provide one more section to even further understand the effectiveness due to “order-matters” issue.\n\nSeq2SQL is the state-of-the-art on the WikiSQL dataset, and we have compared against it. Yin et al 2016 is not suitable for the WikiSQL task since their approach needs to take the data in the table as a part of the input. We have argued that this is not a scalable approach and may also have privacy issue. We have discussed this in Section 2.\n\nWe hope the reviewer can clarify some of the earlier comments with respect to our clarification. We are also welcome more comments.\n",
"We thank the reviewer’s comment. We have updated the paper to address some comments raised in all reviews. We have posted a separate comment for a highlight overview of revision, and updated the paper. Please take a look and see if there are any comments that we should address further. More feedbacks are welcome!",
"We appreciate reviewers’ valuable comments, and we have improved our paper to address some of the concerns. We find that most comments on the novelty are to some points that we do not claim as our contribution (e.g., the WikiSQL task itself is not our contribution at all). We clarify some of such confusions below, and hope the reviewers can provide more feedback to help us to improve our paper.\n\nFirst, the reviewer mentioned ATIS-3. We agree with the reviewer that atis-3 is much more challenging than WikiSQL. But we want to emphasize that we choose the problem not simply based on its difficulty, but also based on its practical impact. In our revision, we cite one recent case study on 8.1 million real-world SQL queries written by uber data analysts (Johnson 2017). They show that almost 40% of all these queries (1) involve only one table; and (2) each WHERE constraint involves only one column. This is exactly the same as the queries proposed in WikiSQL. On the other hand, this problem is not trivial, since we can see even our new state-of-the-art’s performance is less than 70%. By showing these two points, although we are not solving a challenging problem as 'NP vs P', we believe we are dealing with a meaningful problem which is not trivial to tackle.\n\nOn the other hand, although atis-3 is more challenging, we observe that its dataset is small by the deep neural network standard. This is one additional reason why we prefer WikiSQL.\n\nThe reviewer mentioned: “In particular, the assumption that every token in the SQL statement is either an SQL keyword or appears in the natural language statement is rather atypical and unrealistic.” We want to emphasize that this is NOT true. We only assume the value in the query must appear in the description to make the problem amenable, but we do not assume the column names appear in the description. \n\nFor the constraints on the values, we agree that further efforts need to devote to making a better dataset. But we do not see the problem is overly simplified as discussed above.\n\nNext, the reviewer mentioned: “The use of a grammar in the context of semantic parsing is not novel”. We agree with the reviewers, and we also didn’t claim using a grammar is our contribution. At the end of the introduction, we highlight the three contributions of this work: (1) seq2set; (2) column attention; (3) achieving the state-of-the-art on WikiSQL.\n\nWe do not follow very clearly about the comments “the set prediction is essentially predicted each element independently, without taking into account any dependencies”. Clearly, predicting the value in one constraint in the WHERE clause depends on the column selected in a previous step. Also, it is mentioned that “nothing novel” and “this is most … baselines do”. We would highly appreciate it if the reviewer could provide some references. To the best of our knowledge, some typical baseline approaches such as Seq2tree has been demonstrated ineffective on this Wikisql dataset in Zhong et al. 2017.\n\nThe reviewer mentioned QA tasks. First, since QA has been studied over decades and many QA tasks have been proposed (and mentioned in the slides), we would appreciate it if the reviewer can point out the particular one that is relevant. Second, in our understanding, most existing works on KB-based QA will take the entire KB as an input to answer a question. 
We have argued in our paper why we do not prefer such a problem: the KB can be too huge or contain privacy-sensitive information, and thus generating the query without touching the data itself is an important factor for practical usages. Again, WikiSQL task is more suitable to such a requirement than previously proposed tasks involving data itself.\n\nFor the overnight dataset (Pasupat and Liang 2015), it is not true that the schemas from train/dev/test are non-overlapping. We quote the statement from (Pasupat and Liang 2015):\n“For each domain, we held out a random 20% of the examples as the test set, and performed development on the remaining 80%, further splitting it to a training and development set (80%/20%). We created a database for each domain by randomly generating facts using entities and properties in the domain (with type-checking).”\nHere, each domain is one schema. Also, the novelty of WikiSQL over Overnight is not the problem that we want to address in our paper.\n\nWe are not comparing against Seq2tree (Dong et al 2016), which was originally compared in Zhong et al 2017. We only compare to the Seq2SQL which is the state-of-the-art on the WikiSQL dataset. Again, as we discussed above, our work is focusing on the WikiSQL dataset itself, since we believe that is an important, though somehow narrow, task.\n\nJohnson et al, Practical differential privacy for SQL queries using elastic sensitivity. to appear in VLDB 2017.\n",
"We have improved our paper with the following revision:\nWe have added more discussions in Section 2 to explain why WikiSQL is a more meaningful and challenging task that is more practical than previous datasets, as part of our explanation why our work, dealing with WikiSQL, is a meaningful contribution.\nWe have added a separate subsection (Section 4.4) to document our study on the 'order-matters' issue, and we also explain why this is not a specific issue in WikiSQL but a general issue that may be encountered in all other tasks. We also provide more detailed analysis to show how our seq2set technique helps to mitigate this issue.\n"
] | [
-1,
4,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"ByNjir0Mf",
"iclr_2018_SkYibHlRb",
"iclr_2018_SkYibHlRb",
"iclr_2018_SkYibHlRb",
"ryjOuKfQG",
"HyrNZJ4mM",
"B1SI_FfQz",
"B1SI_FfQz",
"BkOfprAzG",
"B1y7_3YgM",
"HksQE4cez",
"HkTzAHqxf",
"iclr_2018_SkYibHlRb"
] |
iclr_2018_r1nmx5l0W | SIC-GAN: A Self-Improving Collaborative GAN for Decoding Sketch RNNs | Variational RNNs are proposed to output “creative” sequences. Ideally, a collection of sequences produced by a variational RNN should be of both high quality and high variety. However, existing decoders for variational RNNs suffer from a trade-off between quality and variety. In this paper, we seek to learn a variational RNN that decodes high-quality and high-variety sequences. We propose the Self-Improving Collaborative GAN (SIC-GAN), where there are two generators (variational RNNs) collaborating with each other to output a sequence and aiming to trick the discriminator into believing the sequence is of good quality. By deliberately weakening one generator, we can make another stronger in balancing quality and variety. We conduct experiments using the QuickDraw dataset and the results demonstrate the effectiveness of SIC-GAN empirically. | rejected-papers | Pros and cons of the paper can be summarized as follows:
Pros:
* The underlying idea may be interesting
* Results are reasonably strong on the test set used
Cons:
* Testing on only a single dataset indicates that the model may be of limited applicability
* As noted by reviewer 2, core parts of the paper are extremely difficult to understand, and the author response did little to assuage these concerns
* There is little mathematical notation, which compounds the problems of clarity
After reading the method section of the paper, I agree with reviewer 2: there are serious clarity issues here. As a result, I cannot recommend that this paper be accepted to ICLR in its current form. I would suggest the authors define their method precisely in mathematical notation in a future submission. | train | [
"SJK75ZFef",
"Skie1qFxM",
"ByfDSjtlf",
"H1dfBd6Xz",
"Sk1SSO6Xz",
"rJtkrOaQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper proposed a method that tries to generate both accurate and diverse samples from RNNs. \nI like the basic intuition of this paper, i.e., using mistakes for creativity and refining on top of it. I also think the evaluation is done properly. I think my biggest concern is that the method was only tested on a single dataset hence it is not convincing enough. Also on this particular dataset, the method does not seem to strongly dominate the other methods. Hence it's not clear how much better this method is compared to previously proposed ones.",
"This paper baffles me. It appears to be a stochastic RNN with skip connections (so it's conditioned on the last two states rather than last one) trained by an adversarial objective (which is no small feat to make work for sequential tasks) with results shown on the firetruck category of the QuickDraw dataset. Yet the authors claim significantly more importance for the work than I think it merits.\n\nFirst, there is nothing variational about their variational RNN. They seem to use the term to be equivalent to \"stochastic\", \"probabilistic\" or \"noisy\" rather than having anything to do with optimizing a variational bound. To strike the right balance between pretension and accuracy, I would suggest substituting the word \"stochastic\" everywhere \"variational\" is used.\n\nSecond, there is nothing self-improving or collaborative about their self-improving collaborative GAN. Once the architecture is chosen to share the weights between the weak and strong generator, the only difference between the two is that the weak generator has greater noise at the output. In this sense the architecture should really be seen as a single model with different noise levels at alternating steps. In this sense, I am not entirely clear on what the difference is between the SIC-GAN and their noisy GAN baseline - presumably the only difference is that the noisy GAN is conditioned on a single timestep instead of two at a time? The claim that these models are somehow \"self-improving\" baffles me as well - all machine learning models are self-improving, that is the point of learning. The authors make a comparison to AlphaGo Zero's use of self-play, but here the weak and strong generators are on the same side of the game, and because there are no game rules provided beyond \"reproduce the training set\", there is no possibility of discovery beyond what is human-provided, contrary to the authors' claim.\n\nThird, the total absence of mathematical notation made it hard in places to follow exactly what the models were doing. While there are plenty of papers explaining the GAN framework to a novice, at least some clear description of the baseline architectures would be appreciated (for instance, a clearer explanation of how the SIC-GAN differs from the noisy GAN). Also the description of the soft $\\ell_1$ loss (which the authors call the \"1-loss\" for some reason) would benefit from a clearer mathematical exposition.\n\nFourth, the experiments seem too focused on the firetruck category of the QuickDraw dataset. As it was the only example shown, it's difficult to evaluate their claim that this is a general method for improving variety without sacrificing quality. Their chosen metrics for variety and detail are somewhat subjective, as they depend on the fact that some categories in the QuickDraw dataset resemble firetrucks in the fine detail while others resemble firetrucks in outline. This is not a generalizable metric. Human evaluation of the relative quality and variety would likely suffice.\n\nLastly, the entire section on the strong-weak collaborative GAN seems to add nothing. 
They describe an entire training regiment for the model, yet never provide any actual experimental results using that model, so the entire section seems only to motivate the SIC-GAN which, again, seems like a fairly ordinary architectural extension to GANs with RNN generators.\n\nThe results presented on QuickDraw do seem nice, and to the best of my knowledge it is the first (or at least best) applications of GANs to QuickDraw - if they refocused the paper on GAN architectures for sketching and provided more generalizable metrics of quality and variety it could be made into a good paper.",
"Overall the paper is good: good motivation, insight, the model makes sense, and the experiments / results are convincing. I would like to see some evidence though that the strong generator is doing exactly what is advertised: that it’s learning to clean up the mistakes from variation. Can we have some sort of empirical analysis that what you say is true? \n\nThe writing grammar quality fluctuates. Please clean up.\n\nDetailed notes\nP1:\nWhy did you pass on calling it Self-improving collaborative adversarial learning (SICAL)?\nI’m very surprised you don’t mention VAE RNN here (Chung et al 2015) along with other models that leverage an approximate posterior model of some sort.\n\nP2:\nWhat about scheduled sampling?\nIs the quality really better? How do you quantify that? To me the ones at the bottom of 2(c) are of both lower quality *and* diversity.\n“Figure 2(d) displays human-drawn sketches of fire trucks which demonstrate that producing sequences–in this case sketches–with both quality and variety is definitely achievable in real-world applications”: I’m not sure I follow this argument. Because people can do it, ML should be able to?\n\nP3:\n“Recently, studies start to apply GANs to generate the sequential output”: fix this\nGrammar takes a brief nose-dive around here, making it a little harder to read.\nCaption: “bean search”\nChe et al also uses something close to Reinforcement learning for discrete sequences.\n“nose-injected”: now you’re just being silly\nMaybe cite Bahdanau et al 2016 “An actor-critic algorithm for sequence prediction”\n“does not require any variety reward/measure to train” What about the discriminator score (MaliGAN / SeqGAN)? Could this be a simultaneous variety + quality reward signal? If the generator is either of poor-quality or has low variety, the discriminator could easily distinguish its samples from the real ones, no?\n\nP6:\nDid you pass only the softmax values to the discriminator?\n\nP7:\nI like the score scheme introduced here. Do you see any connection to inception score?\nSo compared to normal GAN, does SIC-GAN have more parameters (due to the additional input)? If so, did you account for this in your experiments?",
"We thank the reviewer for constructive comments. Following is our reply:\n\nQ: Once the architecture is chosen to share the weights between the weak and strong generator, ...it appears to be a stochastic RNN with skip connections (so it's conditioned on the last two states rather than last one) trained by an adversarial objective...\nA: We are sorry for not describing the “tying” precisely. It is done in a soft manner; that is, we add a loss term for the weak generator that require its parameters to be similar to those of the strong generator. Please see Section 4 for more details. Actually, the extra input taken by the strong generator is not necessary and are not implemented. We just described it for the cases when the hyperparameter of the term is high. We have remove the irrelevant sentences to avoid confusion.\n\nQ: I am not entirely clear on what the difference is between the SIC-GAN and their noisy GAN baseline...\nA: The noisy GAN just weakens the ordinary RNN generator of the naive GAN to achieve the “covering” effect similar to that of the strong-weak collaborative GAN—if a point is made bad, the RNN may learn to generate better points at later time steps in order to fool the discriminator. However, the “next points” in the weakened RNN are made bad too (since the entire RNN is weakened) and may not be able to actually cover the previous point. To fool the discriminator in such a situation, the RNN may instead learn to output points that, after being weakened, are more easily “covered” by the future (bad) points. In effect, this makes the RNN conservative to generating novel sequence and reduces variety. We call this the covering-or-covered paradox. On the other hand, once trained, the strong generator in the strong-weak collaborative GAN is used to generate an entire sequence. This means that the strong generator should have enough based temperature (or noise level) to ensure the variety. One naive way to do so is to add a base-temperature to both the strong and weak generators during the training time. However, the strong generator faces the covering-or-covered paradox now and may learn to be conservative. We can instead train the strong-weak collaborative GAN multiple times using a self-improving technique. We start by adding a low base-temperature to both the strong and weak generators and train them in the first phase. Then we set the weak generator in the next phase as the strong one we get from the previous phase and train the generators with increased base-temperature. We then repeat this process until the target base-temperature is reached. We call the process “self-improving” because the strong generator in the next phase learns to cover itself in the previous phase. It is important to note that in a later phase, the weak generator is capable of covering the negative effect due to the variety of the strong generator (because that weak generator is a strong generator in the previous phase). So, the strong generator in the current phase can focus on the “covering” rather than “covered,” preventing the final RNN from being conservative. \n\nQ: ...because there are no game rules provided beyond “reproduce the training set,” there is no possibility of discovery beyond what is human-provided, contrary to the authors' claim.\nA: The generator can exploit up to the generalizability of the generator.",
"We thank the reviewer for constructive comments. \n\nQ: I think my biggest concern is that the method was only tested on a single dataset...\nA: Thanks. Following the suggestion of reviewer 2, we have changed the paper title to Sketch RNN so we believe this is no longer a concern.",
"We thank the reviewer for the positive comments. We have fixed the typos and grammar issues in the new version and cited more relevant work including the Bahdanau et al. 2016 “An actor-critic algorithm for sequence prediction” and the VAE RNN by Chung et al. 2015. Following is our reply to your specific comments: \n\nQ: Did you pass only the softmax values to the discriminator? \nA: No, we pass the mean and variance of each point generated by the Sketch RNN to the discriminator. \n\nQ: So compared to normal GAN, does SIC-GAN have more parameters (due to the additional input)?\nA: No, the parameter numbers of the unfolded Noisy GAN and SIG-GAN are roughly the same.\n\nQ: “Figure 2(d) displays human-drawn sketches of fire trucks which demonstrate that producing sequences–in this case sketches–with both quality and variety is definitely achievable in real-world applications”: I’m not sure I follow this argument. Because people can do it, ML should be able to?\nA: You are right. Here we just want to emphasize that the quality and variety is both achievable “by humans.” We have corrected the sentence in the paper. \n\nQ: Discriminator in MaliGAN/SeqGAN a simultaneous variety + quality reward signal?\nA: Yes it is. The MaliGAN/SeqGAN is proposed for the RNNs with the discrete output. While we use the continuous Sketch-RNN to demonstrate the Strong-Weak Collaborative GAN and SIC-GAN, the ideas could be readily applied to discrete cases. This is our future work."
] | [
5,
4,
7,
-1,
-1,
-1
] | [
3,
5,
3,
-1,
-1,
-1
] | [
"iclr_2018_r1nmx5l0W",
"iclr_2018_r1nmx5l0W",
"iclr_2018_r1nmx5l0W",
"Skie1qFxM",
"SJK75ZFef",
"ByfDSjtlf"
] |
iclr_2018_BkUDW_lCb | Pointing Out SQL Queries From Text | The digitization of data has resulted in making datasets available to millions of users in the form of relational databases and spreadsheet tables. However, a majority of these users come from diverse backgrounds and lack the programming expertise to query and analyze such tables. We present a system that allows for querying data tables using natural language questions, where the system translates the question into an executable SQL query. We use a deep sequence to sequence model in which the decoder uses a simple type system of SQL expressions to structure the output prediction. Based on the type, the decoder either copies an output token from the input question using an attention-based copying mechanism or generates it from a fixed vocabulary. We also introduce a value-based loss function that transforms a distribution over locations to copy from into a distribution over the set of input tokens to improve training of our model. We evaluate our model on the recently released WikiSQL dataset and show that our model trained using only supervised learning significantly outperforms the current state-of-the-art Seq2SQL model that uses reinforcement learning. | rejected-papers | The pros and cons of the paper can be summarized below:
Pro:
* The improvements afforded by the method are significant over baselines, although these are very preliminary baselines on a new dataset.
Con:
* There is already a significant amount of work in using grammars to guide semantic parsing or code generation, as rightfully noted by the authors, and thus the approach in the paper is not extremely novel.
* Because there is no empirical comparison with these methods, the relative utility of the proposed method is not clear.
As a result, I recommend that the paper not be accepted at this time. | train | [
"S1IbWw_gM",
"rk5LF4OeM",
"BkYlT4ieG",
"S1VLPVn7f",
"rJhlPN27G",
"SJb6IEnQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper claims to develop a novel method to map natural language queries to SQL. They claim to have the following contributions: \n\n1. Using a grammar to guide decoding \n2. Using a new loss function for pointer / copy mechanism. For each output token, they aggregate scores for all positions that the output token can be copied from.\n\nI am confident that point 1 has been used in several previous works. Although point 2 seems novel, I am not convinced that it is significant enough for ICLR. I was also not sure why there is a need to copy items from the input question, since all SQL query nouns will be present in the SQL table in some form. What will happen if we restrict the copy mechanism to only copy from SQL table.\n\nThe references need work. There are repeated entries for the same reference (one form arxiv and one from conference). Please cite the conference version if one is available, many arxiv references have conference versions.\n\nRebuttal Response: I am still not confident about the significance of contribution 1, so keeping the score the same.",
"This paper proposes a model for solving the WikiSQL dataset that was released recently.\n\nThe main issues with the paper is that its contributions are not new.\n\n* The first claimed contribution is to use typing at decoding time (they don't say why but this helps search and learning). Restricting the type of the decoded tokens based on the programming language has already been done by the Neural Symbolic Machines of Liang et al. 2017. Then Krishnamurthy et al. expanded that in EMNLP 2017 and used typing in a grammar at decoding time. I don't really see why the authors say their approach is simpler, it is only simpler because the sub-language of sql used in wikisql makes doing this in an encoder-decoder framework very simple, but in general sql is not regular. Of course even for CFG this is possible using post-fix notation or fixed-arity pre-fix notation of the language as has been done by Guu et al. 2017 for the SCONE dataset, and more recently for CNLVR by Goldman et al., 2017.\n\nSo at least 4 papers have done that in the last year on 4 different datasets, and it is now close to being common practice so I don't really see this as a contribution.\n\n* The authors explain that they use a novel loss function that is better than an RL based function used by Zhong et al., 2017. If I understand correctly they did not implement Zhong et al. only compared to their numbers which is a problem because it is hard to judge the role of optimization in the results.\n\nMoreover, it seems that the problem they are trying to address is standard - they would like to use cross-entropy loss when there are multiple tokens that could be gold. the standard solution to this is to just have uniform distribution over all gold tokens and minimize the cross-entropy between the predicted distribution and the gold distribution which is uniform over all tokens. The authors re-invent this and find it works better than randomly choosing a gold token or taking the max. But again, this is something that has been done already in the context of pointer networks and other work like See et al. 2017 for summarization and Jia et al., 2016 for semantic parsing.\n\n* As for the good results - the data is new, so it is probable that numbers are not very fine-tuned yet so it is hard to say what is important and what not for final performance. In general I tend to agree that using RL for this task is probably unnecessary when you have the full program as supervision.",
"This paper presents a neural architecture for converting natural language queries to SQL statements. The model utilizes a simple typed decoder that chooses to copy either from the question / table or generate a word from a predefined SQL vocabulary. The authors try different methods of aggregating attention for the decoder copy mechanism and find that summing token probabilities works significantly better than alternatives; this result could be useful beyond just Seq2SQL models (e.g., for summarization). Experiments on the WikiSQL dataset demonstrate state-of-the-art results, and detailed ablations measure the impact of each component of the model. Overall, even though the architecture is not very novel, the paper is well-written and the results are strong; as such, I'd recommend the paper for acceptance.\n\nSome questions:\n- How can the proposed approach scale to more complex queries (i.e., those not found in WikiSQL)? Could the output grammar be extended to support joins, for instance? As the grammar grows more complex, the typed decoder may start to lose its effectiveness. Some discussion of these issues would be helpful.\n- How does the additional preprocessing done by the authors affect the performance of the original baseline system of Zhong et al.? In general, some discussion of the differences in preprocessing between this work and Zhong et al. would be good (do they also use column annotation)?",
"We thank the reviewer for the questions about the novelty of our contributions. \n\nQ. First claimed contribution of using typing at decoding time is not novel...previous works have used grammar (or CFG) at decoding time?\n\nWe would like to emphasize that using a type system is quite different from using grammars. A grammar typically only describes the set of valid syntactic programs, whereas the type system can rule out certain classes of programs that are syntactically correct but violate certain type constraints. For example, consider the grammar for predicates of the form “columnName op constant”. Here, the grammar would allow program predicates such as Age < “USA”, whereas the type constraint can rule out that “<” operator can only be applied to integer values and the column’s type should also be integer. However, we agree that for this dataset the difference between type system and grammar is not that significant since the queries were generated using pre-defined templates.\n\nQ. Loss function is standard?\n\nOur value-based loss based on sum-transfer is not the same as using cross-entropy loss with uniform distribution over multiple gold tokens (pointers). Optimizing the loss function with uniform distribution over gold tokens (as done in Jia et al. 2016) would try to learn to predict similar probabilities over all the gold pointers, whereas instead our loss function first translates the pointers to their values and then sums up the values to learn a probability distribution over values as opposed to over indices.\n\nPlease let us know if there are more clarifications that might be helpful to explain the differences with the previous works.\n",
"Thanks for the helpful comments and feedback. \n\nQ. Using a grammar to guide decoding is not novel?\n\nWe would like to emphasize that using a type system is quite different from using grammars. A grammar describes the set of valid syntactic programs, whereas the type system can rule out certain classes of programs that are still syntactically correct. For example, consider the grammar for predicates of the form “columnName op constant”. Here, the grammar would allow program predicates such as Age < “USA”, whereas the type constraint can rule out that “<” operator can only be applied to integer values and the column’s type should also be integer. However, we agree that for this dataset the difference between type system and grammar is not that significant since the dataset contains only queries generated from a set of pre-defined templates.\n\nQ. Why copy from input question? Why not restrict copy mechanism to only SQL table?\n\nSince SQL tables can be quite large (and even for this dataset tables typically have tens of rows/columns), we only embed the column names of the tables and the input question for efficiency reasons. Besides, constants used in the query are commonly mentioned in questions raised by the user.\n",
"We thank the reviewer for the helpful feedback and comments.\n\nQ. How can the proposed approach scale for more complex queries such as join?\n\nFor more complex SQL structures (e.g., join, group by), we can not statically pre-determine the types of cells as there might be multiple number of non-determinisms in the query template. For example, consider a template for the join query such as “select col from T (join T)* where (pred)*”, where we have two sets of non-determinisms -- one for variable number of Table names T and another one for variable number of predicates. However, once we have resolved the earlier non-determinism during the decoding process, we can still use the type system to guide decoding for the later choices. For example, once `where` is decoded in the above template,the tokens afterwards would be templated into `column op val` tuples and decoder types can still be applied. \n\nQ. How is the preprocessing done different from Zhong et al.?\n\nWhile we didn’t directly run Zhong et al. baseline on the preprocessed dataset, we compare the difference of with and without using preprocessing (column annotation) in our best model in Section 4.3 (Figure 3c) -- the results show that preprocessing does improve our model performance, but the ablation test demonstrates that the major improvement of the model comes from the improved loss function. \n"
] | [
4,
3,
7,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_BkUDW_lCb",
"iclr_2018_BkUDW_lCb",
"iclr_2018_BkUDW_lCb",
"rk5LF4OeM",
"S1IbWw_gM",
"BkYlT4ieG"
] |
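The "sum-transfer" loss debated in the record above (aggregating copy probabilities over every input position that holds the gold value, rather than scoring a single gold pointer) can be made concrete with a small sketch. The tokens, probabilities, and helper name below are invented for illustration and do not come from the paper under review.

```python
import math

def sum_transfer_loss(position_probs, input_tokens, gold_token):
    """Negative log of the total probability mass on positions whose value equals the gold token."""
    mass = sum(p for p, tok in zip(position_probs, input_tokens) if tok == gold_token)
    return -math.log(max(mass, 1e-12))

# "games" appears both in the question and among the column headers, so two
# positions are equally valid copy sources for the same gold value.
tokens = ["how", "many", "games", "did", "seattle", "win", "<col>", "games"]
probs  = [0.02, 0.03, 0.40, 0.05, 0.05, 0.05, 0.05, 0.35]   # decoder copy distribution

print(round(sum_transfer_loss(probs, tokens, "games"), 3))  # -log(0.40 + 0.35) ≈ 0.288
```

Summing before taking the log is what distinguishes this from cross-entropy against a uniform distribution over gold pointers: the model is rewarded for putting its mass on any one of the valid positions rather than for spreading it evenly across all of them.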
iclr_2018_HyTrSegCb | Achieving morphological agreement with Concorde | Neural conversational models are widely used in applications like personal assistants and chat bots. These models seem to give better performance when operating on word level. However, for fusion languages like French, Russian and Polish vocabulary size sometimes become infeasible since most of the words have lots of word forms. We propose a neural network architecture for transforming normalized text into a grammatically correct one. Our model efficiently employs correspondence between normalized and target words and significantly outperforms character-level models while being 2x faster in training and 20\% faster at evaluation. We also propose a new pipeline for building conversational models: first generate a normalized answer and then transform it into a grammatically correct one using our network. The proposed pipeline gives better performance than character-level conversational models according to assessor testing. | rejected-papers | The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* Empirical results demonstrate decent improvements over other reasonable models
* The method is well engineered for the task
Cons:
* The paper is difficult to read due to grammar and formatting issues
* Experiments are also lacking detail and potentially difficult to reproduce
* Some of the experimental results are suspect in that the train and test accuracies are basically the same; usually we would expect training accuracy to be much higher in highly parameterized neural models
* The content is somewhat specialized to a particular task in NLP, and perhaps of less interest to the ICLR audience as a whole (although I realize that ICLR is attempting to cast a broad net so this alone is not a reason for rejection of the paper)
In addition to the Cons cited by the reviewers above, I would also note that there is some relevant work on morphology in sequence-to-sequence models, e.g.:
* "What do Neural Machine Translation Models Learn about Morphology?" Belinkov et al. ACL 2017.
and that it is common in sequence-to-sequence models to use sub-word units, which allows for better handling of morphological phenomena:
* "Neural Machine Translation of Rare Words with Subword Units" Sennrich et al. ACL 2016.
While the paper is not without merit, given that the cons seem to significantly outweigh the pros, I don't think that it is worthy of publication at ICLR at this time, although submission to a future conference (perhaps an NLP conference) seems warranted. | train | [
"r1dQH78gM",
"By3d5LqlM",
"H18dJfpxM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper is a pain to read. Most of the citation styles are off (i.e., without parentheses). Most of the sentences are not grammatically correct. Most, if not all, of the determiners are missing. It is ironic that the paper is proposing a model to generate grammatically correct sentences, while most of the sentences in the paper are not grammatically correct.\n\nThe experimental numbers look skeptical. For example, 1/3 of the training results are worse than the test results in Table 1. It also happens a few times in Table 5. Either the models are not properly trained, or the models are heavily tuned on the test set.\n\nThe running times in Table 9 are also skeptical. Why are the Concorde models faster than unigrams and bigrams? Maybe this can be attributed to the difference in the size of the vocabulary, but why is the unigram model slower than the bigram model?",
"In this work, the authors propose a sequence-to-sequence architecture that learns a mapping from a normalized sentence to a grammatically correct sentence. The proposed technique is a simple modification to the standard encoder-decoder paradigm which makes it more efficient and better suited to this task. The authors evaluate their technique using three morphologically rich languages French, Polish and Russian and obtain promising results.\n\nThe morphological agreement task would be an interesting contribution of the paper, with wider potential. But one concern that I have is regarding the evaluation metrics used for it. Firstly, word accuracy rate doesn't seem appropriate, as it does not measure morphological agreement. Secondly, sentence accuracy (w.r.t. the sentences from which the normalized sentences are derived) is not indicative of morphological agreement: even \"wrong\" sentences in the output could be perfectly valid in terms of agreement. A grammatical error rate (fraction of grammatically wrong sentences produced) would probably be a better measure.\n\nAnother concern I have is regarding the quality of the baseline: Additional variants of the baseline models should be considered and the best one reported. Specifically, in the conversation task, have the authors considered switching the order of normalized answer and context in the input? Also, the word order of the normalized answer and/or context could be reversed (as is done in sequence-to-sequence translation models).\n\nAlso, many experimental details are missing from the draft:\n-- What are the sizes of the train/test sets derived from the OpenSubtitles database?\n-- Details of the validation sets used to tune the models.\n-- In Section 5.4, no details of the question-answer corpus are provided. How many pairs were extracted? How many were used for training and testing?\n-- In Section 5.4.1, how many assessors participated in the evaluation and how many questions were evaluated?\n-- In some of the tables (e.g. 6, 7, 8) which show example sentences from Polish, Russian and French, please provide some more information in the accompanying text on how to interpret these examples (since most readers may not be familiar with these languages).\n\nPros:\n-- Efficient model\n-- Proposed architecture is general enough to be useful for other sequence-to-sequence problems\n\nCons:\n-- Evaluation metrics for the morphological agreement task are unsatisfactory\n-- It would appear that the baselines could be improved further using standard techniques",
"The key contributions of this paper are:\n(a) proposes to reduce the vocabulary size in large sequence to sequence mapping tasks (e.g., translation) by first mapping them into a \"standard\" form and then into their correct morphological form,\n(b) they achieve this by clever use of character LSTM encoder / decoder that sandwiches a bidirectional LSTM which captures context,\n(c) they demonstrate clear and substantial performance gains on the OpenSubtitle task, and\n(d) they demonstrate clear and substantial performance gains on a dialog question answer task.\n\nTheir analysis in Section 5.3 shows one clear advantage of this model in the context of long sequences. \n\nAs an aside, the authors should correct the numbering of their Figures (there is no Figure 3) and provide better captions to the Tables so the results shown can easily understood at a glance. \n\nThe only drawback of the paper is that this does not advance representation learning per se though a nice application of current models."
] | [
2,
5,
6
] | [
5,
4,
5
] | [
"iclr_2018_HyTrSegCb",
"iclr_2018_HyTrSegCb",
"iclr_2018_HyTrSegCb"
] |
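The sub-word units mentioned at the end of the meta-review above (Sennrich et al., 2016) target exactly the vocabulary explosion that motivates Concorde: frequent stems and endings become reusable symbols. The toy byte-pair-encoding sketch below is only a rough illustration of that idea; the transliterated word forms, corpus size, and number of merges are invented.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn merge operations over a list of word tokens (toy BPE, not optimized)."""
    vocab = Counter(tuple(w) + ("</w>",) for w in words)   # each word as a tuple of symbols
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])   # fuse the pair into one symbol
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges, vocab

# Toy transliterated inflections sharing one stem: after a few merges the stem
# and the endings become single vocabulary units instead of separate word forms.
corpus = ["krasivyj", "krasivaja", "krasivoje", "krasivyje"] * 5
merges, vocab = learn_bpe(corpus, 10)
print(merges[:5])
print(sorted(vocab))
```

In practice one would use an existing BPE or SentencePiece implementation rather than this toy loop; the point is only that subword segmentation keeps the output vocabulary small without a separate re-inflection step.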
iclr_2018_rJ7RBNe0- | Generative Models for Alignment and Data Efficiency in Language | We examine how learning from unaligned data can improve both the data efficiency of supervised tasks as well as enable alignments without any supervision. For example, consider unsupervised machine translation: the input is two corpora of English and French, and the task is to translate from one language to the other but without any pairs of English and French sentences. To address this, we develop feature-matching autoencoders (FMAEs). FMAEs ensure that the marginal distribution of feature layers are preserved across forward and inverse mappings between domains. We show that FMAEs achieve state of the art for data efficiency and alignment across three tasks: text decipherment, sentiment transfer, and neural machine translation for English-to-German and English-to-French. Most compellingly, FMAEs achieve state of the art for neural translation with limited supervision, with significant BLEU score differences of up to 5.7 and 6.3 over traditional supervised models. Furthermore, on English-to-German, they outperform last year's best fully supervised models such as ByteNet (Kalchbrenner et al., 2016) while using only half as many supervised examples. | rejected-papers | The pros and cons of this paper cited by the reviewers (with a small amount of my personal opinion) can be summarized below:
Pros:
* The method itself seems to be tackling an interesting problem, which is feature matching between encoders within a generative model
Cons:
* The paper is sloppily written and symbols are not defined clearly
* The paper overclaims its contributions in the introduction, which are not supported by experimental results
* It mis-represents the task of decipherment and fails to cite relevant work
* The experimental setting is not well thought out in many places (see Reviewer 1's comments in particular)
As a result, I do not think this is up to the standards of ICLR at this time, although it may have potential in the future. | train | [
"H1fmtttez",
"r1dDwZqxM",
"HyZEwgCgM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work propose a generative model for unsupervised learning of translation model using a variant of auto-encoder which reconstruct internal layer representation in two directions. Basic idea is to treat the intermediate layers as feature representation which is reconstructed from the other direction. Experiments on substitution cipher shows improvement over a state of the art results. For translation, the proposed method shows consistent gains over baselines, under a condition where supervised data is limited.\n\nOne of the problems of this paper is the clarity.\n- It is not immediately clear how the feature mapping explained in section 2 is related to section 3. It would be helpful if the authors could provide what is reconstructed using the transformer model as an example.\n- The improved noisy attention in section 3.3 sounds orthogonal to the proposed model. I'd recommend the authors to provide empirical results.\n- MT experiments are unclear to me. When running experiments for 2M data, did you use the remaining 2.5M for unsupervised training in English-German task?\n- It is not clear whether equation 3 is correct: The first term sounds g(e_x, e_y) instead of f(...)? Likewise, equation 4 needs to replace the first f(...) with g(...).\n",
"This paper proposes a generative model called matching auto-encoder to carry out the learning from unaligned data.\nHowever, it is very disappointed to read the contents after the introduction, since most of the contributions are overclaimed.\n\nDetailed comments:\n- Figure 1 is incorrect because the pairs (x, z) and (y, z) should be put into two different plates if x and y are unaligned.\n\n- Lots of contents in Sec. 3 are confusing to me. What is the difference between g_l(x) and g_l(y) if g_l : H_{l−1} → H_l and f_l: H_{l−1} → H_l are the same? What are e_x and e_y? Why is there a λ if it is a generative model?\n\n- If the title is called 'text decipherment', there should be no parallel data at all, otherwise it is a huge overclaim on the decipherment tasks. Please add citations of Kevin Knight's recent papers on deciperment.\n\n- Reading the experiment results of 'Sentiment Transfer' is a disaster to me. I couldn't get much information on 'sentiment transfer' from a bunch of ungrammatical unnatural language sentences. I would prefer to see some results of baseline models for comparison instead of a pure qualitative analysis.\n\n- The claim on \"FMAEs are state of the art for neural machine translation with limited supervision on EN-DE and EN-FR\" is not exciting to me. Semi-supervised learning is interesting, but in the scenario of MT we do have enough parallel data for many language pairs. Unless the model is able to exceed the 'real' state-of-the-art that uses the full set of parallel data, otherwise we couldn't identify whether the models are able to benefit NMT. Interestingly, the authors didn't provide any of the results that are experimented with full parallel data set. Possibly it is because the introduction of stochastic variables that prevent the models from overfitting on small datasets.\n\n",
"\nThe paper is sloppily written where math issues and undefined symbols make it hard to understand. The experiments seem to be poorly done and does not convey any clear points, and not directly comparable to previous results.\n\n(3) evaluates to 0, and is not a penalty. Same issue in (4). Use different symbols. I also do not understand how this is adversarial, as these are just computed through forward propagation.\n\nAlso what is this two argument f in eq 3? It seems to be a different and unspecified function from the one introduced in 2)\n\n4.1: a substitution cipher has an exact model, and there is no reason why a neural networks would do well here. I understand the extra-twist is that training set is unaligned, but there should be an actual baseline which correctly models the cipher process and decipher it. You should include that very natural baseline model.\n\n4.2 does not give any clear conclusions. The bottom is a draw from the model conditioned on the top? What was the training data, what is draw supposed to be? Some express the same sentiment, others different, and I have no idea if they are supposed to express the same meaning or not.\n\n4.3: why are all the results non-overlapping with previous results? You have to either reproduce some of the previous results, or run your own experiment in matching settings. The current result tables show your model is better than some version of the transformer, but not necessarily better than the \"big\" transformer. The setup and descriptions do not inspire confidence.\n\nMinor issues\n\n3.1: effiency => efficiency\n\nData efficiency is used as a task/technique, which I find hard to parse. \"Data efficiency and alignment have seen most success for dense, continuous data such as images.\"\n\"powerful data efficiency and alignment\"\n"
] | [
5,
4,
2
] | [
3,
3,
3
] | [
"iclr_2018_rJ7RBNe0-",
"iclr_2018_rJ7RBNe0-",
"iclr_2018_rJ7RBNe0-"
] |
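The reviews above disagree about what the feature-matching penalty in equations (3)-(4) actually compares. The sketch below shows one non-degenerate reading, matching mean encoder features of real target-domain data against features of source-domain data pushed through the mapping; the linear encoders, dimensions, and random batches are stand-ins and do not reproduce the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n = 16, 8, 32

W_enc = rng.normal(size=(d_in, d_feat))   # stand-in for the target-domain feature encoder f
W_map = rng.normal(size=(d_in, d_in))     # stand-in for the source-to-target mapping g

def features(x, W):
    return np.tanh(x @ W)

x_src = rng.normal(size=(n, d_in))        # unaligned source-domain batch
y_tgt = rng.normal(size=(n, d_in))        # unaligned target-domain batch

feat_real   = features(y_tgt, W_enc).mean(axis=0)           # mean features of real target data
feat_mapped = features(x_src @ W_map, W_enc).mean(axis=0)   # mean features of mapped source data

fm_penalty = np.sum((feat_real - feat_mapped) ** 2)
print(fm_penalty)   # nonzero unless the two feature distributions already agree in their means
```

Under this reading the penalty vanishes only when the mapped source features already match the real target features, which is presumably the behaviour the authors intend and what the reviewers found unclear in the written equations.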
iclr_2018_r1HNP0eCW | Estimation of cross-lingual news similarities using text-mining methods | Every second, innumerable text data, including all kinds of news, reports, messages, reviews, comments, and tweets, are generated on the Internet, written not only in English but also in other languages such as Chinese, Japanese, and French. Not only SNS (social networking) sites but also worldwide news agencies such as Thomson Reuters News provide news reported in more than 20 languages, reflecting the significance of multilingual information.
In this research, by taking advantage of the multilingual text resources provided by Thomson Reuters News, we developed a bidirectional LSTM-based method to calculate cross-lingual semantic text similarity for long text and short text, respectively. Thus, when an important news story comes in, users can understand the situation comprehensively by investigating similar and related cross-lingual articles. | rejected-papers | The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* The motivation of the problem is presented well
* The architecture is simple and potentially applicable to real-world applications
Cons:
* The novel methodological contribution is limited to non-existent
* Comparison against other relevant baselines is missing, and the baseline is not strong
* The evaluation methodology does not follow standard practice in IR, and thus it is difficult to analyze and compare results
* The paper is hard to read and requires proofreading
Considering these pros and cons, my conclusion is that this paper is not up to the standards of ICLR. | test | [
"HylRsRmgz",
"ByIyxIKef",
"r1fg60Kgz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"* PAPER SUMMARY *\n\nThis paper proposes a siamese net architecture to compare text in different languages. The proposed architecture builds upon siamese RNN by Mueller and Thyagarajan. The proposed approach is evaluated on cross lingual bitext retrieval.\n\n* REVIEW SUMMARY * \n\nThis paper is hard to read and need proof-reading by a person proficient in English. The experiments are extremely limited, on a toy task. No other baseline than (Mueller and Thyagarajan, 2016) is considered. The related work section lacks important references. It is hard to find positive points that would advocate for a presentation at ICLR.\n\n* DETAILED REVIEW *\n\nOn related work, the authors need to consider related work on cross lingual retrieval, multilingual document representation:\n\nBai, Bing, et al. \"Learning to rank with (a lot of) word features.\" Information retrieval 13.3 (2010): 291-314. (Section 4).\n\nSchwenk, H., Tran, K., Firat, O., & Douze, M. Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL Workshop on Representation Learning for NLP, 2017\n\nKarl Moritz Hermann and Phil Blunsom. Multilingual models for compositional distributed semantics. In ACL 2014. pages 58–68.\n\nHieu Pham, Minh-Thang Luong, and Christopher D. Manning. Learning distributed representations for multilingual text sequences. In Workshop\non Vector Space Modeling for NLP. 2015\n\nXinjie Zhou, Xiaojun Wan, and Jianguo Xiao. Cross-lingual sentiment classification with bilingual document representation learning. In ACL 2016\n\n...\n\nOn evaluation, the authors need to learn about standard retrieval evaluation metrics such as precision at top 10, etc and use them. For instance, this book will be a good read.\n\nBaeza-Yates, Ricardo, and Berthier Ribeiro-Neto. Modern information retrieval. Vol. 463. New York: ACM press, 1999.\n\nOn learning objective, the authors might want to read about learn-to-rank objectives for information retrieval, for instance, \n\nLiu, Tie-Yan. \"Learning to rank for information retrieval.\" Foundations and Trends in Information Retrieval 3.3 (2009): 225-331.\n\nBurges, Christopher JC. \"From ranknet to lambdarank to lambdamart: An overview.\" Learning 11, no. 23-581 (2010): 81.\n\nChapelle, Olivier, and Yi Chang. \"Yahoo! learning to rank challenge overview.\" Proceedings of the Learning to Rank Challenge. 2011.\n\nHerbrich, Ralf, Thore Graepel, and Klaus Obermayer. \"Large margin rank boundaries for ordinal regression.\" (2000).\n\nOn experimental setup, the authors want to consider a setup with more than 8k training documents. More importantly, ranking a document set of 1k documents is extremely small, toyish. For instance, (Schwenk et al 2017) search through 1.5 million sentences. (Bai, Bing, et al 2009) search through 140k documents. Since you mainly introduces 2 modifications with respect to (Mueller and Thyagarajan, 2016), i.e (i) not sharing the parameters on both branch of the siamese and (ii) the fully connected net on top, I would suggest to measure the effect of each of them both on multilingual data and on the SICK dataset used in (Mueller and Thyagarajan, 2016).",
"In the Following, pros and cons of the paper are presented.\n\nPros\n-------\n\n1. Many real-world applications.\n2. Simple architecture and can be reproduced (if given enough details.)\n\n\nCons\n----------------------\n\n1. Ablation study showing whether bidirectional LSTM contributing to the similarity would be helpful.\n2. Baseline is not strong. How about using just LSTM?\n4. It is suprising to see that only concatenation with MLP is used for optimization of capturing regularities across languages. \n5. Equation-11 looks like softplus function more than vanilla ReLU.\n6. How are the similarity assessments made in the gold standard dataset. The cost function used only suggest binary assessments. Please refer to some SemEval tasks for cross-lingual or cross-level assessments. As binary assessments may not be a right measure to compare articles of two different lengths or languages.\n\nMinor issues\n------------\n\n1. SNS is meant to be social networking sites?\n2. In Section 2.2, it denote that 'as the figure demonstrates'. No reference to the figure.\n3. In Section 3, 'discussed in detail' pointed to Section 2.1 related work section. Not clear what is discussed in detail there.\n4. Reference to Google Translate API is wrong.\n\n\nThe paper requires more experimental analysis to judge the significance of the approach presented.\n",
"\nThe paper studies the problem of estimating cross-lingual text similarity by mining news corpora. The motivation of the problem and applications are presented well, especially for news recommender systems.\n\nHowever, there are no novel scientific contributions. The idea of fusing standard bi-LSTM layers coupled with a dense fully-connected layer alone is not a substantial technical contribution. Did they try other deep architectures for the task? The authors cite some previous works to explain their choice of approach for this task. A detailed analysis of different architectures (recurrent and others) on the specifc task would have been more convincing. \n\nComparison against other relevant baselines (including other cross-lingual retrieval approaches) is missing. There are several existing works on learning cross-lingual word embeddings (e.g., Mikolov et al., 2013). Some of these also make available pre-trained embeddings in multiple languages. You could combine them to learn cross-lingual semantic similarities for the retrieval task. How does your approach compare to these other approaches besides the Siamese LSTM baseline? \n\nOverall, it is unclear what the contributions are — there has been a lot of work in the NLP/IR literature on the same task, yet there is no detailed comparison against any of these relevant baselines. The technical contributions are also not novel or strong to make the paper convincing. \n"
] | [
2,
6,
2
] | [
5,
4,
4
] | [
"iclr_2018_r1HNP0eCW",
"iclr_2018_r1HNP0eCW",
"iclr_2018_r1HNP0eCW"
] |
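The reviews above ask for standard IR retrieval metrics such as precision at top 10 in place of the paper's evaluation protocol. A minimal precision@k sketch follows; the similarity scores and relevance labels are invented.

```python
import numpy as np

def precision_at_k(scores, relevant, k=10):
    """Fraction of the k highest-scoring candidates that are truly relevant."""
    top_k = np.argsort(scores)[::-1][:k]
    return float(np.mean([idx in relevant for idx in top_k]))

# Similarity scores of 12 candidate articles against one query article, plus the
# indices of the candidates that really are cross-lingual matches.
scores   = np.array([0.91, 0.15, 0.78, 0.42, 0.88, 0.05, 0.63, 0.30, 0.72, 0.51, 0.20, 0.95])
relevant = {0, 4, 11}

print(precision_at_k(scores, relevant, k=5))   # 3 of the top 5 are relevant -> 0.6
```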
iclr_2018_ryacTMZRZ | Jiffy: A Convolutional Approach to Learning Time Series Similarity | Computing distances between examples is at the core of many learning algorithms for time series. Consequently, a great deal of work has gone into designing effective time series distance measures. We present Jiffy, a simple and scalable distance metric for multivariate time series. Our approach is to reframe the task as a representation learning problem---rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective. By aggressively max-pooling and downsampling, we are able to construct this embedding using a highly compact neural network. Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods. | rejected-papers | R1 was neutral on the paper: they liked the problem, simplicity of the approach, and thought the custom pooling layer was novel, but raised issues with the motivation and design of experiments. R1 makes a reasonable point that training a CNN to classify time series, then throwing away the output layer and using the internal representation for 1-NN classification, is hard to justify in practice.
Results of the reproducibility report were good, though it pointed out some issues around robustness to initialization and hyper-parameters. R2 gave a very strong score, though the review didn’t really expound on the paper’s merits. R3 thought the paper was well written but also sided with R1 on novelty. Overall, I side with R1 and R3, particularly with respect to the practicality of the approach (as pointed out by both of these reviewers). I would feel differently if the metric were used in another application beyond classification.
"Sk4LBfY4f",
"r1VHCbtNM",
"r1aYq6ieM",
"Hkns5PSlM",
"r1cIB5Fxf",
"SJUPi53zG",
"Hy94i92fz",
"SyDfYeqMz",
"r1rhVX1GM",
"SJ2qtgpWz",
"SJ6szzaWG",
"r1izTZpZf",
"Bk-UjRn-z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"public",
"author",
"author",
"public",
"public"
] | [
"I did in fact read the reproducibility report, but I want to clarify that I did NOT take it into account in my review (and for the record, I think its impact would have been negligible in this case). Reproduction during review represents an additional level of scrutiny for a submission, and while I feel that is Good Thing, I feel that it is unfair to take it into account during reviews if only SOME submissions are subject to that scrutiny.\n\nThat said, I think two major points from the report are worth highlighting, one positive (+) and one negative (-):\n\n+ by and large, the students were able to reproduce the results of the paper!\n\n- the students hypothesize (and I'm inclined to agree) that the results may be misleading due to the small size of the data, particularly the very small test sets. I will quote them:\n\n\"We found that one of the most significant factors on accuracy was the random training and testing\nsplit, especially for the datasets that contained fewer examples...Our findings suggest that the accuracies obtained by Jiffy in the original paper are reasonable, but do not appear to be as robust to initialization and hyper-parameters as the\nauthors claim.\"\n\nI recommend that the authors do something to quantify uncertainty of the performance measures, e.g., k-fold cross-validation or bootstrapping. I also encourage them to consider inclusion of at least one larger data set.\n\nThe authors should also take note that the students assigned the original submission a \"low to moderate [reproducibility] score.\" ;)",
"I have increased my score by one point (from a 5 to a 6) to reflect the thoughtful and comprehensive response. The authors did a nice job of addressing many of my concerns about their methods and experiments, in particular:\n\n- they clarified that the the \"constant number, variable width\" pooling layer can be implemented in existing frameworks through appropriate padding of each time series. I accept that answer, though I want to encourage the authors to clarify what sort of padding they used (zero, repeat edge value, etc.), and I wonder about the impact of adding different amounts of padding to different length time series.\n- they showed in [10] that indeed, the KNN+embedding model beats the ConvNet used to generate the embeddings. With further thought, I've concluded that is not unexpected, as the KNN classifier on top of the embeddings provides a more flexible decision boundary than the linear classification output layer of the CovNet. That said, the small sample sizes of the data sets calls into question whether the differences are statistically significant.\n- they showed in [11] that Jiffy does beat max pooling, which is reasonable since the finer grained pooling necessarily preserves more detail\n- they showed in [11] that Jiffy also beats multichannel convolutions -- but I'd love some exposition hypothesizing why that might be\n- I think their explanation for why Jiffy beats Siamese/Triplet networks (small training data) is reasonable\n- I think the additional exposition re: motivation provided in their response would improve their paper if integrated. In particular, emphasizing retrieval (rather than classification) applications and providing experiments to back those up?\n\nI seriously considered a score of 7. However, I ultimately decided against that because a significant fraction of my critiques focused on the paper's exposition (motivation, related work, discussion). While the authors addressed some of this feedback in their response, they did not provide (as far as I can tell) a revised manuscript showing how they would integrate this new content into their actual paper. I understand that the ability to revise a submission is NOT the norm in the ML community and may be unfamiliar (or confusing) to first-time ICLR authors (it's also a bit of a chore over the holidays). Nonetheless, I feel uncomfortable (strongly) endorsing a submission that requires substantial rewrites.",
"This paper presents a solid empirical analysis of a simple idea for learning embeddings of time series: training a convolutional network with a custom pooling layer that generates a fixed size representation to classify time series, then use the fixed size representation for other tasks. The primary innovation is a custom pooling operation that looks at a fraction of a sequence, rather than a fixed window. The experiments are fairly thorough (albeit with some sizable gaps) and show that the proposed approach outperforms DTW, as well as embeddings learned using Siamese networks. On the whole, I like the line of inquiry and the elegant simplicity of the proposed approach, but the paper has some flaws (and there are some gaps in both motivation and the experiments) that led me to assign a lower score. I encourage the authors to address these flaws as much as possible during the review period. If they succeed in doing so, I am willing to raise my score.\n\nQUALITY\n\nI appreciate this line of research in general, but there are some flaws in its motivation and in the design of the experiments. Below I list strengths (+) and weaknesses (-):\n\n+ Time series representation learning is an important problem with a large number of real world applications. Existing solutions are often computationally expensive and complex and fail to generalize to new problems (particularly with irregular sampling, missing values, heterogeneous data types, etc.). The proposed approach is conceptually simple and easy to implement, faster to train than alternative metric learning approaches, and learns representations that admit fast comparisons, e.g., Euclidean distance.\n+ The experiments are pretty thorough (albeit with some noteworthy gaps) -- they use multiple benchmark data sets and compare against strong baselines, both traditional (DTW) and deep learning (Siamese networks).\n+ The proposed approach performs best on average!\n\n- The custom pooling layer is the most interesting part and warrants additional discussion. In particular, the \"naive\" approach would be to use global pooling over the full sequence [4]. The authors should advance an argument to motivate %-length pooling and perhaps add a global pooling baseline to the experiments.\n- Likewise, the authors need to fully justify the use of channel-wise (vs. multi-channel) convolutions and perhaps include a multi-channel convolution baseline.\n- There is something incoherent about training a convolutional network to classify time series, then discarding the classification layer and using the internal representation as input to a 1NN classifier. While this yields an apples-to-apples comparison in the experiments, I am skeptical anyone would do this in practice. Why not simply use the classifier (I am dubious the 1NN would outperform it)? To address this, I recommend the authors do two things: (1) report the accuracy of the learned classifier; (2) discuss the dynamic above -- either admit to the reader that this is a contrived comparison OR provide a convincing argument that someone might use embeddings + KNN classifier instead of the learned classifier. If embeddings + KNN outperforms the learned classifier, that would surprise me, so that would warrant some discussion.\n- On a related note, are the learned representations useful for tasks other than the original classification task? This would strengthen the value proposition of this approach. 
If, however, the learned representations are \"overfit\" to the classification task (I suspect they are), and if the learned classifier outperforms embeddings + 1NN, then what would I use these representations for?\n- I am modestly surprised that this approach outperformed Siamese networks. The authors should report the Siamese architectures -- and how hyperparameters were tuned on all neural nets -- to help convince the reader that the comparison is fair.\n- To that end, did the Siamese convolutional network use the same base architecture as the proposed classification network (some convolutions, custom pooling, etc.)? If not, then that experiment should be run to help determine the relative contributions of the custom pooling layer and the loss function.\n- Same notes above re: triplet network -- the authors should report results in Table 2 and disclose architecture details.\n- A stronger baseline would be a center loss [1] network (which often outperforms triplets).\n- The authors might consider adding at least one standard unsupervised baseline, e.g., a sequence-to-sequence autoencoder [2,3].\n\nCLARITY\n\nThe paper is clearly written for the most part, but there is room for improvement:\n\n- The %-length pooling requires a more detailed explanation, particularly of its motivation. There appears to be a connection to other time series representations that downsample while preserving shape information -- the authors could explore this. Also, they should add a figure with a visual illustration of how it works (and maybe how it differs from global pooling), perhaps using a contrived example.\n- How was the %-length pooling implemented? Most deep learning frameworks only provide pooling layers with fixed length windows, though I suspect it is probably straightforward to implement variable-width pooling layers in an imperative framework like PyTorch.\n- Figure 1 is not well executed and probably unnecessary. The solid colored volumes do not convey useful information about the structure of the time series or the neural net layers, filters, etc. Apart from the custom pooling layer, the architecture is common and well understood by the community -- thus, the figure can probably be removed.\n- The paper needs to fully describe neural net architectures and how hyperparameters were tuned.\n\nORIGINALITY\n\nThe paper scores low on originality. As the authors themselves point out, time series metric learning -- even using deep learning -- is an active area of research. The proposed approach is refreshing in its simplicity (rather than adding additional complexity on top of existing approaches), but it is straightforward -- and I suspect it has been used previously by others in practice, even if it has not been formally studied. Likewise, the proposed %-length pooling is uncommon, but it is not novel per se (dynamic pooling has been used in NLP [5]). Channel-wise convolutional networks have been used for time series classification previously [6].\n\nSIGNIFICANCE\n\nAlthough I identified several flaws in the paper's motivation and experimental setup, I think it has some very useful findings, at least for machine learning practitioners. Within NLP, there appears to be gradual shift toward using convolutional, instead of recurrent, architectures. I wonder if papers like this one will contribute toward a similar shift in time series analysis. 
Convolutional architectures are typically much easier and faster to train than RNNs, and the main motivation for RNNs is their ability to deal with variable length sequences. Convolutional architectures that can effectively deal with variable length sequences, as the proposed one appears to do, would be a welcome innovation.\n\nREFERENCES\n\n[1] Wen, et al. A Discriminative Feature Learning Approach for Deep Face Recognition. ECCV 2016.\n[2] Fabius and van Amersfoort. Variational Recurrent Auto-Encoders. ICLR 2015 Workshop Track.\n[3] Tikhonov and Yamshchikov. Music generation with variational recurrent autoencoder supported by history. arXiv.\n[4] Hertel, Phan, and Mertins. Classifying Variable-Length Audio Files with All-Convolutional Networks and Masked Global Pooling. \n[5] Kalchbrenner, Grefenstette, and Blunsom. A Convolutional Neural Network for Modelling Sentences. ACL 2014.\n[6] Razavian and Sontag. Temporal Convolutional Neural Networks for Diagnosis from Lab Tests. arXiv.",
"[Summary]\n\nThe paper is overall well written and the literature review fairly up to date.\nThe main issue is the lack of novelty.\nThe proposed method is just a straightforward dimensionality reduction based on\nconvolutional and max pooling layers.\nUsing CNNs to handle variable length time series is hardly novel.\nIn addition, as always with metric learning, why learning the metric if you can just learn the classifier?\nIf the metric is not used in some compelling application, I am not convinced.\n\n[Detailed comments and suggestions]\n\n* Since \"assumptions\" is the only subsection in Section 2, \nI would use \\texbf{Assumptions.} rather than \\subsection{Assumptions}.\n\n* Same remark for Section 4.1 \"Complexity analysis\".\n\n* Some missing relevant citations:\n\nLearning the Metric for Aligning Temporal Sequences.\nDamien Garreau, Rémi Lajugie, Sylvain Arlot, Francis Bach.\nIn Proc. of NIPS 2014.\n\nDeep Convolutional Neural Networks On Multichannel Time Series For Human Activity Recognition.\nJian Bo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiao Li Li, Shonali Krishnaswamy.\nIn Proc. of IJCAI 2015.\n\nTime Series Classification Using Multi-Channels Deep Convolutional Neural Networks\nYi ZhengQi LiuEnhong ChenYong GeJ. Leon Zhao.\nIn Proc. of International Conference on Web-Age Information Management.\n\nSoft-DTW: a Differentiable Loss Function for Time-Series.\nMarco Cuturi, Mathieu Blondel.\nIn Proc. of ICML 2017.",
"Paper proposes to use a convolutional network with 3 layers (convolutional + maxpoolong + fully connected layers) to embed time series in a new space such that an Euclidian distance is effective to perform a classification. The algorithm is simple and experiments show that it is effective on a limited benchmark. It would be interesting to enlarge the dataset to be able to compare statistically the results with state-of-the-art algorithms. In addition, Authors compare themselves with time series metric learning and generalization of DTW algorithms. It would also be interesting to compare with other types of time series classification algorithms (Bagnall 2016) .",
"[1] Ding, Hui, et al. \"Querying and mining of time series data: experimental comparison of representations and distance measures.\" Proceedings of the VLDB Endowment 1.2 (2008): 1542-1552.\n\n[2] Schäfer, Patrick. \"Towards time series classification without human preprocessing.\" International Workshop on Machine Learning and Data Mining in Pattern Recognition. Springer, Cham, 2014.\nAPA\t\n\n[3] Shokoohi-Yekta, Mohammad, Jun Wang, and Eamonn Keogh. \"On the non-trivial generalization of dynamic time warping to the multi-dimensional case.\" Proceedings of the 2015 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics, 2015.\n\n[4] Wang, Jingdong, et al. \"A survey on learning to hash.\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).\n\n[5] Wang, Jun, et al. \"Learning to hash for indexing big data—a survey.\" Proceedings of the IEEE 104.1 (2016): 34-57.\n\n[6] Erin Liong, Venice, et al. \"Deep hashing for compact binary codes learning.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.\n\n[7] Zhang, Ruimao, et al. \"Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification.\" IEEE Transactions on Image Processing 24.12 (2015): 4766-4779.\n\n[8] Kim, Yongwook Bryce, and Una-May O'Reilly. \"Large-scale physiological waveform retrieval via locality-sensitive hashing.\" Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE. IEEE, 2015.\n\n[9] Pei, Wenjie, David MJ Tax, and Laurens van der Maaten. \"Modeling time series similarity with siamese recurrent networks.\" arXiv preprint arXiv:1603.04713 (2016).\n\n[10] https://ibb.co/g2X3LR\n\n[11] https://ibb.co/kTRkZm\n\n[12] https://ibb.co/iijkN6\n",
"Thanks to all for the feedback - we believe the following experiments and discussion serve to strengthen the paper's argument and answer outstanding questions. Here we detail the paper's motivation and experimentation, compare to a number of suggested baselines, and elaborate on architecture-specific questions.\n\n*Motivation and Experiment Choice*\nThe paper currently lacks clarity regarding our motivation/experiment choice in developing a distance metric with which to compare multivariate time series. This is partially due to our choice of experiment; we chose 1NN not because we expect it to be an especially good classifier, but because it is the standard means of evaluating time series representations and distance measures (Ding et. al. [1], Schafer et. al. [2], Shokoohi-Yekta et. al. [3]). Reviewer 1 and Reviewer 3 pose solid points regarding the value of considering an additional task. A distance metric is particularly valuable in the context of information retrieval. Constructing a compact representation for later retrieval of similar items is an important problem and has been the subject of a great deal of work, at least in the case of retrieving images. See [4], [5] for surveys, and [6],[7] for examples of doing so in the supervised setting.\n\nFor time series, consider the following use case. A hospital patient’s ECG signals indicate the presence of a premature ventricular contraction (PVC). To better understand the patient’s state, a physician would like to see similar heartbeats for comparison. Searching through a database of raw ECG waveforms would be computationally expensive, and, depending on the similarity measure, might not return useful examples. A learned distance measure could both accelerate the search through dimensionality reduction and increase the quality of the results. Note that, because heartbeat types are easily classified in most cases, the learning process can use labels. This clinical scenario occurs not only for ECG data, but for many medical time series [8]. We’ll articulate this use case and add another experiment to the paper.\n\nThe histogram linked at [12] visualizes the distribution of distances between same-label pairs and different-label pairs of examples. With this figure, we intend to show how DTW distances for different-class pairs are often the same as the DTW distances for same-class pairs. In the histogram, the blue bars describe the number of same-class example pairs that exist at that distance. The orange bars describe the same for different-class label pairs. Looking at AUSLAN and Libras, we see that the spread of distances for DTW is larger in both cases, where blue bars and orange bars often exist for the same distance. Comparing the DTW histograms to the Jiffy histograms, we see that the Jiffy histogram of distances has a smaller spread of distances for same-class pairs while achieving higher nearest-neighbor classification accuracy.\n\n*Baselines*\nWe support our method with the comparison of the NN classifier to three baselines: the basic CNN, a global pooling baseline, and a multi-channel CNN baseline.\n\nLinked at [10] is a figure demonstrating the comparison of the NN classifier to the CNN's softmax classification accuracies. For 5/6 of the datasets, the 1NN classification accuracy exceeds the CNN classifier's accuracy. As expected, the addition of more neighbors in the 3NN and 5NN results serves to increase or maintain the accuracy of each dataset, save for ECG and Libras. 
This may be explained by how small the ECG dataset is - points at the border of each cluster are often close enough to a separate cluster to adopt its points.\n\nLinked at [11] is a figure comparing the performance of the global pooling baseline and multi_channel CNN baseline to Jiffy. Both the global pooling and multi-channel convolution baselines fail to perform as consistently as the Jiffy’s percentage-pooled, single-channel convolution architecture. The multi-channel convolution baseline often achieves comparable accuracy, but this fails to justify the extra parameters in this model. \n\n*Architecture*\nThe architecture used in the Siamese/triplet network comparisons is identical to the architecture of the proposed CNN classifier. The Siamese RNN architecture is based on the architecture proposed by Pei et. al. [9]. We will elaborate on how the hyper-parameters were specified for each of these networks. \n\nThe architecture is parameterized by the %-pooling necessary. Because we know the (padded) length of the time series at the start of training, however, we can create pooling layers of “constant” size using existing APIs. E.g., for length 100 time series and 15% pooling, we simply use a pooling size of 15.\n\nWith respect to Figure 1, we intend to clarify the %-pooling aspect of the architecture - no hyperparameters were tuned for the resulting architecture, a detail that we will communicate in the revised edition of the paper.\n",
"Here's the link to our git repo, which contains a reproducibility report, and our reimplementation code:\n\nhttps://bitbucket.org/cagraff/iclr2018_repro_challenge_jiffy/src/cb8839bf6023?at=master\n",
"Is there any chance you could put the preprocessing/cleaning code up earlier- we have something that is performing decently already- but it would be a major help to not have to load/clean each of the 16 datasets you used.",
"Thanks for your interest in our work! Here are the answers to your questions:\n\n1. We will be releasing a fully testable implementation by the end of this weekend! Apologies for the delay; pre-processing and testing code will also be made available for ease of reproducibility.\n2. We do use padding and this padding occurs prior to the convolutional layer. This code will be released as well.\n3. The initial learning rate used is 2e-5.\n4. Yes, a holdout evaluation set was used. For datasets with a pre-specified train/test split, we used those. For the other datasets, we partitioned 20% of the dataset for evaluation.\n\nHappy to answer additional questions about the implementation in the meantime.",
"1. We used tensorflow as well!\n2. We train for a fixed number of iterations (20000).\n3. We just used the built-in Adam optimizer, with the aforementioned learning rate.\n4. No, we left beta_1 and beta_2 unmodified.\n5. We believe the Siamese CNN/RNN perform poorly in comparison to our architecture due to its weakness in the face of multi-class problems and limited data. In n-class scenarios, the Siamese architecture attempts to specify O(n^2) constraints. This lines up with the Siamese CNN/RNN's poor performance on the Libras dataset - we hypothesize that this is a result of the combination of relatively little data (360 examples) and a high number of classes (15).\n\nWe're hoping to get the code out earlier, before the deadline for this competition. Sunday is the absolute latest.\n",
"Thanks for the fast response!\n\nA few more questions:\n\n1.What framework did you use to implement the net? - we have something running in tensorflow right now.\n\n2. What were the stopping conditions were used for training?\n\n3. Was a learning rate annealing schedule used?\n\n4. Did you use non-default values for beta_1 and beta_2 for Adam?\n\n5. Finally, what do you think explains the model outperforming siamese cnn and rnn? \n",
"\nWe are trying to replicate your results for the 2018 ICLR Reproducibility Challenge : http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html\n\n1. Are you planning on releasing your code as described in the paper? The repository is currently empty.\n\n2. Variable size maxpool implementation: Did you use padding? If so was it before or after the convolutional layer?\n\n3. What initial learning rate was used for the Adam Optimizer? \n\n4. Was a holdout evaluation set used? \n\n \n "
] | [
-1,
-1,
6,
4,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"r1aYq6ieM",
"r1aYq6ieM",
"iclr_2018_ryacTMZRZ",
"iclr_2018_ryacTMZRZ",
"iclr_2018_ryacTMZRZ",
"Hy94i92fz",
"iclr_2018_ryacTMZRZ",
"iclr_2018_ryacTMZRZ",
"SJ6szzaWG",
"Bk-UjRn-z",
"r1izTZpZf",
"SJ2qtgpWz",
"iclr_2018_ryacTMZRZ"
] |
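The author response above explains that percentage-length pooling is implemented by padding every series to a known maximum length and then using a constant pooling window ("for length 100 time series and 15% pooling, we simply use a pooling size of 15"). The sketch below follows that recipe; the channel count, convolution width, and pooling fraction are illustrative, the convolution is a small 1-D layer rather than the paper's exact channel-wise design, and PyTorch is assumed to be available.

```python
import torch
import torch.nn as nn

max_len, n_channels, pool_frac = 100, 3, 0.15
conv = nn.Conv1d(n_channels, 8, kernel_size=5, padding=2)   # small length-preserving convolution
pool = nn.MaxPool1d(kernel_size=int(pool_frac * max_len))   # a 15-step window on the padded length

def embed(series):
    """series: tensor of shape (channels, length) with length <= max_len."""
    padded = nn.functional.pad(series, (0, max_len - series.shape[-1]))   # right-pad with zeros
    return pool(conv(padded.unsqueeze(0))).flatten()

print(embed(torch.randn(n_channels, 60)).shape)   # torch.Size([48])
print(embed(torch.randn(n_channels, 97)).shape)   # torch.Size([48]), same size for any input length
```

Note that after padding, the window is a fixed fraction of the padded length rather than of each original series, which is the padding concern raised in one of the reviewer follow-up comments above.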
iclr_2018_ryj38zWRb | Optimizing the Latent Space of Generative Networks | Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most applications, GAN models share two aspects in common. On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions. On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors. | rejected-papers | This paper attempts to decouple two factors underlying the success of GANs: the inductive bias of deep CNNs and adversarial training. It shows that, surprisingly, the second factor is not essential. R1 thought that comparisons to Generative Moment Matching Networks and Variational Autoencoders should be provided (note: this was added to a revised version of the paper). They also pointed out that the paper lacked comparisons to newer flavors of GANs. While R1 pointed out that the use of 128x128 and 64x64 images was weak, I tend to disagree as this is still common for many GAN papers. R2 was neutral to positive about the paper and thought that most importantly, the training procedure was novel. R3 also gave a neutral to positive review, claiming the paper was easy to follow and interesting. Like R1, R3 thought that a stronger claim could be made by using different datasets. In the rebuttal, the authors argued that the main point was not in proposing a state-of-the-art generative model of images but to provide more an introspection on the success of GANs. Overall, I found the work interesting but felt that the paper could go through one more review/revision cycle. In particular, it was very long. Without a champion, this paper did not make the cut. | val | [
"BkILtntlz",
"SynXdTKeM",
"HyE2oHixz",
"ryNFIO6mf",
"SkvIrda7G",
"HJ8VH_pmz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summary: The authors observe that the success of GANs can be attributed to two factors; leveraging the inductive bias of deep CNNs and the adversarial training protocol. In order to disentangle the factors of success, and they propose to eliminate the adversarial training protocol while maintaining the first factor. The proposed Generative Latent Optimization (GLO) model maps a learnable noise vector to the real images of the dataset by minimizing a reconstruction loss. The experiments are conducted on CelebA and LSUN-Bedroom datasets. \n\nStrengths: \nThe paper is well written and the topic is relevant for the community.\nThe notations are clear, as far as I can tell, there are no technical errors.\nThe design choices are well motivated in Chapter 2 which makes the main idea easy to grasp. \nThe image reconstruction results are good. \nThe experiments are conducted on two challenging datasets, i.e. CelebA and LSUN-Bedroom.\n\nWeaknesses:\nA relevant model is Generative Moment Matching Network (GMMN) which can also be thought of as a “discriminator-less GAN”. However, the paper does not contrast GLO with GMMN either in the conceptual level or experimentally. \n\nAnother relevant model is Variational Autoencoders (VAE) which also learns the data distribution through a learnable latent representation by minimizing a reconstruction loss. The paper would be more convincing if it provided a comparison with VAE.\n\nIn general, having no comparisons with other models proposed in the literature as improvements over GAN such as ACGAN, InfoGAN, WGAN weakens the experimental section.\n\nThe evaluation protocol is quite weak: CelebA images are 128x128 while LSUN images are 64x64. Especially since it is a common practice nowadays to generate much higher dimensional images, i.e. 256x256, the results presented in this paper appear weak. \n\nAlthough the reconstruction examples (Figure 2 and 3) are good, the image generation results (Figure 4 and 5) are worse than GAN, i.e. the 3rd images in the 2nd row in Figure 4 for instance has unrealistic artifacts, the entire Figure 5 results are quite boxy and unrealistic. The authors mention in Section 3.3.2 that they leave the careful modeling of Z to future work, however the paper is quite incomplete without this.\n\nIn Section 3.3.4, the authors claim that the latent space that GLO learns is interpretable. For example, smiling seems correlated with the hair color in Figure 6. This is a strong claim based on one example, moreover the evidence of this claim is not as obvious (based on the figure) to the reader. Moreover, in Figure 8, the authors claim that the principal components of the GLO latent space is interpretable. However it is not clear from this figure what each eigenvector generates. The authors’ observations on Figure 8 and 9 are not clearly visible through manual inspection. \n\nFinally, as a minor note, the paper has some vague statements such as\n“A linear interpolation in the noise space will generate a smooth interpolation of visually-appealing images”\n“Several works attempt at recovering the latent representation of an image with respect to a generator.”\nTherefore, a careful proofreading would improve the exposition. ",
"In this paper, the authors propose a new architecture for generative neural networks. Rather than the typical adversarial training procedure used to train a generator and a discriminator, the authors train a generator only. To ensure that noise vectors get mapped to images from the target distribution, the generator is trained to map noise vectors to the set of training images as closely as possible. Both the parameters of the generator and the noise vectors themselves are optimized during training. \n\nOverall, I think this paper is useful. The images generated by the model are not (qualitatively and in my opinion) as high quality as extremely recent work on GANs, but do appear to be better than those produced by DCGANs. More importantly than the images produced, however, is the novel training procedure. For all of their positive attributes, the adversarial training procedure for GANs is well known to be fairly difficult to deal with. As a result, the insight that if a mapping from noise vectors to training images is learned directly, other noise images still result in natural images is interesting.\n\nHowever, I do have a few questions for the authors, mostly centered around the choice of noise vectors.\n\nIn the paper, you mention that you \"initialize the z by either sampling them from a Gaussian distribution or by taking the whitened PCA of the raw image pixels.\" What does this mean? Do you sample them from a Gaussian on some tasks, and use PCA on others? Is it fair to assume from this that the initialization of z during training matters? If so, why?\n\nAfter training, you mention that you fit a full Gaussian to the noise vectors learned during training and sample from this to generate new images. I would be interested in seeing some study of the noise vectors learned during training. Are they multimodal, or is a unimodal distribution indeed sufficient? Does a Gaussian do a good job (in terms of likelihood) of fitting the noise vectors, or would some other model (even something like kernel density estimation) allow for higher probability noise vectors (and therefore potentially higher quality images) to be drawn? Does the choice of distribution even matter, or do you think uniform random vectors from the space would produce acceptable images?",
"The paper is well written and easy to follow. I find the results very interesting. In particular the paper shows that many properties of GAN (or generative) models (e.g. interpolation, feature arithmetic) are a in great deal result of the inductive bias of deep CNN’s and can be obtained with simple reconstruction losses. \n\nThe results on CelebA seem quite remarkable for training examples (e.g. interpolation). Samples are quite good but inferior to GANs, but still impressive for the simplicity of the model. The results on SUN are a bit underwhelming, but still deliver the point reasonably well in my view. Naturally, the paper would make a much stronger claim showing good results on different datasets. \n\nThe authors mentioned that the current method can recover all the solutions that could be found by an autoencoder and reach some others. It would be very interesting to empirically explore this statement. Specifically, my intuition is that if we train a traditional autoencoder (with normalization of the latent space to match this setting) and compute the corresponding z vectors for each element in the dataset, the loss function (1) would be lower than that achieved with the proposed model. If that is true, the way of solving the problem is helping find a solution that prevents overfitting. \n\nFollowing with the previous point, the authors mention that different initializations were used for the z vectors in the case of CelebA and LSUN. Does this lead to significantly different results? What would happen if the z values were initialized say with the representations learned by a fully trained deterministic autoencoder (with the normalization as in this work)? It would be good to report and discuss these alternatives in terms of loss function and results (e.g. quality of the samples). \n\nIt seems natural to include VAE baselines (using both of the losses in this work). Also, recent works have used ‘perceptual losses’, for instance for building VAE’s capable of generating sharper images:\n\nLamb, A., et al (2016). Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220.\n\nIt would be good to compare these results with those presented in this work. One could argue that VAE’s are also mainly trained via a regularized reconstruction loss. Conversely, the proposed method can be thought as a form of autoencoder. The encoder could be thought to be implicitly defined by the optimization procedure used to recover the latent vectors in GAN's. Using explicit variables for each image would be a way of solving the optimization problem.\n\nIt would be informative to also shows reconstruction and interpolation results for a set of ‘held out’ images. Where the z values would be found as with GANs. This would test the coverage of the method and might be a way of making the comparison with GANs more relevant. \n\nThe works:\n\nNguyen, Anh, et al. \"Synthesizing the preferred inputs for neurons in neural networks via deep generator networks.\" Advances in Neural Information Processing Systems. 2016.\n\nHan, Tian, et al. \"Alternating Back-Propagation for Generator Network.\" AAAI. 2017.\n\nSeems very related.\n",
"We updated the draft with the following changes:\n- Added VAE baselines trained on CelebA 128x128 (figures 2, 4, 6).\n- Added image reconstructions of held-out images (figure 10).",
"\n# R3 “The authors mention in Section 3.3.2 that they leave the careful modeling of Z to future work, however the paper is quite incomplete without this”\n\nAs mentioned above, our primary goal is to tease apart the influence of inductive bias via the convolutional network architecture from the GAN training protocol; as such, we have also restricted ourselves to the simplest sampling methods. Because a GAN is sampled via a simple Gaussian (or uniformly), we do the same here. Of course we can improve sample quality (and log-likelihood) with more sophisticated model for Z, but this does not serve to help understand how a GAN is working. For example, consider figure 8 which we used to show the effects of moving principal components of the Z. We could easily make this into a method for generating new samples.\n\nNote that DCGAN generations trained on faces will also include many “monsters” (even look at the results in the original DCGAN paper). On the other hand, we agree that on the SUN bedrooms, our model produces less convincing generations than DCGAN; but we also think the discrepancy between the results on two datasets (and indeed, the *way* in which our model's samples are less convincing paired with the way GANs fail to reconstruct training samples) is worth publishing.\n\n\n# baselines (R3, R1):\n\nWe agree that VAE's are reasonable to include for comparison, and we will do so. \n\nW.r.t. better GAN training protocols, most of the improvements have dealt with reliability. In our experiments, we trained hundreds of GAN models and picked the one with the best generations for comparison. As our goal is not to claim a SOTA image generation method, but rather to try to understand the factors in the success of a GAN, using simple techniques with with *standardized implementations* (and running them lots of times and picking the best outcomes) is preferable to getting the bleeding edge of reliable GAN training.\n\n\n# GLO / AE loss (R3):\n\nThe loss from our model (with direct optimization of Z) is lower than that from an auto-encoder (although an auto-encoder fine-tuned with direct optimization of Z is comparable with our model). However, we do agree with the intuition that random initialization serves as a kind of regularizer; we consistently see that random initialization leads to better generations.\n\n\n# Reconstruction of held out images\n\nWe agree, and we will do this.",
"First of all, we would like to thank the reviewers for their thoughtful comments. \nThe missing references suggested by reviewer R3 are indeed relevant and we will include them in the discussion of the related work in our revised draft. We would also like to thank reviewer R1 for pointing out problems with writing.\n\n# R1: “The results on SUN are a bit underwhelming”\n# R3: “the image generation results (Figure 4 and 5) are worse than GAN”\n\nThe primary focus of this work is shedding light on the success of GANs, as opposed to demonstrating a SOTA generative model for images. In particular, in this work we focused on DCGAN like architectures, without progressive generation or other more sophisticated setups. Our aim was to understand what part of the success of DCGAN models could be explained by the inductive bias of the architecture, rather than as a result of the GAN training protocol. We have demonstrated that on celeba, the inductive bias is key, and the GAN training protocol is not crucial, as the GAN generations are at best marginally superior to GLO generations. On the other hand, with a DCGAN architecture, the GAN training protocol is important on the bedrooms. We suspect that this is a capacity issue, and that the GAN “solves” the capacity issue by ignoring a large part of the training distribution, as evidenced by the reconstruction results in figure 2 and 3. Even if one does not believe this hypothesis, the discrepancy between the results on the faces and on the bedrooms is interesting as it suggests multiple other avenues for understanding the success of GANs. \n\nIn short: we fully acknowledge (here and in the text of the paper) that our generations are inferior to GAN on bedrooms and not noticeably superior to GAN on celeb; but this does not invalidate the thesis of the paper or its scientific value. \n\n\n# R1: “ Especially since it is a common practice nowadays to generate much higher dimensional images, i.e. 256x256, the results presented in this paper appear weak.”\n\nGenerating 256x256 images with a DCGAN architecture on the datasets we used is still not common. In future work we will building models with more capacity and use more powerful generation protocols, with sample quality as the primary focus.\n\n\n# R1: “the authors mention that different initializations were used for the z vectors in the case of CelebA and LSUN. Does this lead to significantly different results?” \n# R2 “Is it fair to assume from this that the initialization of z during training matters? If so, why”\n\nFor all of the celeb images in the paper, we initialized Z with Gaussian normal vectors projected to the sphere. For the bedrooms, we initialized with PCA. On the faces, we found that initializing with whitened PCA leads to faster convergence, comparable reconstruction, and worse generations; and initializing with the results of an auto-encoder leads to even faster convergence, but still worse generations. On the bedrooms, because for the models described in the paper, we are capacity limited, and because the data set is so large (and so optimization takes time), we used PCA initializations.\n\n"
] | [
4,
6,
6,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_ryj38zWRb",
"iclr_2018_ryj38zWRb",
"iclr_2018_ryj38zWRb",
"iclr_2018_ryj38zWRb",
"iclr_2018_ryj38zWRb",
"iclr_2018_ryj38zWRb"
] |
iclr_2018_B1ZZTfZAW | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN. | rejected-papers | Overall I agree with the assessment of R1 that the paper touches on many interesting issues (deep learning for time series, privacy-respecting ML, simulated-to-real-world adaptation) but does not make a strong contribution to any of these. Especially with respect to the privacy-respecting aspect, there needs to be more analysis showing that the generative procedure does not leak private information (noting R1 and R3’s comments). I appreciate the authors clarifying the focus of the work, and revising the manuscript to respond to the reviews. Overall it’s a good paper on an important topic but I think there are too many issues outstanding for accept at this point. | train | [
"H1q2baOxM",
"SyhqIGYxG",
"Hk6aJkmWM",
"HJwRYDdXG",
"B1nOSPu7z",
"BJ0gSwd7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The authors propose to use synthetic data generated by GANs as a replacement for personally identifiable data in training ML models for privacy-sensitive applications such as medicine. In particular it demonstrates adversarial training of a recurrent generator for an ICU monitoring multidimensional time series, proposes to evaluate such models by the performance (on real data) of supervised classifiers trained on the synthetic data (\"TSTR\"), and empirically analyzes the privacy implications of training and using such a model. \n\nThis paper touches on many interesting issues -- deep/recurrent models of time series, privacy-respecting ML, adaptation from simulated to real-world domains. But it is somewhat unfocused and does not seem make a clear contribution to any of these. \n\nThe recurrent GAN architecture does not appear particularly novel --- the authors note that similar architectures have been used for discrete tasks such language modeling (and fail to note work that uses convolutional or recurrent generators for video prediction, a more relevant continuous task, see e.g. http://carlvondrick.com/tinyvideo/, or autoregressive approaches to deep models of time series, e.g. WaveNet https://arxiv.org/abs/1609.03499) and there is no obvious new architectural innovation. \n\nI also find it difficult to assess whether the proposed model is actually generating reasonable time series. It may be true that \"one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data\" because it could not rule out that case that the model has captured the marginal distribution in each dimension but not joint structure. However producing marginal distributions that look reasonable is at least a *necessary* condition and without seeing those plots it is hard to rule out that the model may be producing highly unrealistic samples. \n\nThe basic privacy paradigm proposed seems to be:\n1. train a GAN using private data\n2. generate new synthetic data, assume this data does not leak private information\n3. train a supervised classifier on the private data\nso that the GAN training-sampling loop basically functions as an anonymization procedure. For this to pan out, we'd need to see that the GAN samples are a) useful for a range of supervised tasks, and b) do not leak private information. But the results in Table 2 show that the TSTR results are quite a lot worse than real data in most cases, and it's not obvious that the small set of tasks evaluated are representative of all tasks people might care about. The attempts to demonstrate empirically that the GAN does not memorize training data aren't particularly convincing; this is an adversarial setting so the fact that a *particular* test doesn't reveal private data doesn't imply that a determined attacker wouldn't succeed. In this vein, the experiments with DP-\u000fSGD are more interesting, although a more direct comparison would be helpful (it is frustrating to flip back and forth between Tables 2 and 3 in an attempt to tease out relative performance) and and it is not clear how the settings (ε \u000f\u000f\u000f= 0.5 and δ ≤ 9.8 × 10−3) were selected or whether they provide a useful level of privacy. That said I agree this is an interesting avenue for future work.\n\nFinally it's worth noting that discarding patients with missing data is unlikely to be innocuous for ICU applications; data are quite often not missing at random (e.g., a patient going into a seizure may dislocate a sensor). 
It appears that the analysis in this paper threw out more than 90% of the patients in their original dataset, which would present serious concerns in using the resulting synthetic data to represent the population at large. One could imagine coding missing data in various ways (e.g. asking the generator to produce a missingness pattern as well as a time series and allowing the discriminator to access only the masked time series, or explicitly building a latent variable model) and some sort of principled approach to missing data seems crucial for meaningful results on this application. ",
"In this paper, the authors propose a recurrent GAN architecture that generates continuous domain sequences. To accomplish this, they use a generator LSTM that takes in a sequence of random noise as well as a sequence of conditonal information and outputs a sequence. The discriminator LSTM takes a sequence (and conditional information) as input and classifies each element of the sequence as real or synthetic -- the entire sequence is then classified by vote. The authors evaluate on several synthetic tasks, as well as an ICU timeseries data task.\n\nOverall, I thought the paper was clearly written and extremely easy to follow. To the best of my knowledge, the method proposed by the authors is novel, and differs from traditional sentence generation (as an example) models because it is intended to produce continuous domain outputs. Furthermore, the story of generating medical training data for public release is an interesting use case for a model like this, particularly since training on synthetic data appears to achieve not competitive but quite reasonable accuracy, even when the model is trained in a differentially private fashion.\n\nMy most important piece of feedback is that I think it would be useful to include a few examples of the eICU time series data, both real and synthetic. This might give a better sense of: (1) how difficult the task is, (2) how much variation there is in the real data from patient to patient, and (3) how much variation we see in the synthetic time series. Are the synthetic time series clearly multimodal, or do they display some of the mode collapse behavior occasionally seen in GANs?\n\nI would additionally like to see a few examples of the time series data at both the 5 minute granularity and the 15 minute granularity. You claim that downsampling the data to 15 minute time steps still captures the relevant dynamics of the data -- is it obvious from the data that variations in the measured variables are not significant over a 5 minute interval? As it stands, this is somewhat an unknown, and should be easy enough to demonstrate.",
"This paper proposes to use RGANs and RCGANS to generate synthetic sequences of actual data. They demonstrate the quality of the sequences on sine waves, MNIST, and ICU telemetry data.\n\nThe authors demonstrate novel approaches for generating real-valued sequences using adversarial training, a train on synthetic, test of real and vice versa method for evaluating GANS, generating synthetic medical time series data, and an empirical privacy analysis. \n\nMajor\n- the medical use case is not motivating. de-identifying the 4 telemetry measures is extremely easy and there is little evidence to show that it is even possible to reidentify individuals using these 4 measures. our institutional review board would certainly allow self-certification of the data (i.e. removing the patient identifiers and publishing the first 4 hours of sequences).\n- the labels selected by the authors for the icu example are to forecast the next 15 minutes and whether a critical value is reached. Please add information about how this critical value was generated. Also it would be very useful to say that a physician was consulted and that the critical values were \"clinically\" useful.\n- the changes in performance of TSTR are large enough that I would have difficulty trusting any experiments using the synthetic data. If I optimized a method using this synthetic data, I would still need to assess the result on real data.\n- In addition it is unclear whether this synthetic process would actually generate results that are clinically useful. The authors certainly make a convincing statement about the internal validity of the method. An externally valid measure would strengthen the results. I'm not quite sure how the authors could externally validate the synthetic data as this would also require generating synthetic outcome measures. I think it would be possible for the synthetic sequence to also generate an outcome measure (i.e. death) based on the first 4 hours of stay.\n\nMinor\n- write in the description for table 1 what task the accuracies correspond.\n\nSummary\nThe authors present methods for generating synthetic sequences. The MNIST example is compelling. However the ICU example has some pitfalls which need to be addressed.",
"Thank you for your feedback and comments.\n\nRegarding your concern about the lack of clear contributions in this work, we would like to clarify that the work is ultimately focused on generating synthetic medical time series data. Achieving this necessitated multiple elements - finding and developing an appropriate architecture and making it work in this domain, finding and developing evaluation techniques for synthetic time series, analysing the privacy implications and testing a differential private training algorithm in combination with RGANs. Any one of these topics could constitute an independent research project, and we have necessarily not addressed each to the fullest extent possible, but it was important to address all components in the pursuit of our objective. Without an evaluation measure, we cannot judge whether the method really works beyond visual assessment. Also with medical data the privacy question immediately pops up and earlier versions of this manuscript were rejected because we didn’t cover differential privacy. It therefore appears necessary to include all the pieces into one manuscript to have a coherent piece of work. We appreciate the pointers to other recurrent generators and have extended the related work section accordingly.\n\nFollowing your suggestion, we have added several figures to an appendix, including comparisons of marginal distributions between the synthetic and real eICU data. We compare both the marginal distributions of each variable at each time point (figure 8) and histograms of each variable ignoring the temporal component (figure 9). As you have already noted, such marginal distributions are still imperfect, but at least by this measure the synthetic and real distributions are roughly similar. The most obvious differences arise from the real data being integer-valued (in the case of SpO2) and our method of scaling the synthetic data back into medical ranges. We thank you for proposing to include these plots, since they are very helpful as a sanity check, and highlighted in particular the weakness in our choice of data scaling during training (now discussed in the appendix).\n\nWe elected to focus on ICU tasks pertaining to early warning of extreme values because these simplified endpoints can contribute to more sophisticated early warning systems, which we work on in collaboration with clinicians in other projects. Developing a realistic, clinically useful system for intensive care is beyond the scope of this work.\n\nRegarding the privacy analysis, we agree that the tests we have done are not all-encompassing, however we intentionally focused on the question of whether or not the generator preferentially generates samples from the training set (assessed with established methods using maximum mean discrepancy), and not whether the samples are robust against any conceivable attack. We felt this question of memorisation would be of broader interest to the GAN field, while allowing us to make some weaker claims about the privacy of the original data.\n\nRegarding the TSTR results in the differentially private setting: to make comparison easier, we have merged tables 3 and table 2 into two sub-tables. We have updated the results in Table 3 (now Table 2b) in light of choosing of less arbitrary cut-off for the acceptable delta value. 
Following a common heuristic (cited in the revised manuscript) we require delta to be below 1/|D| where |D| is the size of the training set, and only consider epochs satisfying this criterion (we show how delta increases for different epsilon values during training in a new Figure 5 in the main body of the paper). This caused minor changes in the TSTR results.\n\nWe are conscious of the fact that discarding patients with missing data may limit the generalisability of the model. In this case, the majority of excluded patients were missing measurements of mean arterial pressure (MAP), which is an invasive measurement. Thus, any model built on this data is restricted to patients with measurements of MAP. However, since those patients with MAP measurements are typically more critical, building an early warning system restricted to such patients is not unreasonable. Also, we don’t claim and in this work we don’t aim that our approach could or should be used on patients outside the cohort of interest. Nevertheless, restricting to cohorts is common practice in medical science.\n\nRegarding accounting for missing data, this is a very interesting direction for this research, and we are looking into it. For the use-case in ML for healthcare, building realistic missingness patterns is particularly relevant. However, for methods developments it is important to separate the different challenges and to tackle one problem at a time. For this work we considered the problem of generating medical time-series without (much) missing data. Medical data harbors many more challenges that needs answers before such a system can be used for the benefit of patients. ",
"Thank you for your feedback. Following your suggestions and those from other reviewers, we have extended the manuscript, including several new figures in an appendix. \n\nAs suggested, in Figure 6 we now show samples from three random real eICU patients, both at their original sampling resolution (approximately 5 minute granularity) and downsampled to 15 minute and 30 minute granularity. The downsampling loses some variation (high frequency fluctuations naturally), but over the 4-5 hour period of interest, at 15 minute granularity the main trends and variations are still visible.\n\nIn Figure 7 we show three random synthetic patients. Regarding mode collapse, based purely on Figures 6 and 7 there is some evidence that the GAN produces SpO2 traces that hover near 99/100 and fluctuate, whereas in the real data patients sometimes stay perfectly at 100, and others degrade more steadily. If the GAN is failing to capture ‘degrading modes’ of patients, we would expect low TSTR scores in the early warning tasks, which is partially observed, but could be explained by other aspects of the synthetic data. We observed some mode collapse of the RCGAN in producing MNIST digits (more readily observable), so we don’t think this architecture is intrinsically resistant to it. However, the eICU data doesn’t exhibit any obviously multimodal behaviour.\n\nTo further study the general properties of the synthetic eICU data, we also produced a set of marginal distributions in Figures 8 and 9, providing further evidence that the synthetic data has captured the main characteristics of the real data.\n\nThese additional figures will give some additional information about the data that may help the reader, but we consider a developing a specifically tuned model for the eICU being beyond the scope of this data. ",
"Thank you for your comments and suggestions.\n\n- We agree that de-identifying these specific variables is not difficult, and unlikely to pose issues for data sharing. However, the approach we have taken in this work is to assume that the data is private (for whatever reason, IRB compliance is one example) and to work from there. We think this is a reasonable approach because the difficulty of data-sharing is a common complaint in machine learning applied to healthcare, and having methods to enable data-sharing independent of the specifics of the data addresses that. It also allows us to avoid answering the question of which specific aspects of ICU time-series need protecting. The answer to that question lies somewhere between ‘nothing’ and ‘everything’, depending on the temporal resolution, the length of the time series, and the number of variables released, the laws of each specific country, and is arguably a full research question on its own. Hence, we decided to focus on the idealised case of commonly-measured variables.\n\n- We agree that for several of the tasks, the TSTR performance with synthetic data is noticeably worse, indicating that the synthetic data isn't capturing all properties of the real data. While the performance may still be acceptable in some settings (for example, where false discoveries are less harmful), the purpose of the evaluation is not to demonstrate that the data is optimal for these specific tasks, but that the data is sufficiently realistic generally speaking. Of course, if real data is available it should be preferred, but in the case where this is not possible, we show that even the reduced performance from synthetic data can be useful.\n\n- The objective of the TSTR method is to assess how useful the synthetic data is. Here, we focused on an early-warning system situation, which is a very relevant task in intensive care medicine. An interesting external validation would be to use the classifier trained on the eICU-derived synthetic data, and to test it on another ICU dataset, such as MIMIC-III, to test cross-hospital generalisation. We will investigate this for the next revision of the manuscript.\n\n- Demonstrating the system in a realistic, clinically useful setting is not the intention of this work. As it is common in the ML field, we choose datasets to demonstrate the capabilities of the new methodologies, without demonstrating the full use in practice. That was done so for the MNIST dataset in this paper and any papers in the past, without demonstrating that the new classifier indeed would make a difference in practice. Implementing such a system goes beyond what can typically be done in a conference paper. Similarly, to illustrate usefulness, we would need to train the system on a larger set of identifiable variables, make sure that it predicts something clinically relevant (which requires involvements of physicians) and then convince an IRB that the generated data by our system does not leak private data. However, this work is about the technical basis to perform such work in the future. We therefore consider the demonstration of clinical usefulness of the system beyond the scope of this work. \n\n- Clarification: the tasks pertain to the patient’s values in the next hour, not 15 minutes. We have added details in the revised manuscript on how the critical thresholds were obtained. Briefly, we looked at the distribution of the data, so no clinician was needed. 
We also cross-referenced with easily-obtained healthy ranges to make sure our ICU population didn’t deviate too strongly from the norm.\n\n- Minor: we have extended the description of Table 1 as requested."
] | [
4,
6,
5,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_B1ZZTfZAW",
"iclr_2018_B1ZZTfZAW",
"iclr_2018_B1ZZTfZAW",
"H1q2baOxM",
"SyhqIGYxG",
"Hk6aJkmWM"
] |
iclr_2018_SkfNU2e0Z | Statestream: A toolbox to explore layerwise-parallel deep neural networks | Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture. The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We layout basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks. | rejected-papers | This paper presents a toolbox for the exploration of layerwise-parallel deep neural networks. The reviewers were consistent in their analysis of this paper: it provided an interesting class of models which warranted further investigation, and that the toolbox would be useful to those who are interested in exploring further. However, there was a lack of convincing examples, and also some concern that Theano (no longer maintained) was the only supported backend. The authors responded to say that they had subsequently incorporated TensorFlow support, they were not able to provide any more examples due to several reasons: “time, pending IP concerns, open technical details, sufficient presentation quality, page restriction.” I agree with the consensus reached by the reviewers. | val | [
"SJ1tsSFgf",
"HkeBFwYgf",
"B1KY-MqgG",
"H1VGzc9zf",
"SJMlG95Mf",
"r1ka-99zM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Quality and clarity\n\nThe paper goes to some length to explain that update order in a neural network matters in the sense that different update orders give different results. While standard CNN like architectures are fine with the layer parallel updating process typically used in standard tools, for recurrent networks and also for networks with connections that skip layers, different update orders may be more natural, but no GPU-accelerated toolboxes exist that support this. The authors provide such a toolbox, statestream, written Theano.\n\nThe paper's structure is reasonably clear, though the text has very poor \"flow\": the english could use a native speaker straightening out the text. For example, a number of times there are phrases like \"previously mentioned\", which is ugly. \n\nMy main issue is with the significance of the work. There are no results in the paper that demonstrate a case where it is useful to apply fully parallel updates. As such, it is hard to see the value of the contribution, also since the toolbox is written in Theano for which support has been discontinued. ",
"This paper introduces a new toolbox for deep neural networks learning and evaluation. The central idea is to include time in the processing of all the units in the network. For this, the authors propose a paradigm switch: form layerwise-sequential networks, where at every time frame the network is evaluated by updating each layer – from bottom to top – sequentially; to layerwise-parallel networks, where all the neurons are updated in parallel. The new paradigm implies that the layer update is achieved by using the stored previous state and the corresponding previous state of the previous layer. This has three consequences. First, every layer now use memory, a condition that already applies for RNNs in layerwise-sequential networks. Second, in order to have a consistent output, the information has to flow in the network for a number of time frames equal to the number of layers. In Neuroscience, this concept is known as reaction time. Third, since the network is not synchronized in terms of the information that is processed in a specific time frame, there are discrepancies w.r.t. the layerwise-sequential networks computation: all the techniques used to train deep NNs have to be reconsidered. \n\nOverall, the concept is interesting and timely especially for the rising field of spiking neural networks or for large and distributed architectures. The paper, however, should probably provide more examples and results in terms of architectures that can been implemented with the toolbox in comparison with other toolboxes. The paper presents a single example in which either the accuracy and the training time are not reported. While I understand that the main result of this work is the toolbox itself, more examples and results would improve the clarity and the implications for such paradigm switch. Another concern comes from the choice to use Theano as back-end, since it's known that it is going to be discontinued. Finally I suggest to improve the clarity and description of Figure 2, which is messy and confusing especially if printed in B&W. \n",
"In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks. They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and differences that can emerge in the results of these two computation strategies.\n\nWhile the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice. In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets.",
"Please see the comment below the first review.",
"Please see the comment below the first review.",
"We thank the reviewers for their feedback on our work. Considering that responses over reviewers greatly overlapped, we only wrote one comment and put it under the first with a brief note below the other two reviews.\n\nOne major concern across reviewers is the lack of compelling examples. We understand and share this concern. Because, we experienced some difficulties in the past explaining the general idea / concept of layerwise parallel networks, we chose to introduce and compare (on a textual level) the two approaches and their implications in some length. On the basis of reviewer's summaries, we think the core idea is well explained (we will try to improve Fig. 1 in the future). Another goal of the paper is to raise awareness inside the community that there are ways to integrate time into networks which are better suited to bridge the gap between spiking and current deep networks than the ones currently used (e.g. rollout or convolution over time). \n\nWhile we where able to integrate tensorflow support for our toolbox (dependence solely on theano was a concern of two reviewers), we cannot provide meaningful additional examples in the scope of this submission for several reasons: time, pending IP concerns, open technical details, sufficient presentation quality, page restriction.\n\nAgain, we want to thank the reviewers for their effort and fair feedback.\n"
] | [
3,
5,
5,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_SkfNU2e0Z",
"iclr_2018_SkfNU2e0Z",
"iclr_2018_SkfNU2e0Z",
"SJ1tsSFgf",
"HkeBFwYgf",
"B1KY-MqgG"
] |
iclr_2018_HyIFzx-0b | BinaryFlex: On-the-Fly Kernel Generation in Binary Convolutional Networks | In this work we present BinaryFlex, a neural network architecture that learns weighting coefficients of predefined orthogonal binary basis instead of the conventional approach of learning directly the convolutional filters. We have demonstrated the feasibility of our approach for complex computer vision datasets such as ImageNet. Our architecture trained on ImageNet is able to achieve top-5 accuracy of 65.7% while being around 2x smaller than binary networks capable of achieving similar accuracy levels. By using deterministic basis, that can be generated on-the-fly very efficiently, our architecture offers a great deal of flexibility in memory footprint when deploying in constrained microcontroller devices. | rejected-papers | The paper proposes using a set of orthogonal bases that combine to form convolution kernels for CNNs leading to a significant reduction of memory usage. The main concerns raised by the reviewers were 1) clarity; 2) issues with writing and presentation of results; 3) some missing experiments. The authors released a revised version of the paper and a short summary of the enhancements. None of the reviewers changed scores following the author response. The reviews were detailed and came from those familiar with CNNs. I have decided to go with reviewer consensus. | train | [
"BkA2-XteM",
"B1wZVecxM",
"B1hW9m5gG",
"HybCNdpQM",
"rk5aXua7M",
"ryiZXupXf",
"r1E4fOp7G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper presents a binary neural network architecture that operated on predefined orthogonal binary basis. The binary filters that are used as basis are generated using Orthogonal Variable Spreading Factor. \nBecause the filters are weighted combinations of predefined basis, only the weights need to be trained and saved. The network is tested on ImageNet and able to achieve top-5 accuracy of 65.9%.\n\nThe paper is clearly written. Few mistakes and questions: \nIs Equation 2 used to measure the quality of the kernel approximation?\n\nIn Figure 2, what is Sparse layer? Is it FlexModule?\n\nIn 4.1 Results Section, the paper states that “On ImageNet, BinaryFlex is compared to BinaryConnect and BinaryNeuralNet; otherwise, BinaryFlex is compared to BinaryConnect and BNN.” It should be Binary Weight Network instead of BinaryNeuralNet. \n\nBased on Results in Table 1, BinaryFlex is able to reduce the model size and provide better accuracy than BinaryConnect (2015). However, the accuracy results are significantly worse than Binary-Weight-Network (2016). Could you comment on that? The ImageNet results are worrying, while BNN (7.8Mb) achieves 79.4%, this BinaryFlex (3.4Mb) achieves 65.7%. The accuracy difference is huge.\n",
"The paper proposes a neural net architecture that uses a predefined orthogonal binary basis to construct the filter weights of the different convolutional layers. Since only the basis weights need to be stored this leads to an exponential reduction in memory. The authors propose to compute the filter weights on the fly in order to tradeoff memory for computation time. Experiments are performed on ImageNet, MNIST, CIFAR datasets with comparisons to BinaryConnect, Binary-weight-networks and studies showing the memory vs time vs accuracy tradeoff.\n\nPositives\n- The idea of using a predefined basis to estimate filter weights in a neural network is novel and leads to significant reduction in memory usage.\n\nNegatives\n- The proposed method seems significantly worse than other binary techniques on ImageNet, CIFAR and SVHN. On Imagenet in particular binary-weight-network is 21% better at only 2x the model size. Would a binary-weight-network of the same model size be better than the proposed approach? It would help to provide results using the proposed method with the same model size as binary-weight-networks on the different datasets. \n- The citation to binary-weight-networks is missing.\n- The descriptions in section 3.3, 3.4 need to be more rigorous. For instance, how many basis weights are needed for a filter of size N. Does N need to be a power of 2 or are extra dimentions from the basis just ignored?\n",
"This paper proposes using a set of orthogonal basis and their combination to represent convolutional kernels. To learn the set of basis, the paper uses an existing algorithm (OSVF)\n\n-- Related Work\n\nRelated work suggests there is redundancy in the number of parameters (According to Denil et al) but the training can be done by learning a subset directly without drop in accuracy. I am not really sure this is strictly correct as many approaches (including Denil et al) suggest the additional parameters are needed to help the optimization process (therefore hard to learn directly a small model).\n\nAs in the clarity point below, please be consistent. Acronyms are not properly defined.\n\n\n-- Method / Clarity\n\nIt is nice to read section 3.1 but at the same time probably redundant as it does not add any value (at least the first two paragraphs including Eq. 1). Reading the text, it is not clear to me why there is a lower number of parameters to be updated. To the best of my understanding so far in the explanation, the number of parameters is potentially the same but represented using a single bit. Rephrasing this section would probably improve readability.\n\nRuntime is potentially reduced but not clear in current hardware.\n\nSection 3.2 is nice as short overview but happens to take more than the actual proposal (so I get lost). \n\nFigures 2 and 3. I am surprissed the FlexModule (a building block of BinaryFlex) is not mentioned in the binaryflex architecture and then, sparse blocks are not defined anywhere. Would be nice to be consistent here. Also note that filter banks among other details are not defined. \n\n\nNow, w and b in eq 2 are meant to be binary, is that correct? The text defines them as real valued so this is confusing. \n\n- From the explanations in the text, it is not clear to me how the basis and the weights are learned (except using backprop). How do we actually generate the filter bank, is this from scratch? or after some pretraining / preloaded model? What is the difference between BinaryFlex models and how do I generate them when replicating these results? It is correct to assume f_k is a pretrained kernel that is going to be approximated?\n\n\n\n\n-- more on clarity\n\n\nI would also appreciate rephrasing some parts of the paper. For instance, the paragraph under section 4.1 is confusing. There is no consistency with namings / acronyms and seems not to be in the right order. Note that the paragraph starts talking about ImageNet and then suggests different schedules for different datasets. The naming for state-of-the-art methods is not consistent. \nAlso note that acronyms are later used (such as BWN) but not defined here. This should be easy to improve.\n\nI guess Figurre 4 needs clarification. What are the axis? Why square and cercles? Same for Figure 5. \n\nOverall, text needs reviewing. There are typos all over the text. I think ImageNet is not a task but classification using ImageNet.\n\n-- Results\n\nI find it hard to follow the results. Section 4.1.1 suggests accuracy is comparable when constraints are relaxed and then only 7% drop in accuracy for a 4.5x model reduction. I have not been able to match these numbers with those in table 2. How do I get to see 7% lower accuracy for BinaryFlex-1.6? \n\nResults suggest a model under 2MB is convenient for using in ARM, is this actually a fact (is it tested in an ARM?) or just guessing? 
This is also a point made in the introduction and I would expect at least an example of running time there (showing the benefit compared to competitors). It is also interesting the fact that in the text, the ARM is said to have 512KB while in the experiments there is no model achieving that lower-bound.\n\nI would like to see an experiment on ImageNet where the proposed BinaryFlex leads to a model of approximately 7.5MB and see what the preformance is for that model (so comparable in size with the state-of-the-art).\n\nI missed details for the exact implementation for the other datsets (as said in the paper). There are modifications that are obscure and the benefits in model size (at least compared to a baseline) are not mentioned. Why?\n\n",
"Thank you for your comments and issues raised in our submission. Many of the issues were raised by all the reviewers, we believe to have addressed them all. \n\nWe have provided a fairer comparison between BinaryFlex and BWN on ImageNet when both architectures have similar model sizes. The difference in accuracy terms is less than 5%. ",
"We appreciate your comments. We have now submitted a new version of our paper and addressed the issues raised during the reviewing period. \n\nThe standard OVSF codes are a power of 2 arrays of 1s and -1s. Using other configurations, e.g. leading to 5x5 kernels, is a possibility that we haven't explored at this stage.",
"Thank you for your comments. We have now submitted a new version of our BinaryFlex paper and addressed the issues raised during the reviewing period. We focused on improving clarity and readability as well as providing more results.",
"Based on the comment provided by the reviewers, we have made de following modifications to our paper:\n\n- The filter generation stage using OVSF orthogonal binary basis was an important stage in our architecture that wasn't properly explained. We have addressed this problem with a diagram and an explanation in Section 3.1. \n\n- Added result comparing the performance of BinaryFlex on ImageNet, CIFAR-10 and MNIST at different model sizes, i.e. different OVSF ratios.\n\n- Introduced a more readable and informative Section 4. Included a table showing all the parameters used in a give BinaryFlex configuration with 3.4 MB of model size.\n\n- Fixed typos, acronyms usage and figures. "
] | [
5,
5,
3,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HyIFzx-0b",
"iclr_2018_HyIFzx-0b",
"iclr_2018_HyIFzx-0b",
"BkA2-XteM",
"B1wZVecxM",
"B1hW9m5gG",
"iclr_2018_HyIFzx-0b"
] |
iclr_2018_SJtfOEn6- | ResBinNet: Residual Binary Neural Network | Recent efforts on training light-weight binary neural networks offer promising execution/memory efficiency. This paper introduces ResBinNet, which is a composition of two interlinked methodologies aiming to address the slow convergence speed and limited accuracy of binary convolutional neural networks. The first method, called residual binarization, learns a multi-level binary representation for the features within a certain neural network layer. The second method, called temperature adjustment, gradually binarizes the weights of a particular layer. The two methods jointly learn a set of soft-binarized parameters that improve the convergence rate and accuracy of binary neural networks. We corroborate the applicability and scalability of ResBinNet by implementing a prototype hardware accelerator. The accelerator is reconfigurable in terms of the numerical precision of the binarized features, offering a trade-off between runtime and inference accuracy. | rejected-papers | R1 and R3’s main concern was that the work was not actually outperforming existing work and therefore its advantages were unclear. R2 brought up several questions on the experiments and asked for clarification with respect to previous work. R3 had several other detailed questions for the authors. The authors did not provide a response. | train | [
"S1X0siBxz",
"HkG6r4Kgf",
"Byi5CZcxz",
"Hy4pBX-kM",
"HyUlYClJM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper proposes a method to quantize weights and activations in neural network during propagations.\n\nThe residual binarization idea is interesting. However, the experimental results are not sufficiently convincing that this method is meaningfully improving over previous methods. Specifically:\n\n1) In table 2, the 1-st level method is not performing better the FINN, while at the higher levels we pay with a much higher latency (about x2-x3 in figure 7) to get slightly better accuracy. \n\n2) Even at the highest level, the proposed method is not performing better than BinaryNet in terms of accuracy. The only gain in this comparison is the number of epochs needed for training. However, this is might be due to the size difference between the models, and not due to the proposed method. \n\n3) In a comment during the review period, the authors mention that \"For Imagenet, we can obtain a top-1 accuracy of 28.4%, 32.6%, and 33.6% for an Alexnet architecture with 1-3 levels of residual binarizations, while the Binarynet baseline achieves a top-1 accuracy of 27.9% with the same architecture.\" However, this is not accurate, BinaryNet actually achieves 41.8% top-1 accuracy for Imagenet with Alexnet (e.g., see BNN on table 2 in Hubara et al.). \n\nMinor comment regarding novelty:\nThe temperature adjustment method sounds somewhat similar to previous method of increasing the slope described \"Adjustable Bounded Rectifiers: Towards Deep Binary Representations\"",
"1. The idea of multi-level binarization is not new. The author may have a check at Section \"Multiple binarizations\" in [a] and Section 3.1 in [b]. The author should also have a discussion on these works.\n\n2. For the second contribution, the authors claim \"Temperature Adjustment\" significantly improves the convergence speed. This argument is not well supported by the experiments.\n\nI prefer to see two plots: one for Binarynet and one for the proposed method. In these plot, testing accuracy v.s. the number of epoch (or time) should be shown. The total number of epochs in Table 2 does not tell anything.\n\n3. Confusing in Table 2. In ResBinNet, why 1-, 2- and 3- level have the same size? Should more bits required by using higher level?\n\n4. While the performance of the 1-bit system is not good, we can get very good results with 2 bits [a, c]. So, please also include [c] in the experimental comparison.\n\n5. The proposed method can be trained end-to-end. However, a comparison with [b], which is a post-processing method, is still needed (see Question 1). \n\n6. Could the authors also validate their proposed method on ImageNet? It is better to include GoogleNet and ResNet as well. \n\n7. Could the authors make tables and figures in the experiment section large? It is hard to read in current size.\n\nReference\n[a] How to Train a Compact Binary Neural Network with High Accuracy. AAAI 2017\n[b] Network Sketching: Exploiting Binary Structure in Deep CNNs. CVPR 2017\n[c] Trained Ternary Quantization. ICLR 2017",
"This paper proposes ResBinNet, with residual binarization, and temperature adjustment. It is a reconfigurable binarization method for neural networks. It improves the convergence rate during training. \n\nI appreciate a lot that the authors were able to validate their idea by building a prototype of an actual hardware accelerator.\n\nI am wondering what are the values of \\gamma’s in the residual binarization after learning? What is its advantage over having only one \\gamma, and then the rest are just 1/2*\\gamma, 1/4* \\gamma, … , etc.? The latter is an important baseline for residual binarization because that corresponds to the widely used fixed point format for real numbers. If you can show some results that residual encoding is better than having {\\gamma, 1/2*\\gamma, 1/4* \\gamma, …, } (which contains only one \\gamma), it would validate the need of using this relatively complex binarization scheme. Otherwise, we can just use the l-bit fixed point multiplications, which is off-the-shelf and already highly optimized in many hardwares. \n\nFor the temperature adjustment, modifying the tanh() scale has already had a long history, for example, http://yann.lecun.com/exdb/publis/pdf/lecun-89.pdf page 7, which is exactly the same form as in this paper. Adjusting the slope during training has also been explored in some straight-through estimator approaches, such as https://arxiv.org/pdf/1609.01704.pdf. In addition, having this residual binarization and adjustable tanh(), is already adding extra computations for training. Could you provide some data for comparing the computations before and after adding residual binarization and temperature adjustment? \n\nThe authors claimed that ResBinNet converges faster during training, and in table 2 it shows that ResBinNet just needs 1/10 of the training epochs needed by BinaryNet. However, I don’t find it very fair. Given that the accuracy RBN gets is much lower than Binary Net, the readers might suspect that maybe the other two models already reach ResBinNet’s accuracy at an earlier training epochs (like epoch 50), and just take all the remaining epochs to reach a higher accuracy. On the other hand, this comparison is not fair for ResBinNet as well. The model size was much larger in BinaryNet than in ResBinNet. So it makes sense to train a BinaryNet or FINN, in the same size, and then compare the training curves. Lastly, in CIFAR-10 1-level case, it didn’t outperform FINN, which has the same size. Given these experiments, I can’t draw any convincing conclusion.\n\nApart from that, There is an error in Figure 7 (b), where the baseline has an accuracy of 80.1% but its corresponding bar is lower than RBN1, which has an accuracy of 76%. ",
"Thank you very much for your comments. Here are the responses to your questions:\n\nQuestion1: We will emphasize the differences in the updated paper. In summary, the differences between our approach and XNOR-net are the following:\n\n- In XNOR-net, each layer utilizes multiple scaling factors for the weights. For example, it uses a separate scaling factor for each column of the weight matrix in a fully connected layer. In our approach, the whole parameter set in one layer has a single Gamma value. This is particularly important to devise efficient hardware accelerators for the corresponding binary CNN.\n\n- Regarding the scaling factors for the activations, XNOR-net again uses multiple values for a certain layer. In our approach, the number of Gamma values for each layer is limited to the number of residual levels which is less than 4 in our experiments.\n\n- In XNOR-net, the scaling factors for the activations are dynamically computed during the execution by taking the average of feature maps. Computing these scaling factors involves a lot of full-precision operations which is in contrast with the whole rational of network binarization. In our approach, the Gamma values are learned in the training phase and they are fixed during the inference. \n\nAll in all, the previous properties of XNOR-net help their design to achieve a higher accuracy, but prevents an efficient implementation of their binary network. To the best of our knowledge, no hardware accelerator has been proposed for XNOR-net. \n\nQuestion 2: The experiments in the paper aim to demonstrate the effectiveness of the approach. We have evaluated the method on Imagenet and will include the results in the revised version. For Imagenet, we can obtain a top-1 accuracy of 28.4%, 32.6%, and 33.6% for an Alexnet architecture with 1-3 levels of residual binarizations, while the Binarynet baseline achieves a top-1 accuracy of 27.9% with the same architecture.\n\nQuestion 3: The output of soft-binarization is actually full-precision; the point is that these full-precision values are so close to the binary values that the accuracy does not degrade significantly after hard-binarization. Note that we retrain the hard-binarized model for only 1 epoch. We will add the soft-binarized network accuracies in Table 2 as suggested.\n\nHope the responses above clarified your questions.",
"Considering the approach of XNOR-net (Rastegari et al. (2016)), what are the differences between your binarization and theirs? In particular, how are the Gamma values discussed in your paper different from the scaling factors used in XNOR-net?\n\nIn addition, I am curious how ResBinNet would compare to other works on large-scale tasks such as\nImagenet classification. Do you have any results in that direction?\n\nUnless I am mistaken, the output of your “soft-binarization” method is a full-precision network. I am\nwondering if the final “hard-binarization” can recover the accuracy of the full-precision model after re-\ntraining. It would be better if you also reported the accuracy of the full-precision model (the soft-\nbinarized model) in Table 2."
] | [
4,
4,
4,
-1,
-1
] | [
4,
4,
4,
-1,
-1
] | [
"iclr_2018_SJtfOEn6-",
"iclr_2018_SJtfOEn6-",
"iclr_2018_SJtfOEn6-",
"HyUlYClJM",
"iclr_2018_SJtfOEn6-"
] |
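
A minimal sketch of the multi-level residual binarization debated in the ResBinNet record above: each level stores one sign pattern scaled by a learned scalar gamma, and later levels encode the residual left by earlier ones. The gamma values, level count, and NumPy framing below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def residual_binarize(x, gammas):
    """Approximate x as sum_i gamma_i * sign(r_{i-1}), where r_i is the residual
    left after subtracting the first i binary levels."""
    residual = x.astype(np.float64)
    encoding = np.zeros_like(residual)
    for gamma in gammas:              # one scalar per level (assumed given here)
        level = gamma * np.sign(residual)
        encoding += level
        residual = residual - level
    return encoding

# Toy usage: 3-level encoding of random activations with illustrative gammas.
x = np.random.randn(4, 4)
approx = residual_binarize(x, gammas=[0.8, 0.4, 0.2])
print(np.abs(x - approx).mean())      # error shrinks as more levels are added
```
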
iclr_2018_HyFaiGbCW | Generalization of Learning using Reservoir Computing | We investigate the methods by which a Reservoir Computing Network (RCN) learns concepts such as 'similar' and 'different' between pairs of images using a small training dataset and generalizes these concepts to previously unseen types of data. Specifically, we show that an RCN trained to identify relationships between image-pairs drawn from a subset of digits from the MNIST database or the depth maps of subset of visual scenes from a moving camera generalizes the learned transformations to images of digits unseen during training or depth maps of different visual scenes. We infer, using Principal Component Analysis, that the high dimensional reservoir states generated from an input image pair with a specific transformation converge over time to a unique relationship. Thus, as opposed to training the entire high dimensional reservoir state, the reservoir only needs to train on these unique relationships, allowing the reservoir to perform well with very few training examples. Thus, generalization of learning to unseen images is interpretable in terms of clustering of the reservoir state onto the attractor corresponding to the transformation in reservoir space. We find that RCNs can identify and generalize linear and non-linear transformations, and combinations of transformations, naturally and be a robust and effective image classifier. Additionally, RCNs perform significantly better than state of the art neural network classification techniques such as deep Siamese Neural Networks (SNNs) in generalization tasks both on the MNIST dataset and more complex depth maps of visual scenes from a moving camera. This work helps bridge the gap between explainable machine learning and biological learning through analogies using small datasets, and points to new directions in the investigation of learning processes. | rejected-papers | Both R1 and R2 suggested that Conceptors (Jaeger, 2014) had previously explored learning transformations in the context of reservoir computing. The authors acknowledged this in their response and added a reference. The main concern raised by the reviewers was lack of novelty and weak experiments (both the MNIST and depth maps were small and artificial). The authors acknowledged that it was mainly a proof of concept type of work. R1 and R2 also rejected the claim of biological plausibility (and this was also acknowledged by the authors). Though the authors have taken great care to respond in detail to each of the reviewers, I agree with the consensus that the paper does not meet the acceptance bar. | train | [
"r1IUXROxz",
"SkMZxYKgz",
"rJNd7A1bz",
"rkisUzUQf",
"r1EuVzUXz",
"BkKkRW8mM",
"BkVgNWIQM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The claimed results of \"combining transformations\" in the context of RC was done in the works of Herbert Jaeger on conceptors [1], which also should be cited here.\n\nThe argument of biological plausibility is not justified. The authors use an echo-state neural network with standard tanh activations, which is as far away from real neuronal signal processing than ordinary RNNs used in the field, with the difference that the recurrent weights are not trained. If the authors want to make the case of biological plausibility, they should use spiking neural networks.\n\nThe experiment on MNIST seems artificial, in particular transforming the image into a time-series and thereby imposing an artificial temporal structure. The assumption that column_i is obtained by information of column_{i-k},..,column_{i-1} is not true for images. To make a point, the authors should use a datasets with related sets of time-series data, e.g EEG or NLP data.\n\nIn total this paper does not have enough novelty for acceptance and the experiments are not well chosen for this kind of work. Also the authors overstate the claim of biological plausibility (just because we don't train the recurrent weights does not make a method biologically plausible).\n\n[1] H. Jaeger (2014): Controlling Recurrent Neural Networks by Conceptors. Jacobs University technical report Nr 31 (195 pages) \n\n",
"The technical part of the paper is a nice study for classification with Echo State Networks. The main novelty here is the task itself, classifying different distortions of MNIST data. The actual technique presented is not original, but an application of the standard ESN approach. The task is interesting but by itself I don't find it convincing enough. Moreover, the biological plausibility that is used as an argument at several places seems to be false advertising in my view. The mere presence of recurrent connections doesn't make the approach more biological plausible, in particular given that ridge regression is used for training of the output weights. If biological plausibility was the goal, a different approach should have been used altogether (e.g., what about local training of connections, unsupervised training, ...). Also there is no argument why biological plausibility is supposed to be an advantage. A small number of training examples would have been a more specific and better motivation, given that the number of \"training\" examples for humans is only discussed qualitatively and without a reference. \n\nThe analysis using the PCs is nice; the works by Jaeger on Conceptors (2014) make also use of the principal components of the reservoir states during presentation of patterns (introduction in https://arxiv.org/abs/1406.2671), so seem like relevant references to me. \n\nIn my view the paper would benefit a lot from more ambitious task (with good results), though even then I would probably miss some originality in the approach.",
"The paper uses an echo state network to learn to classify image transformations (between pairs of images) into one of fives classes. The image data is artificially represented as a time series, and the goal is generalization of classification ability to unseen image pairs. The network dynamics are studied and are claimed to have explanatory power.\n\nThe paper is well-written and easy to follow, but I have concerns about the claims it makes relative to how convincing the results are. The focus is on one simple, and frankly now-overused data set (MNIST). Further, treating MNIST data as a time series is artificial and clunky. Why does the series go from left to right rather than right to left or top to bottom or inside out or something else? How do the results change if the data is \"temporalized\" in some other way?\n\nFor training in Section 2.4, is M the number of columns for a pair of images? It's not clear how pairs are input in parallel--- one after the other? Concatenated? Interleaved columns? Something else? What are k, i, j in computing $\\delta X_k$? Later, in Section 3.2, it says, \"As in section 2.2, $xl(mn)$ is the differential reservoir state value of the $m$th reservoir node at time $n$ for input image $l$\", but nothing like this is discussed in Section 2.2; I'm confused.\n\nThe generalization results on this one simple data set seem pretty good. But, how does this kind of approach do on other kinds of or more complex data? I'm not sure that RC has historically had very good success scaling up to \"real-world\" problems to date.\n\nTable 1 doesn't really say anything. Of course, the diagonals are higher than the off diagonals because these are dot products. True, they are dot products of averages over different inputs (which is why they are less than 1), but still. Also, what Table 1 really seems to say is that the off-diagonals really aren't all that different than the diagonals, and that especially the differences between same and different digits is not very different, suggesting that what is learned is pretty fragile and likely won't generalize to harder problems. I like the idea of using dynamical systems theory to attempt to explain what is going on, but I wonder if it is not being used a bit simplistically or naively.\n\nWhy were the five transform classes chosen? It seems like the \"transforms\" a (same) and e (different) are qualitatively different than transforms b-d (rotated, scaled, blurred). This seems like it should talked about.\n\n\"Thus, we infer, that the reservoir is in fact, simply training these attractors as opposed to training the entire reservoir space.\" What does this mean? The reservoir isn't trained at all in ESNs (which is also stated explicitly for the model presented here)…\n\nFor 3.3, why did were those three classes chosen? Was this experiment tried with other subsets of three classes? Why are results reported on only the one combination of rotated/blurred vs. rotated? Were others tried? If so, what were the results? If not, why? How does the network know when to take more than the highest output (so it can say that two transforms have been applied)? In the case of combination, counting either transform as the correct output kind of seems like cheating a bit—it over states how well the model is doing. 
Also, does the order in which the transforms are applied affect their relative representative strength in the reservoir?\n\nThe comparison with SNNs is kind of interesting, but I'm not sure that I'm (yet) convinced, as there is little detail on how the experiment was performed and what was done (or not) to try to get the SNN to generalize. My suspicion is that with the proper approach, an SNN or similar non-dynamical system could generalize well on these tasks. The need for a dynamical system could be argued to make sense for the camera task, perhaps, as video frames naturally form a time series; however, as already mentioned, for the MNIST data, this is not the case, and the fact that the SNN does not generalize here seems likely due to their under utilization rather than due to an inherent lack of capability.\n\nI don't believe that there is sufficient support for this statement in the conclusion, \"[ML/deep networks] do not work as well for generalization of learning. In generalized learning, RCNs outperform them, due to their ability to function as a dynamical system with ‘memory’.\" First of all, ML is all about generalization, and there are lots and lots and lots of results showing that many ML systems generalize very well on a wide variety of problems, well beyond just classification, in fact. And, I don't think the the paper has convincingly shown that a dynamical system 'memory' is doing something especially useful, given that the main task studied, that of character recognition (or classification of transformation or even transformation itself), does not require such a temporal ability.\n",
"We would like to sincerely thank the reviewers for their extensive and helpful comments. We have carefully examined each issue and have in some cases conducted additional simulations to better explore the concerns raised. We have addressed the concerns by adding and modifying text throughout the manuscript and providing additional results and reasoning in our responses. In particular, we have added Fig 6(a,b) to the manuscript to convince the reader that the RC performs much better than the deep SNN, even with a small dataset. We believe the result is a significantly improved manuscript that more clearly articulates the contributions of our research. The changes we made are described in the responses addressed to each reviewer. \nWe thank all the reviewers for their comments and suggestions. Hopefully, in our responses above to the issues raised by the reviewers, we have (1) clarified the motivations for this work and justified its biological plausibility as having to do more with the way learning is implemented that the network structure; (2) more clearly explained why we believe the dynamical systems perspective is beneficial and applicable even to simple and non-temporal datasets, and how this perspective leads us to believe RC will scale well with real world problems since we are able to learn and generalize relationships with very few training examples; and (3) acknowledged that additional features and/or alternate datasets should be explored in future work. We hope we were able to answer your questions and address some your concerns satisfactorily.\n",
"R1: The actual technique presented is not original, but an application of the standard ESN approach.
\nResponse: The approach is definitely an application of the ESN procedure, however we think it is important to note that we modify the ESN approach in order to study relationships between images in pairs (analogous to a siamese network). Our implementation allows for generalization through analogies, as explained through reservoir dynamics, better than conventional deep SNNs (Fig. 6(a,b)) for image pairs.\n\n
R2: The biological plausibility that is used as an argument at several places seems to be false advertising in my view…The mere presence of recurrent connections doesn't make the approach more biological plausible\nResponse: This is an important point and we agree that having recurrent connections is only marginally more biologically plausible than not having them, especially given that we still use ridge regression. Binzegger et. al. 2004 talks about how (~70%) of the connections in the visual cortex are recurrent. In contrast, the \"feed-forward\" pathway into visual cortex makes up less than 1% of the excitatory synapses(Costa & Martin 2011). \nNumerous connections of Reservoir Computing (RC) principles to architectural and dynamical properties of mammalian brains have been established. RC (or closely related models) provides explanations of why biological brains can carry out accurate computations with an “inaccurate” and noisy physical substrate (Haeusler and Maass 2007), especially accurate timing (Karmarkar & Buonomano 2007); of the way in which visual information is super-imposed and processed in primary visual cortex (Nikolic et. al. 2007); of how cortico-basal pathways support the representation of sequential information; and RC offers a functional interpretation of the cerebellar circuitry (Yamazaki et. al. 2007). A central role is assigned to an RC circuit in a series of models explaining sequential information processing in human and primate brains, most importantly of speech signals (Blanc & Dominey 2003). We have added this to the introduction.\nOur primary argument for biological plausibility, however, is based not RC architecture but on how the learning takes place. There has been some evidence in psychology studies that children learn through analogies (Duit 1991). Additionally, Guirfa et. al. 2001 shows that bees that were trained to respond positively to similar scents, when shown two images, one that is similar to a base image they were shown earlier and one that was different, flew towards the similar image. Thus, they have an inherent understanding of the concept of ‘similarity’ and were able to naturally extend it from one system (olfactory) to another (visual). We attempt to learn in an analogous way with the reservoir, teaching it concepts of ‘similar’, ‘rotated’, ‘different’ etc. such that it naturally extends these concepts from one image set (digits 0-5 or depth maps 1-3) to another (digits 6-9 or depth maps 4-6). \n\nWe realise that our claim about biological plausibility may not have been convincingly conveyed in the manuscript and have modified it to emphasize that our reasons for claiming biological plausibility arise more from learning technique and less from network architecture.\n\nR1: A small number of training examples would have been a more specific and better motivation\nResponse: We agree, and we have modified the introduction to reflect this.\n\nR1:The analysis using the PCs is nice; the works by Jaeger on Conceptors (2014)... seem like relevant references to me. \nResponse: We thank the reviewer for bringing this to our attention and have included the reference.\n\nR1: In my view the paper would benefit a lot from more ambitious task (with good results), though even then I would probably miss some originality in the approach.\nResponse: We thank the reviewer for their insight. We realise that MNIST and depth maps are a very simple datasets. Our goal in this work was to demonstrate proof of concept by sticking to a simple dataset. 
A more complex temporal dataset like videos might have been more convincing, but have other problems. For instance, the dynamical state corresponding to two images that potentially have lots of ‘sub-similarities’ or sub-features (say, similar sub-objects in the image) could converge onto a local minima (corresponding to one of the similarities), and not the global minima that represents the total similarity. In order to resolve this, we would, at the least, have to ensure the dimensionality of the reservoir is large enough and the attractors corresponding to different ‘sub-similarites’ don’t overlap. In conclusion, while we agree that the task could be more ambitious, we think it would’ve drawn away from our interest, which is to demonstrate proof of concept. We find very promising that an RC, as a dynamical system with attractors, is capable of much better explainable generalization compared to a deep SNN, even with a small dataset.\n",
"R2: The claimed results of \"combining transformations\" in the context of RC was done in the works of Herbert Jaeger on conceptors .\n\nResponse: We thank the reviewer for pointing this out and have included the reference.\n\nR2: The argument of biological plausibility is not justified…If the authors want to make the case of biological plausibility, they should use spiking neural networks.\n\nResponse: This is an important point and we agree that having recurrent connections is only marginally more biologically plausible than not having them, especially given that we still use ridge regression. Binzegger et. al. 2004 talks about how (~70%) of the connections in the visual cortex are recurrent. In contrast, the \"feed-forward\" pathway into visual cortex makes up less than 1% of the excitatory synapses(Costa & Martin 2011). \nNumerous connections of Reservoir Computing (RC) principles to architectural and dynamical properties of mammalian brains have been established. RC (or closely related models) provides explanations of why biological brains can carry out accurate computations with an “inaccurate” and noisy physical substrate (Haeusler and Maass 2007), especially accurate timing (Karmarkar & Buonomano 2007); of the way in which visual information is super-imposed and processed in primary visual cortex (Stanley et. al. 1999, Nikolic et. al. 2007); of how cortico-basal pathways support the representation of sequential information; and RC offers a functional interpretation of the cerebellar circuitry (Yamazaki et. al. 2007). A central role is assigned to an RC circuit in a series of models explaining sequential information processing in human and primate brains, most importantly of speech signals (Dominey et. al. 2003, Blanc & Dominey 2003). We have added this to the introduction.\nOur primary argument for biological plausibility, however, is based not RC architecture but on how the learning takes place. There has been some evidence in child psychology studies that children learn through analogies (Duit 1991). Additionally, Guirfa et. al. 2001 shows that bees that were trained to respond positively to similar scents, when shown two images, one that is similar to a base image they were shown earlier and one that was different, flew towards the similar image. Thus, they have an inherent understanding of the concept of ‘similarity’ and were able to naturally extend it from one system (olfactory) to another (visual). We attempt to learn in an analogous way with the reservoir, teaching it concepts of ‘similar’, ‘rotated’, ‘different’ etc. such that it naturally extends these concepts from one image set (digits 0-5 or depth maps 1-3) to another (digits 6-9 or depth maps 4-6). \nWe realize that our claim about biological plausibility may not have been convincingly conveyed in the manuscript and have modified it to emphasize that our reasons for claiming biological plausibility arise more from learning technique and less from network architecture. We hope that our reasoning is satisfactory.\n\nR2: The experiment on MNIST seems artificial, in particular transforming the image into a time-series and thereby imposing an artificial temporal structure. To make a point, the authors should use a datasets with related sets of time-series data, e.g EEG or NLP data.\n\nResponse: While the experiment on images may seem artificial, there are certain advantages with choosing a simple visual dataset that we have outlined below. 
Treating the image as temporal doesn’t change the analysis (there’s a one-to-one corresponding between an image and it’s ‘temporalized’ version) we preferred to stick to simple datasets such as MNIST and depth map from a moving camera to demonstrate that the reservoir does indeed generalize transformations.\nWhile using EEG or NLP data would definitely be more appropriate in some respects, it wouldn’t allow us the same freedom to study relationships such as rotation or scaling in the easy-to-see manner as we do now. A more natural application of this to temporal image data would be to use a video dataset, however, the complexity of the dataset deterred us from using video. For instance, the dynamical state corresponding to two images that potentially have lots of ‘sub-similarities’ or sub-features could converge onto a local minima (corresponding to one of the potentially many similarities), and not the global minima that represents the total similarity. In order to resolve such a problem, we would, at the minimum, have to ensure the dimensionality of the reservoir is large enough and the attractors/regions in reservoir space corresponding to different ‘sub-similarites’ don’t overlap. \nIn conclusion, while we agree that the task could be more ambitious, we think it would’ve drawn away from our interest, which is to demonstrate proof of concept.\nHowever, we hope to explore generalization in more complex datasets, such as similarities in the action domain, in future work. ",
"While the ‘temporalization’ of images may seem artificial, it doesn’t affect the results themselves. There is a one-to-one correspondence between an image and the temporalized version of it. To demonstrate this further, we present results for top to bottom and left to right temporalization.\n\t\t\tFraction Correct\nTop to Bottom \t0.848\nLeft to right\t\t0.842\nSpectral radius = 0.5, reservoir size=1000 nodes, training size=250 pairs.\n\nComplex temporal datasets like video have problems like there are bound to be several visual sub-similarities with potentially overlapping attractors. In conclusion, while we agree that the task could be more ambitious if we used a temporal dataset, we think it would’ve drawn away from our interest, which is to demonstrate proof of concept in an easy-to-interpret manner. We believe that RC’s can scale to real world problems since we don’t require large datasets, and we have an understanding of how they work.\n\n\nSection 2.4 has been modified to answer these questions.\n\nExplanation of Table 1: The reservoir state, can in principle, even for very similar inputs, diverge substantially, since it’s not just linearly mapping of the input. The nodes in RC are represented as a coupled system of first-order equations (one for each node) with time-delayed feedback. This represents our dynamical system. The solution to such a system of equations could be drastically different even for similar inputs. We don’t see, to the best of our knowledge, why the diagonal terms would be naturally expected to be higher, even if they are dot products.\nAll images are visually similar in our dataset to begin with. Hence, the off diagonal terms are expected to be pretty high. However, the consistency with which the reservoir identifies transformations in all our experiments tells us that the reservoir is robust, despite the off diagonal terms being high.\n\n\nRotation, scaling and blurring seemed natural extensions to the fundamental concepts of ‘similar’ and ‘different’ (updated section 2.3).\n\n\nReviewer: ”Thus, we infer, that the reservoir is in fact, simply training these attractors as opposed to training the entire reservoir space.\" What does this mean? \n\nWe have modified section 2.4 to convey the information easily. While the internal reservoir connections aren’t trained, the output weights are. It is these weights that we referr to. Since the reservoir states converge onto 5 different attractors or regions in reservoir space (one for each transformation), we are now training only only the attractors and not the entire reservoir space.\n\n\nTaking the reviewer’s comment into consideration, we define our way of counting fraction correct and count it as correct if the two (or n) transformations in the combination have the two (or n) highest probabilities. This yields the same results as Fig. 5 (a). \nOther subsets of classes were tried (only one representative is included in the paper).\nFor instance, results on a different set of 3 classes, different, scaled and blurred:\n\nTransformation combination tested: Fraction Correct\t\t\nBlurred+Scaled: \t\t\t0.972\t \nBlur+Different\t: \t\t\t0.998\t \t \t\nBlurred only: \t\t\t\t0.943\t\t\t\t\t\t\nspectral radius=0.8, reservoir size=1000. Training digits: 0-5, testing digits: 6-9.\n\nThere are two ways in which the network could be trained to identify more than one transformation: \n1. User specified\n2. 
Thresholding the average probabilities to count only the transformations that have a significant jump in probability from the previous transformation. For instance, in Fig. 5 (a) the two correct transformations have a significant increase in average probability than the incorrect one. However in 5(b), the average probability of very similar and blurred is about the same, and much lower than rotated.\nThe order in which the transformations are applied makes no difference since all transformations are applied to the image prior to being fed into the reservoir.\n\nMore details on the SNN have been included in sec. 3.4. The implementation is a direct extension of the inbuilt as an example in keras, (Hadsell et. al. 2006 ) designed for image pairs. Training is done using contrastive loss over these transformations on a subset of the data and testing is done on the other set. The only parameters we changed/controlled are number of nodes, depth, training data size, and epochs. \nTaking the reviewers comments into consideration, we have added Fig 6 (a,b) that shows performance of the SNN on varying depth, training data size and epochs.\nAs seen in Fig. 6 (a,b) the SNN does indeed generalize much worse than the reservoir on the untrained set of images (digits 6-9) for all parameters. We believe this is mainly due to the lack of ability to exploit the dynamics in the ‘attractor space’.\n\nLastly, we have corrected the conclusion to reflect what we mean: SNN’s don’t perform as well as a dynamical system like RC for generalization as defined through analogies for a small training set.\n"
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HyFaiGbCW",
"iclr_2018_HyFaiGbCW",
"iclr_2018_HyFaiGbCW",
"iclr_2018_HyFaiGbCW",
"SkMZxYKgz",
"r1IUXROxz",
"rJNd7A1bz"
] |
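
For the reservoir-computing record above, the sketch below shows a standard echo-state update (fixed random recurrent weights, tanh activation, spectral-radius rescaling) and the ridge-regression readout the reviews refer to. The sizes, spectral radius, per-column input framing, and seed are assumptions for illustration rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 28, 200                      # e.g. one image column per time step (assumed)

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def reservoir_state(u_seq):
    """u_seq: (T, n_in) input time series; returns the final reservoir state."""
    r = np.zeros(n_res)
    for u in u_seq:
        r = np.tanh(W @ r + W_in @ u)
    return r

def train_readout(states, targets, ridge=1e-2):
    """Ridge-regression readout: W_out = Y^T X (X^T X + ridge * I)^-1."""
    X, Y = np.asarray(states), np.asarray(targets)
    return Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(X.shape[1]))
```
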
iclr_2018_H1vCXOe0b | Interpreting Deep Classification Models With Bayesian Inference | In this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulted interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CI- FAR10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer.
| rejected-papers | The paper proposes a new method for interpreting the hidden units of neural networks by employing an Indian Buffet Process. The reviewers felt that the approach was interesting, but at times hard to follow and more analysis was needed. In particular, it was difficult to glean any advantage of this method over others. The authors did not provide a response to the reviews. | train | [
"rye1W_5lf",
"HJmy1rjlM",
"HyESiU7WG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper intends to interpret a well-trained multi-class classification deep neural network by discovering the core units of one or multiple hidden layers for prediction making. However, these discovered core units are specific to a particular class, which are retained to maintain the deep neural network’s ability to separate that particular class from the other ones. Thus, these non-core units for a particular class could be core units for separating another class from the remaining ones. Consequently, the aggregation of all class-specific core units could include all hidden units of a layer. Therefore, it is hard for me to understand what’s the motivation to identify the core units in a one-vs-remaining manner. At this moment, these identified class-specific core units are useful for neither reducing the size of the network, nor accelerating computation. ",
"Pros\n- The paper proposes a novel formulation of the problem of finding hidden units\n that are crucial in making a neural network come up with a certain output.\n- The method seems to be work well in terms of isolating a few hidden units that\n need to be kept while preserving classification accuracy.\n\nCons\n- Sections 3.1 and 3.2 are hard to understand. There seem to be inconsistencies\n in the notation. For example,\n(1) It would help to clarify whether y^b_n is the prediction score or its\ntransformation into [0, 1]. The usage is inconsistent.\n(2) It is not clear how \"y^b_n can be expressed as \\sum_{k=1}^K z_{nk}f_k(x_n)\"\nin general. This is only true for the penultimate layer, and when y^b_n denotes\nthe input to the output non-linearity. However, this analysis seems to be\napplied for any hidden layer and y^b_n is the output of the non-linearity unit\n(\"The new prediction scores are transformed into a scalar ranging from 0 to 1,\ndenoted as y^b_n.\")\n(3) Section 3.1 denotes the DNN classifier as F(.), but section 3.2 denotes the\nsame classifier as f(.).\n(4) Why is r_n called the \"center\" ? I could not understand in what sense is\nthis the center, and of what ? It seems that the max value has been subtracted\nfrom all the logits into a softmax (which is a fairly standard operation).\n\n- The analysis seems to be about finding neurons that contribute evidence for\n a particular class. This does not address the issue of understanding why the\nnetwork makes a certain prediction for a particular input. Therefore this\napproach will be of limited use.\n\n- The paper should include more analysis of how this method helps interpret the\n actions of the neural net, once the core units have been identified.\nCurrently, the focus seems to be on demonstrating that the classifier\nperformance is maintained as a significant fraction of hidden units are masked.\nHowever, there is not enough analysis on showing whether and how the identified\nhidden units help \"interpret\" the model.\n\nQuality\nThe idea explored in the paper is interesting and the experiments are described\nin enough detail. However, the writing still needs to be polished.\n\nClarity\nThe problem formulation and objective function (Section 3.1) was hard to follow.\n\nOriginality\nThis approach to finding important hidden units is novel.\n\nSignificance\nThe paper addresses an important problem of trying to have more interpretable\nneural networks. However, it only identifies hidden units that are important for\na class, not what are important for any particular input. Moreover, the main\nthesis of the paper is to describe a method that helps interpret neural network\nclassifiers. However, the experiments only focus on identifying important hidden\nunits and fall short of actually providing an interpretation using these hidden\nunits.",
"The paper develops a technique to understand what nodes in a neural network are important\nfor prediction. The approach they develop consists of using an Indian Buffet Process \nto model a binary activation matrix with number of rows equal to the number of examples. \nThe binary variables are estimated by taking a relaxed version of the \nasymptotic MAP objective for this problem. One question from the use of the \nIndian Buffet Process: how do the asymptotics of the feature allocation determine \nthe number of hidden units selected? \n\nOverall, the results didn't warrant the complexity of the method. The results are neat, but \nI couldn't tell why this approach was better than others.\n\nLastly, can you intuitively explain the additivity assumption in the distribution for p(y')"
] | [
3,
5,
3
] | [
3,
3,
4
] | [
"iclr_2018_H1vCXOe0b",
"iclr_2018_H1vCXOe0b",
"iclr_2018_H1vCXOe0b"
] |
iclr_2018_HJ1HFlZAb | Evaluation of generative networks through their data augmentation capacity | Generative networks are known to be difficult to assess. Recent works on generative models, especially on generative adversarial networks, produce nice samples of varied categories of images. But the validation of their quality is highly dependent on the method used. A good generator should generate data which contain meaningful and varied information and that fit the distribution of a dataset. This paper presents a new method to assess a generator. Our approach is based on training a classifier with a mixture of real and generated samples. We train a generative model over a labeled training set, then we use this generative model to sample new data points that we mix with the original training data. This mixture of real and generated data is thus used to train a classifier which is afterwards tested on a given labeled test dataset. We compare this result with the score of the same classifier trained on the real training data mixed with noise. By computing the classifier's accuracy with different ratios of samples from both distributions (real and generated) we are able to estimate if the generator successfully fits and is able to generalize the distribution of the dataset. Our experiments compare the result of different generators from the VAE and GAN framework on MNIST and fashion MNIST dataset. | rejected-papers | Given that the paper proposes a new evaluation scheme for generative models, I agree with the reviewers that it is essential that the paper compare with existing metrics (even if they are imperfect). The choice of datasets was very limited as well, given the nature of the paper. I acknowledge that the authors took care to respond in detail to each of the reviews. | val | [
"HyjUd0Kgf",
"ryYjvicxM",
"B1u5na9lG",
"ryaVoRcfz",
"B1EXiA5Mf",
"By4esCczz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The main idea is to use the accuracy of a classifier trained on synthetic training examples produced by a generative model to define an evaluation metric for the generative model. Specifically, compare the accuracy of a classifier trained on a noise-perturbed version of the real dataset to that of a classifier trained on a mix of real data and synthetic data generated by the model being evaluated. Results are shown on MNIST and Fashion MNIST.\n\nThe paper should discuss the assumptions needed for classifier accuracy to be a good proxy for the quality of a generative model that generated the classifier's training data. It may be the case that even a \"bad\" generative model (according to some other metric) can still result in a classifier that produces reasonable test accuracy. Since a classifier can be a highly nonlinear function, it can potentially ignore many aspects of its input distribution such that even poor approximations (as measured by, say, KL) lead to similar test accuracy as good approximations.\n\nThe sensitivity of the evaluation metric defined in equation 2 to the choice of hyperparameters of the classifier and the metric itself (e.g., alpha) is not evaluated. Is it possible that a different choice of hyperparameters can change the model ranking? Should the hyperparameters be tuned separately for each generative model being evaluated?\n\nThe intuition behind comparing against a classifier trained on a noise-perturbed version of the data is not explained clearly. Why not compare a classifier trained on only (unperturbed) real data to a classifier trained on both real and synthetic data?\n\nEvaluation on two datasets is not sufficient to provide insight into whether the proposed metric is useful. Other datasets such as ImageNet, Cifar10/100, Celeb A, etc., should also be included.",
"The authors propose to evaluate how well generative models fit the training set by analysing their data augmentation capacity, namely the benefit brought by training classifiers on mixtures of real/generated data, compared to training on real data only. Despite the the idea of exploiting generative models to perform data augmentation is interesting, using it as an evaluation metric does not constitute an innovative enough contribution. \n\nIn addition, there is a fundamental matter which the paper does not address: when evaluating a generative model, one should always ask himself what purpose the data is generated for. If the aim is to have realistic samples, a visual turing test is probably the best metric. If instead the purpose is to exploit the generated data for classification, well, in this case an evaluation of the impact of artificial data over training is a good option.\n\nPROS:\nThe idea is interesting. \n\nCONS:\n1. The authors did not relate the proposed evaluation metric to other metrics cited (e.g., the inception score, or a visual turing test, as discussed in the introduction). It would be interesting to understand how the different metrics relate. Moreover, the new metric is introduced with the following motivation “[visual Turing test and Inception Score] do not indicate if the generator collapses to a particular mode of the data distribution”. The mode collapse issue is never discussed elsewhere in the paper. \n\n2. Only two datasets were considered, both extremely simple: generating MNIST digits is nearly a toy task nowadays. Different works on GANs make use of CIFAR-10 and SVHN, since they entail more variability: those two could be a good start. \n\n3. The authors should clarify if the method is specifically designed for GANs and VAEs. If not, section 2.1 should contain several other works (as in Table 1). \n\n4. One of the main statements of the paper “Our approach imposes a high entropy on P(Y) and gives unbiased indicator about entropy of both P(Y|X) and P(X|Y)” is never proved, nor discussed.\n\n5. Equation 2 (the proposed metric) is not convincing: taking the maximum over tau implies training many models with different fractions of generated data, which is expensive. Further, how many tau’s one should evaluate? In order to evaluate a generative model one should test on the generated data only (tau=1) I believe. In the worst case, the generator experiences mode collapse and performs badly. Differently, it can memorize the training data and performs as good as the baseline model. If it does actual data augmentation, it should perform better.\n\n6. The protocol of section 3 looks inconsistent with the aim of the work, which is to evaluate data augmentation capability of generative models. In fact, the limit of training with a fixed dataset is that the model ‘sees’ the data multiple times across epochs with the risk of memorizing. In the proposed protocol, the model ‘sees’ the generated data D_gen (which is fixed before training) multiple time across epochs. This clearly does not allow to fully evaluate the capability of the generative model to generate newer and newer samples with significant variability.\n\n\nMinor: \nSection 2.2 might be more readable it divided in two (exploitation and evaluation). \n",
"The paper proposes a technique for analysis of generative models.\n\nThe main idea is to (1) define a classification task on the underlying data, (2) use the generative model to produce a training set for this classification task, and (3) compare performance on the classification task when training on generated and real training data. \n\nNote that in step (2) is it required to assign class labels to the generated samples. In this paper this is achieved by learning a separate generative model for each class label. \n\nSummary:\n\nI think the proposed technique is useful, but needs to be combined with other techniques to exclude the possibility that model just memorized the training set. To be stronger the paper needs to consider other more realistic tasks from the literature and directly compare to other evaluation protocols. \n\nPositive:\n+ the technique operates directly on the samples from the model. It is not required to compute the likelihood of the test set as for example is needed in the \"perplexity\" measure). This makes the technique applicable for evaluation of a wider class of techniques. \n\n+ I like the result in Fig. 1. There is a clear difference between results by WGAN and by other models. This experiment convinces me that the peroposed analysis by augmentation is a valuable tool. \n\n+ I think the technique is particularly valuable verify that samples are capturing variety of modes in the data. Verifying this via visual inspection is difficult.\n\nNegative: \n\n- I think this metric can be manipulated by memorizing training data, isn't it? The model that reproduces the training set will still achieve good performance at \\tau = 1, and the model that does simple augmentation like small shifts / rotations / scale and contrast changes might even improve over training data alone. So the good performance on the proposed task does not mean that the model generalized over the training dataset.\n\n- I believe that for tasks such as image generating none of the existing models generate samples that would be realistic enough to help in classification. Still some methods produce images that are more realistic than others. I am not sure if the proposed evalaution protocol would be useful for this type of tasks. \n\n- The paper does not propose an actual metric. Is the metric for performance of generative model defined by the best relative improvement over baseline after tuning \\tau as in Tab. 1? Wouldn't it be better to fix \\tau, e.g. \\tau = 1?\n\n- Other datasets / methods and comparison to other metrics. This is perhaps the biggest limitation for me right now. To establish a new comparison method the paper needs to demonstrate it on relevant tasks (e.g. image generation?), and compare to existing metrics (e.g. \"visual inspection\" and \"average log-likelihood\"). \n\n",
"Thanks for the review,\n\nIs the classifier accuracy a good proxy ? The classifier we use is a deep net ( 2 convolutional layer, one dropout, 2 fully connected). We don’t have mathematical proof but the idea is that if the (generated) training data is biased with respect to the (real) testing data, the test error will be large. So being able to train or improve training with generated data empirically indicates that the “fake” data are part of the same manifold than the testing data and cover most of this manifold. Therefore we can assume that the generated data have high variability and good (enough) visual quality. \nHowever for some classifiers, the classification accuracy would not be representative as KNN. We were more thinking of deep net classifier which are harder to train successfully without good training data.\nHowever for some classifiers, the classification accuracy would not be as representative. For example, KNN could have good accuracy by taking advantage of a few good samples while ignoring bad samples. On the contrary, CNN are trained to be able to create representations from all training data and use them for classification. Bad training data will induce learning bad representations and usually bad generalization in classification.\n\nThe hyperparameters can indeed change the ranking, like in any other classification algorithm, they have to be tuned to assess a particular generative model with a particular dataset in order to reach the best possible performance. We did not had time to evaluate the sensitivity of the equation 2.\n\n- Why not compare a classifier trained on only (unperturbed) real data?\nWe did it as, in Figure 1, it corresponds to tau=0. But this comparison against real data is unfair, in the sense that when we have samples from generative model we add some noise, the model will never see twice the same sample. And it’s known that classifiers are more robust when training with perturbed data. \n\n- comparison with noise data augmentation\nThe reason to compare data augmentation (DA) from generative model with classic DA methods was to show that the generative model produce better DA than just random perturbation. It also gives insight on how the metric evaluate simple data augmentation. Therefore the DA introduced by generative models is not only due to a bad reconstruction that would introduce variability in the training data.\n\n- comparison with other classifier / dataset\nWe plan to test other classifier and other dataset to compare the performance. However, making several generative model work on the same dataset is not an easy task. We plan to use LSUN and tiny-imagenet for further experimentation.\n",
"Thanks for your review.\n\nWe assumed that the purpose of the GAN is to be able to fit the distribution of a given dataset, not only to be able to generate some nice realistic samples.\n\n1 . the inception score or a visual turing test alone only address the realistic characteristic of sample not their variability. It could be interesting to compare but it is easy to make a model overfit to make inception and visual turing test good while our method would detect the overfit.\n\n2 . We agree that we used toy example. We experimented cifar10 but did not add the results because we did not achieve to make all the generative models work on it. Some papers present result in cifar10 but the training is very hard to design (looking at the sample is clearly enough to know when a training absolutely does not work). We are planning to also use LSUN and tiny-imagenet.\n\n3 . As the method is based on data only (generate and true), it is designed for any type of generative models. We took GANs and VAEs as example, the goal was to present the idea, but we are not able to experiment all possible generative models in a reasonable amount of time. However, we would be interested by suggestions of other models to compare.\n\n4 . We indeed did not proved it (and we will rephrase the paragraph to explain it is the intuition behind the approach) but we impose high entropy of P(Y) because we sample uniformly the different classes. The indicator is unbiased because it's evaluated on never seen data. We evaluate the entropy of P(X|Y) and P(Y|X) because we need a good variability of the data with not too much uncertainty in P(Y|X) for each class to have a good training of the classifier (we could add the result class by class).\n\n5 . Tau=1 is a difficult setting as it necessitates to be able to fit the whole training distribution. We wanted to add simpler settings where the generative model can show that even if it is not able to fit the whole distribution, it can generalize to some extend. When tau is considered as an hyperparameter, it leave the possibility to Generative model developers to choose a particular value to highlight some behavior of the mode. Besides this, finding the best tau is expensive but assessing visual data variability is a difficult problem.\n\n6 . It's not well explained in the paper, we will update it, but D_gen is generated so that the classifier only uses newly generated samples and none of the generated sample is used multiple times.\n",
"Thanks for your review.\n\nUsing just memorization should normally give a result of 0 (which is not so bad). For a given number of example, a result better than 0 indicates that the generative model is able to achieve data augmentation because it leads to better performance than the baseline. ‘Traditional’ data augmentation can also be compared to DA from generative models. We only included gaussian noise and random dropout to give a simple comparison but our first goal is to compare generative models. It also gives insight on how the metric evaluate simple data augmentation. \n\nKnowing if generative models can really help for classification is not the goal of the paper. The result we provide gives case where it works (in simple setting) but the important thing here is that the ‘metric’ makes it possible to discriminate between generative models. If it’s with negative results, it is still valid.\n\nAs you suggest, comparing with alternatives is obviously important and we will compare with other metrics for a next submission.\n"
] | [
3,
3,
5,
-1,
-1,
-1
] | [
5,
5,
3,
-1,
-1,
-1
] | [
"iclr_2018_HJ1HFlZAb",
"iclr_2018_HJ1HFlZAb",
"iclr_2018_HJ1HFlZAb",
"HyjUd0Kgf",
"ryYjvicxM",
"B1u5na9lG"
] |
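
The author responses above repeatedly refer to scoring a generative model by training a classifier on (partly) generated data and measuring its accuracy on real held-out data, with tau controlling the fraction of generated samples. The sketch below is only an editorial illustration of that protocol, not the authors' code: the `fake_generator` stand-in, the choice of MNIST, the exact layer sizes of the small CNN, and the training schedule are all assumptions made here for illustration; in practice the stand-in would be replaced by samples from the GAN/VAE under evaluation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets, transforms

# Small CNN roughly matching the classifier described above
# (2 conv layers, one dropout layer, 2 fully connected layers).
class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.drop = nn.Dropout(0.5)
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, n_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = self.drop(torch.flatten(x, 1))
        return self.fc2(F.relu(self.fc1(x)))

def fake_generator(labels):
    """Hypothetical stand-in for a class-conditional generative model.
    Replace with samples drawn from the GAN/VAE being evaluated."""
    return torch.rand(labels.shape[0], 1, 28, 28)

def evaluation_score(tau=0.5, epochs=1, n_train=10000, batch=128):
    tfm = transforms.ToTensor()
    train = datasets.MNIST("./data", train=True, download=True, transform=tfm)
    test = datasets.MNIST("./data", train=False, download=True, transform=tfm)

    # Build a training set in which a fraction tau of the images is replaced
    # by generated samples (tau=0 recovers the real-data-only baseline).
    xs, ys = zip(*[train[i] for i in range(n_train)])
    x_mix, y_mix = torch.stack(xs), torch.tensor(ys)
    n_gen = int(tau * n_train)
    if n_gen > 0:
        x_mix[:n_gen] = fake_generator(y_mix[:n_gen])
    loader = DataLoader(TensorDataset(x_mix, y_mix), batch_size=batch, shuffle=True)

    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()

    # The score is the accuracy on real, held-out test data.
    model.eval()
    correct = 0
    with torch.no_grad():
        for xb, yb in DataLoader(test, batch_size=256):
            correct += (model(xb).argmax(1) == yb).sum().item()
    return correct / len(test)

if __name__ == "__main__":
    print("score at tau=0.5:", evaluation_score(tau=0.5))
```
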
iclr_2018_r1RQdCg0W | MACH: Embarrassingly parallel K-class classification in O(dlogK) memory and O(KlogK+dlogK) time, instead of O(Kd) | We present Merged-Averaged Classifiers via Hashing (MACH) for K-classification with large K. Compared to traditional one-vs-all classifiers that require O(Kd) memory and inference cost, MACH only needs O(dlogK) memory and only O(KlogK+dlogK) operations for inference. MACH is the first generic K-classification algorithm, with provable theoretical guarantees, that requires O(logK) memory without any assumption on the relationship between classes. MACH uses universal hashing to reduce classification with a large number of classes to a few independent classification tasks, each with a very small (constant) number of classes. We provide a theoretical quantification of the accuracy-memory tradeoff by showing the first connection between extreme classification and heavy hitters. With MACH we can train the ODP dataset with 100,000 classes and 400,000 features on a single Titan X GPU (12GB), with a classification accuracy of 19.28\%, which is the best-reported accuracy on this dataset. Before this work, the best-performing baseline was a one-vs-all classifier that requires 40 billion parameters (320 GB model size) and achieves 9\% accuracy. In contrast, MACH can achieve 9\% accuracy with a 480x reduction in model size (a mere 0.6GB). With MACH, we also demonstrate complete training of the fine-grained ImageNet dataset (compressed size 104GB), with 21,000 classes, on a single GPU. | rejected-papers | There is a very nice discussion with one of the reviewers on the experiments that I think would need to be battened down in an ideal setting. I'm also a bit surprised by the lack of discussion or comparison with two seemingly highly related papers:
1. T. G. Dietterich and G. Bakiri (1995) Solving Multiclass via Error Correcting Output Codes.
2. Hsu, Kakade, Langford and Zhang (2009) Multi-Label Prediction via Compressed Sensing.
| train | [
"SJB-0Mtlz",
"H1tJH9FxM",
"H1VwD15lG",
"Sk9lYDj7M",
"r1GeQQxmG",
"S1YDsMlXf",
"SJfRHbeQz",
"Skch5gafG",
"HkC5WzafM",
"HJVtSZaMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The manuscript proposes an efficient hashing method, namely MACH, for softmax approximation in the context of large output space, which saves both memory and computation. In particular, the proposed MACH uses 2-universal hashing to randomly group classes, and trains a classifier to predict the group membership. It does this procedure multiple times to reduce the collision and trains a classifier for each run. The final prediction is the average of all classifiers up to some constant bias and multiplier as shown in Eq (2).\n\nThe manuscript is well written and easy to follow. The idea is novel as far as I know. And it saves both training time and prediction time. One unique advantage of the proposed method is that, during inference, the likelihood of a given class can be computed very efficiently without computing the expensive partition function as in traditional softmax and many other softmax variants. Another impressive advantage is that the training and prediction is embarrassingly parallel, and thus can be linearly sped up, which is very practical and rarely seen in other softmax approximation.\n\nThough the results on ODP dataset is very strong, the experiments still leave something to be desired.\n(1) More baselines should be compared. There are lots of softmax variants for dealing with large output space, such as NCE, hierarchical softmax, adaptive softmax (\"Efficient softmax approximation for GPUs\" by Grave et. al), LSH hashing (as cited in the manuscript) and matrix factorization (adding one more hidden layer). The results of MACH would be more significant if comparison to these or some of these baselines can be available.\n(2) More datasets should be evaluated. In this manuscript, only ODP and imagenet are evaluated. However, there are also lots of other datasets available, especially in the area of language modeling, such as one billion word dataset (\"One billion\nword benchmark for measuring progress in statistical language modeling\" by Chelba et. al) and many others.\n(3) Why the experiments only focus on simple logistic regression? With neural network, it could actually save computation and memory. For example, if one more hidden layer with M hidden units is added, then the memory consumption would be M(d+K) rather than Kd. And M could be a much smaller number, such as 512. I guess the accuracy might possibly be improved, though the memory is still linear in K.\n\nMinor issues:\n(1) In Eq (3), it should be P^j_b rather than P^b_j?\n(2) The proof of theorem 1 seems unfinished",
"Thanks to the authors for their feedback.\n==============================\nThe paper presents a method for classification scheme for problems involving large number of classes in multi-class setting. This is related to the theme of extreme classification but the setting is restricted to that of multi-class classification instead of multi-label classification. The training process involves data transformation using R hash functions, and then learning R classifiers. During prediction the probability of a test instance belonging to a class is given by the sum of the probabilities assigned by the R meta-classifiers to the meta-class in the which the given class label falls. The paper demonstrates better results on ODP and Imagenet-21K datasets compared to LOMTree, RecallTree and OAA.\n\nThere are following concerns regarding the paper which don't seem to be adequately addressed :\n \n - The paper seems to propose a method in which two-step trees are being constructed based on random binning of labels, such that the first level has B nodes. It is not intuitively clear why such a method could be better in terms of prediction accuracy than OAA. The authors mention algorithms for training and prediction, and go on to mention that the method performs better than OAA. Also, please refer to point 2 below.\n\n - The paper repeatedly mentions that OAA has O(Kd) storage and prediction complexity. This is however not entirely true due to sparsity of training data, and the model. These statements seem quite misleading especially in the context of text datasets such as ODP. The authors are requested to check the papers [1] and [2], in which it is shown that OAA can perform surprisingly well. Also, exploiting the sparsity in the data/models, actual model sizes for WikiLSHTC-325K from [3] can be reduced from around 900GB to less than 10GB with weight pruning, and sparsity inducing regularizers. It is not clear if the 160GB model size reported for ODP took the above suggestions into considerations, and which kind of regularization was used. Was the solver used from vowpal wabbit or packages such as Liblinear were used for reporting OAA results.\n\n - Lack of empirical comparison - The paper lacks empirical comparisons especially on large-scale multi-class LSHTC-1/2/3 datasets [4] on which many approaches have been proposed. For a fair comparison, the proposed method must be compared against these datasets. It would be important to clarify if the method can be used on multi-label datasets or not, if so, it needs to be evaluated on the XML datasets [3].\n\n[1] PPDSparse - http://www.kdd.org/kdd2017/papers/view/a-parallel-and-primal-dual-sparse-method-for-extreme-classification\n[2] DiSMEC - https://arxiv.org/abs/1609.02521\n[3] http://manikvarma.org/downloads/XC/XMLRepository.html\n[4] http://lshtc.iit.demokritos.gr/LSHTC2_CFP",
"The paper presents a hashing based scheme (MACH) for reducing memory and computation time for K-way classification when K is large. The main idea is to use R hash functions to generate R different datasets/classifiers where the K classes are mapped into a small number of buckets (B). During inference the probabilities from the R classifiers are summed up to obtain the best scoring class. The authors provide theoretical guarantees showing that both memory and computation time become functions of log(K) and thus providing significant speed-up for large scale classification problems. Results are provided on the Imagenet and ODP datasets with comparisons to regular one-vs-all classifiers and tree-based methods for speeding up classification.\n\nPositives\n- The idea of using R hash functions to remap K-way classification into R B-way classification problems is fairly novel and the authors provide sound theoretical arguments showing how the K probabilities can be approximated using the R different problems.\n- The theoritical savings in memory and computation time is fairly significant and results suggest the proposed approach provides a good trade-off between accuracy and resource costs.\n\nNegatives\n- Hierarchical softmax is one of more standard techniques that has been very effective at large-scale classification. The paper does not provide comparisons with this baseline which also reduces computation time to log(K).\n- The provided baselines LOMTree, Recall Tree are missing descriptions/citations. Without this it is hard to judge if these are good baselines to compare with.\n- Figure 1 only shows how accuracy varies as the model parameters are varied. A better graph to include would be a time vs accuracy trade-off for all methods. \n- On the Imagenet dataset the best result using the proposed approach is only 85% of the OAA baseline. Is there any setting where the proposed approach reaches 95% of the baseline accuracy?",
"Alright, we will add a discussion about dismec and ppd-sparse and also make a note about this conversation, and the results over 100 node machine. It seems sparsity regularizer can help with accuracy (known in literature). We note that MACH does not use any such regularization. \n\nTill now, we were still trying to run DisMEC using our machines which we can access (56 cored and 512 RAM) on both the datasets Imagenet and ODP. However, the results seem hopeless so far, and it seems the progress is significantly slower on both of them. It will take a couple of weeks more before we can see the final accuracy (if the machines don't crash). Imagenet seems even worse. It would be a lot of more convenient to report some official accuracy numbers if we can get them. \n\nIt should be noted that we can run MACH on both the dataset over a smaller and significantly cheaper machine (64GB and 1 Titan X with 8 cores) and in substantially lesser time. \n\nWe thank you for bringing these newer comparisons. We think it makes our method even more exciting and bolsters our arguments further.\n \nWe hope that under the light of these discussions you will be more supportive of the paper. \n\nWe will be happy to take into account any other suggestions.\n\nThanks again, we appreciate your efforts, and we find this discussion very useful. \n \n\n\n\n \n\n\n\n",
"Thanks for your feedback. \n\nThe results above were for DiSMEC and not PPDSparse. Since it trains One-versus-rest in a parallel way, the memory requirements on a single node are quite moderate, something around 8GB for training a batch of 1,000 labels on a single node. Each batch of labels is trained on a separate node.\n\nYou are absolutely right that sparsity does not make sense in case of Imagenet and the results for OAA in Figure 1(right) will hold. In both cases OAA seems to be better than MACH.\n\nI completely agree that MACH has computational advantages. However, at the same time, the performance is also lost in the speedup gain, i.e. 25% versus 19%. The impact of MACH would be substantial if similar levels of accuracy at much lower computational cost.\n\nIt is important that authors could verify these, and update the manuscript appropriately thereby mentioning the pros and cons of each scheme, which is missing from the current version.",
"We are really grateful for your efforts and taking time to run dismec. \nCould you send us details of (or link to your codes?). We would like to report this in the paper (and also the comparison with imagenet). \nWe want to know memory usage, running time (approx a day?), how many cores. In our codes, dismec was run on a single 64GB machine, with 8 cores and one titan X. \n\nFurthermore, on imagenet, sparsity won't help. MACH does not need this assumption. So we need to think beyond sparsity. \n\nMACH has all these properties. \n\nThe main argument is that we can run on Titan X (< 12GB working memory) (sequentially run 25 logistic regression of size 32 classes each) in 7.2 hrs. If we run with 25 GPUs in parallel, then it can be done in 17 minutes! Compare this to about a day on a large machine. \n\nWe think the ability to train dataset on GPUs or even single GPU is very impactful. GPU clusters are everywhere and cheap now. If we can train in few hours on easily available single GPU or in few minutes on 25 GPUs (also cheap to have). Then why wait over a day on a high-memory, high-core machines (expensive). Furthermore, with data growing faster than our machines, any work which enhances our capability to train them is beneficial. \n\nWe hope you see the importance of simplicity of our method and how fast we can train with increased parallelism. 17 min on 25 Titan X. The parallelism is trivial. \n\nWe are happy to run any specific benchmark (head-to-head) you have in mind if that could convince you. ",
"Thanks for the update on various points. \n\nI would disagree with some of the responses particularly on sparsity, on the merit of using a single Titan X and hence the projected training time mentioned for DiSMEC on ODP dataset. These are mentioned in detail below. Before that I would like to mention some of my empirical findings.\n\nTo verify my doubts on using DiSMEC on ODP as in the initial review, I was able to run it in a day or so, since I had access to a few hundreds cores. It turns out it gives an accuracy of 24.8% which is about 30% better than MACH, and much better than reported for the OAA performance in earlier papers such as Daume etal [1] which reported 9% on this dataset. \n\nFurthermore, after storing the model in sparse format, the model size was around 3.1GB, instead of 160 GB as mentioned in this and earlier papers. It would be great if the authors could verify these findings if they have access to a moderately sized cluster with a few hundred cores. If the authors then agree, it would be great to mention these in the new version of the paper for future references.\n\n - Sparsity : For text dataset with large number of labels such as in ODP, it is quite common for the model to be sparse. This is because, all the words/features are highly unlikely to be surely present or surely not present for each label/class. Therefore, there is bound to lots of zeros in the model. From an information theoretic view-point as well, it does not make much of a sense for ODP model to be 160GB when the training data is 4GB. Therefore, sparsity is not merely an assumption as an approximation but is a reasonable way to control model complexity and hence the model size.\n\n- Computational resources - The argument of the paper mainly hinges on the usage of a single Titan X. However, it is not clear what is the use-case/scenario in which one wants to train strictly on a single GPU. This needs to be appropriately emphasized and explained. On the other hand, a few hundred/thousands cores is something which typically is available in organizations/institutions which might care about problems of large sizes such as on ODP and Imagenet dataset.\n\nAlso, the authors can download the PPDSparse code from XMC respository or directly from the link http://www.cs.cmu.edu/~eyan/software/AsyncPDSparse.zip\n\n[1] Logarithmic Time One-Against-Some, ICML 2017",
"Thanks for pointing our sparsity and also reference related. We tried compared with [1] and [2] (referred in your comment) as pointed out on ODP dataset, and we are delighted to share the results. We hope these results (below) will convince you that \n1) we are indeed using challenging large-scale dataset. \n2) sparsity is nice to reduce the model size, but training is prohibitively slow. We still have 40 billion parameters to think about, even if we are not storing all of them (See results of dismec) \n3) And our proposal is blazing fast and accurate and above all simple. Afterall what will beat small (32 classes only instead of 100k) logistic regression? \n4) Still, we stress, (to the best of our knowledge) no known method can train ODP dataset on a single Titan X.\n\nWe will add the new results in any future version of the paper. \n\nFirst of all, ODP is a large scale dataset, evident from the fact that both the methods [1] and [2] are either prohibitively slow or goes out or memory.\n\nIt is perfectly fine to have sparse models which will make the final model small in memory. The major hurdle is to train them. We have no idea which weights are sparse. So the only hope to always keep the memory small is some variant of iterative hard thresholding to get rid of small weights repetitively. That is what is done by \nDismec, reference [2]. As expected, this should be very slow. \n\n****** Dismec Details on ODP dataset***********\n\nWe tried running dismec with the recommended control model set. \nControl Model size: Set a ambiguity control hyper-parameter delta (0.01). if a value in weight matrix is between -delta and delta, prune the value because the value carries very little discriminative information of distinguishing one label against another.\n\nRunning time: approx. 3 models / 24h, requires 106 models for ODP dataset, approx. 35 days to finish training on Rush. We haven't finished it yet. \nCompare this to our proposed MACH which takes 7.3 hrs on a single GPU. Afterall, we are training small logistic regression with 32 classes only, its blazing fast. No iterative thresholding, not slow training. \n\nFurthermore, Dismec does not come with probabilistic guarantees of log{K} memory. Sparsity is also a very specific assumption and not always the way to reduce model size. \n\nThe results are not surprising as in [2] sophisticated computers with 300-1000 cores were used. We use a simple machine with a single Titan X. \n\n********** PD-Sparse**************\n\nWe also ran PD-sparse a non-parallel version [1] (we couldn't find the code for [1]), but it should have same memory consumption as [1]. The difference seems regarding parallelization. We again used the ODP dataset with recommended settings. We couldn't run it. Below are details \n\nIt goes out of memory on our 64gb machine. So we tried using another 512GB RAM machine, it failed after consuming 70% of memory. \n\nTo do a cross sanity check, we ran PPD on LSHTC1 (one of the datasets used in the original paper [1]). It went out of memory on our machine (64 GB) but worked on 512 GB RAM Machine with accuracy as expected in [1]. Interestingly, the run consumed more than 343 GB of main memory. This is ten times more than the memory required for storing KD double this dataset with K =12294 and D=347255. \n***********************************\n\nLet us know if you are still not convinced. We are excited about MACH, a really simple, theoretically sound algorithm, for extreme class classification. 
No bells and whistles, no assumption, not even sparsity.",
"First of all, we appreciate your detail comments, spotting typos, and encouragement. \n\n(1) Hierarchical softmax and LSH does not save memory; they make memory worse compared to the vanilla classifier. \nHierarchical softmax and any tree-like structure will lead to more (around twice) memory compared to the vanilla classifier. Every leaf (K leaves) requires memory (for a vector), and hence the total memory is of the order 2k ( K + K/2 + ...) . Of course, running time will be log(K). \nIn theory, LSH requires K^{1 + \\rho} memory ( way more than K or 2K). We still need all the weights. \nMemory is the prime bottleneck for scalability. Note prediction is parallelizable over K (then argmax) even for vanilla models. Thus prediction time is not a major barrier with parallelism.\n\nWe stress, (to the best of our knowledge) no known method can train ODP dataset on a single Titan X with 12GB memory. All other methods will need more than 160gb main memory. The comparison will be trivial, they all will go out of memory. \n\nAlso, see new comparisons with Dismec and PDsparse algorithms (similar) in comment to AnonReviewer1\n\nmatrix factorization (see 3) \n\n2) ODP is similar domain as word2vec. We are not sure, but direct classification accuracy in word2vec does not make sense (does it?), it is usually for word embeddings (or other language models) which need all the parameters as those are the required outputs, not the class label (which is argmax ). \n\n3) What you are mentioning (similar to matrix factorization) is a form of dimensionality reduction from D to M. As mentioned in the paper, this is orthogonal and complementary. We can treat the final layer as the candidate for MACH for more savings. As you said, just dimentionality reduction won't be logarithmic in K by itself. \n\n\nWe thank you again for the encouragement and hope that your opinion will be even more favorable after the discussions mentioned above. \n\n\n",
"Thanks for taking the time in improving our work. \n\n- We DID compare with log(K)running time methods (both LOMTree and RecallTree are log(K) running time not memory). Hierarchical softmax and any tree-like structure will lead to more (around twice) memory compared to the vanilla classifier. Every leaf (K leaves) requires a memory and hence the total memory is of the order 2k ( K + K/2 + ...) . Of course running time will be log(K). \nHowever, as mentioned memory is prime bottleneck in scalability. We still have to update and store those many parameters. \n- Although, we have provided citations. we appreciate you pointing it out. We will make it more explicit at various places.\n- We avoided the time tradeoff because time depends on several factors like parallelism, implementation etc. For example, we can trivially parallelize across R processors. \n- It seems there is a price for approximations on fine-grained imagenet. Even recalltree and LOMTree with twice the memory does worse than MACH. \n\nWe thank you again for the encouragement and hope that your opinion will be even more positive after these discussions. \n"
] | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1RQdCg0W",
"iclr_2018_r1RQdCg0W",
"iclr_2018_r1RQdCg0W",
"r1GeQQxmG",
"S1YDsMlXf",
"SJfRHbeQz",
"Skch5gafG",
"H1tJH9FxM",
"SJB-0Mtlz",
"H1VwD15lG"
] |
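
The record above describes MACH as mapping the K original classes into B buckets with R independent universal hash functions, training R small B-class classifiers, and scoring each class at inference time from its bucket probabilities. The toy sketch below is an illustrative reconstruction, not the authors' implementation: the synthetic Gaussian data, the class/bucket/repetition sizes, the use of random maps in place of an explicit 2-universal hash family, and scikit-learn logistic regression are all assumptions made here. It also simply sums bucket probabilities, which (assuming the paper's estimator is an affine transform of that mean) should leave the argmax unchanged.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: K classes, d features (sizes are placeholders, not the ODP setup).
K, d, B, R, n = 100, 20, 16, 10, 5000
centers = rng.normal(size=(K, d))
y = rng.integers(0, K, size=n)
X = centers[y] + 0.3 * rng.normal(size=(n, d))

# R independent "hash functions": simulated as random maps from the
# K classes into B buckets (the paper uses a 2-universal hash family).
hashes = [rng.integers(0, B, size=K) for _ in range(R)]

# Train R small B-class classifiers on the hashed (meta-class) labels.
models = []
for h in hashes:
    clf = LogisticRegression(max_iter=300)
    clf.fit(X, h[y])
    models.append(clf)

def predict(X_new):
    # Score each original class by accumulating, over the R models, the
    # probability assigned to the bucket that the class hashes into.
    scores = np.zeros((X_new.shape[0], K))
    for clf, h in zip(models, hashes):
        proba = clf.predict_proba(X_new)           # shape (n, #buckets seen)
        cols = np.zeros((X_new.shape[0], B))
        cols[:, clf.classes_] = proba              # align columns to bucket ids
        scores += cols[:, h]                       # gather each class's bucket prob
    return scores.argmax(axis=1)

X_test = centers + 0.3 * rng.normal(size=(K, d))
print("toy accuracy:", (predict(X_test) == np.arange(K)).mean())
```
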
iclr_2018_r1AoGNlC- | Code Synthesis with Priority Queue Training | We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We introduce a novel iterative optimization scheme, where we train an RNN on a dataset of the K best programs from a priority queue of the programs generated so far. Then, we synthesize new programs and add them to the priority queue by sampling from the RNN. We benchmark our algorithm, called priority queue training (PQT), against genetic algorithm and reinforcement learning baselines on a simple but expressive Turing-complete programming language called BF. Our experimental results show that our deceptively simple PQT algorithm significantly outperforms the baselines. By adding a program length penalty to the reward function, we are able to synthesize short, human-readable programs. | rejected-papers | This paper introduces a possibly useful new RL idea (though it is incremental on Liang et al.), but the evaluations don't say much about why it works (when it does), and we didn't find the target application convincing.
| train | [
"B1xdjLPef",
"Sk8MrZ9gM",
"HJK2Mt3lG",
"rJ7ZnliGM",
"SyF8pGsmM",
"Bk0b7t57f",
"HJDzCgsMz",
"Bk9r6xsfM",
"rJXb6gjGG",
"BJWFneiGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper presents an algorithm called Priority Queue Training (PQT) for\nprogram synthesis using an RNN where the RNN is trained in presence of a \nreward signal over the desired program outputs. The RNN learns a policy \nthat generates a sequence of characters in BF conditioned on a prefix of characters.\nThe key idea in PQT is to maintain a buffer of top-K programs at each \ngradient update step, and use them to perform additional supervised learning\nof the policy to guide the policy towards generating higher reward programs.\nPQT is compared against genetic algorithm (GA) and policy gradient (PG) based\napproaches on a set of BF benchmarks, where PQT is able to achieve the most \nnumber of average successes out of 25 runs.\n\nUnlike previous synthesis approaches that use supervision in terms of ground\ntruth programs or outputs, the presented technique only requires a reward \nfunction, which is much more general. It is impressive to see that a simple \ntechnique of using top-K programs to provide additional supervision during \ntraining can outperform strong GA and PG baselines.\n\nIt seems that the PQT approach is dependent on being able to come up with a \nreasonably good initial policy \\pi such that the top-K programs in the priority \nqueue are reasonable, otherwise the supervised signal might make the RNN policy\nworse. How many iterations are needed for PQT to come up with the target programs?\nIt would be interesting to see the curve that plots the statistics about the \nrewards of the top-K programs in the priority queue over the number of iterations.\n\nIs there also some assumption being made here in terms of the class of programs\nthat can be learnt using the current scoring function? For example, can the if/then\nconditional tasks from the AI Programmer paper be learnt using the presented \nscoring function?\n\nI have several questions regarding the evaluation and comparison metrics.\n\nFor the tuning task (remove-char), GA seems to achieve a larger number of\nsuccesses (12) compared to PQT (5) and PG+PQT (1). Is it only because the \nNPE is 5M or is there some other reasons for the large gap?\n\nThe criterion for success in the evaluation is not a standard one for program\nsynthesis. Ideally, the success criterion should be whether a technique is able\nto find any program that is consistent with the examples. Observing the results\nin Table 3, it looks like GA is able to synthesize a program for 3 more benchmarks\n(count-char, copy-reverse, and substring) whereas PQT is only able to solve one\nmore benchmark (cascade) than GA (on which PQT instead seems to overfit).\nIs there some insights into how the PQT approach can be made to solve those \ntasks that GA is able to solve?\n\nHow do the numbers in Table 3 look like when the NPE is 5M and when NPE is \nlarger say 100M?\n\nThere are many learnt programs in Table 4 that do not seem to generalize to new \ninputs. How is a learnt program checked to be correct in the results reported \nin Table 3? It seems the current criterion is just to check for correctness on \na set of 5 to 20 predefined static test cases used during training. For every \nbenchmark, it would be good to separately construct a set of held out test cases \n(possibly of larger lengths) to evaluate the generalization correctness of the \nlearnt programs. 
How would the numbers in Table 3 look with such a correctness \ncriterion of evaluating on a held-out set?\n\nAre there some insights regarding why programs such as divide-2, dedup or echo-thrice\ncan not be learnt by PQT or any other approach? The GA approach in the AI programmer \npaper is able to learn multiply-2 and multiply-3 programs that seems comparable to \nthe complexity of divide-2 task. Can PQT learn multiply-3 program as well?\n",
"This paper focuses on using RNNs to generate straightline computer programs (ie. code strings) using reinforcement learning. The basic setup assumes a setting where we do not have access to input/output samples, but instead only have access to a separate reward function for each desired program that indicates how close a predicted program is to the correct one. This reward function is used to train a separate RNN for each desired program.\n\nThe general space of generating straight-line programs of this form has been explored before, and their main contribution is the use of a priorty queue of highest scoring programs during training. This queue contains the highest scoring programs which have been observed at any point in the training so far, and they consider two different objectives: (1) the standard policy-gradient objective which tries to maximize the expected reward and (2) a supervised learning objective which tries to maximize the average probability of the top-K samples. They show that this priority queue algorithm significantly improves the stability of the resulting synthesis procedure such that when synthesis succeeds at all, it succeeds for most of the random seeds used.\n\nThis is a nice result, but I did not feel as though their algorithm was sufficently different from the algorithm used by Liang et. al. 2017. In Liang et. al. they keep around the best observed program for each input sample. They argue that their work is different from Liang et. al. because they show that they can learn effectively using only objective (2) while completely dropping objective (1). However I'm quite worried these results only apply in very specific setups. It seems that if the policy gradient objective is not used, and there are not K different programs which generate the correct output, then the Top-K objective alone will encourage the model to continue to put equal probability on the programs in the Top-K which do not generate an incorrect output.\n\nI also found the setup itself to be poorly motivated. I was not able to imagine a reasonable setting where we would have access to a reward function of this form without input/output examples. The paper did not provide any such examples, and in their experiments they implement the proposed reward function by assuming access to a set of input/output examples. I feel as though the restriction to the reward function in this case makes the problem uncessarily hard, and does not represent an important use-case. \n\nIn addition I had the following more minor concerns:\n\n1. At the end of section 4.3 the paper is inconsistent about whether the test cases are randomly generated or hand picked, and whether they use 5 test cases for all problems, or sometimes up to 20 test cases. If they are hand picked (and the number of test cases is hand chosen for each problem), then how dependant are the results on an appropriate choice of test cases?\n\n2. They argue that they don't need to separate train and test, but I think it is important to be sure that the generated programs work on test cases that are not a part of the reward function. They say that \"almost always\" the synthesizer does not overfit, but I would have liked them to be clear about whether their reported results include any cases of overfitting (i.e. did they ensure they the final generate program always generalized)? \n\n3. 
It is worth noting that while their technique succeeds much more consistently than the baseline genetic algorithm, the genetic algorithm actually succeeds at least once, on more tasks (19 vs. 17). The success rate is probably a good indicator of whether the technique will scale to more complex problems, but I would have prefered to see this in the results, rather than just hoping it will be true (i.e. by including my complicated problems where the genetic algorithm never succeeds).\n",
"This paper introduces a method for regularizing the REINFORCE algorithm by keeping around a small set of known high-quality samples as part of the sample set when performing stochastic gradient estimation.\n\nI question the value of program synthesis in a language which is not human-readable. Typically, source code as function representation is desirable because it is human-interpretable. Code written in brainfuck is not readable by humans. In the related work, a paper by Nachum et al is criticized for providing a sequence of machine instructions, rather than code in a language. Since code in brainfuck is essentially a sequence of pointer arithmetic operations, and does not include any concept of compositionality or modularity of code (e.g. functions or variables), it is not clear what advantage this representation presents. Neither am I particularly convinced by the benchmark of a GA for generating BF code. None of these programs are particularly complex: most of the examples found in table 4 are quite short, over half of them 16 characters or fewer. 500 million evaluations is a lot. There are no program synthesis examples demonstrating types of functions which perform complex tasks involving e.g. recursion, such as sorting operations.\n\nThere is also an odd attitude in the writing of this paper, reflected in the excerpt from the first paragraph describing that traditional approaches to program synthesis “… typically do not make use of machine learning and therefore require domain specific knowledge about the programming languages and hand-crafted heuristics to speed up the underlying combinatorial search. To create more generic programming tools without much domain specific knowledge …”. Why is this a goal? What is learned by restricting models to be unaware of obviously available domain-specific knowledge? \n\nAll this said, the priority queue training presented here for reinforcement learning with sparse rewards is interesting, and appears to significantly improve the quality of results from a naive policy gradient approach. It would be nice to provide some sort of analysis of it, even an empirical one. For example, how frequently are the entries in the queue updated? Is this consistent over training time? How was the decision of K=10 reached? Is a separate queue per distributed training instance a choice made for implementation reasons, or because it provides helpful additional “regularization”? While the paper does demonstrate that PQT is helpful on this very particular task, it makes very little effort to investigate *why* it is helpful, or whether it will usefully generalize to other domains.\n\nSome analysis, perhaps even on just a small toy problem, of e.g. the effect of the PQT on the variance of the gradient estimates produced by REINFORCE, would go a long way towards convincing a skeptical reader of the value of this approach. It would also help clarify under what situations one should or should not use this. Any insight into how one should best set the lambda hyperparameters would also be very appreciated.",
"We thank all reviewers for their valuable feedback. To address the reviewers’ comments, we have revised the manuscript.\n\nHere is a summary of the changes:\n\n1) We update Table 3 to include eval results on a held-out test dataset (1000 test cases per task). This should give readers an idea of how well the synthesized programs generalize to each task.\n2) We soften our claim on novelty of PQT and we give more credit to prior work, such as Liang et al 2017.\n3) We also update Table 4 to highlight which code strings were observed to overfit the test cases via manual inspection.\n4) In Section 3.2 we note that we use separate queues per worker for ease of implementation.\n5) We fix our description of the test cases at the bottom of the section titled \"Other model and environment choices.\" We also note differences between the tuning and eval variants of the \"reverse\" and \"remove-char\" tasks.\n",
"Thank you for your additional feedback. Regarding your remaining concerns:\n\n1) We updated the paper and added results on program generalization to Table 3. Specifically, we reported success rates on 1000 held-out test cases for each task. We previously believed that manual inspection would be impractical because all programs in Table 3 are of length 100 and they are fairly complex. We decided that adding held out eval cases to each task is a better way to measure generalization, and we ran all of the synthesized code solutions on these additional test cases.\n\n2) We agree with the reviewer that learning on input/output examples is simpler and more natural. We decided in this paper to write about single-task code synthesis because we wanted to focus on making that as good as possible, before moving on to the more complex multi-task scenario. It is not apparent to us how a single-task learning method can make use of test cases, since these test cases are static throughout training of the model.\n\nThough we use a shaped reward function, it is based on a Hamming distance function. We hope that it is generic enough to be usable on a wide array of coding tasks, and simple enough for less skilled users. Any set of input/output examples can be fed to this distance to create a proper reward function.",
"Thanks for your response, and your fixes to the paper. I still have a couple of concerns, however. (1) Why did you only update the results in Table 4 with manual inspection, and not the results in Table 3? Presumably the number of different programs generated is not so huge that manual inspection is unreasonable? (2) Unfortunately, I don't find your reward function examples particularly convincing. I agree, that there are certain particular problems where it may be reasonable to write a reward shaped specification of this form, but in most cases I think it would be much simpler to provide a few input/output examples, rather than to write a reward shaped spec. Even in the case of sorting a list, providing input/output examples is pretty simple and natural and possibly easier for many less skilled users than coming up with the specification that you describe.\n",
"Regarding the reviewer's comment, \"I was not able to imagine a reasonable setting where we would have access to a reward function of this form without input/output examples.\" We believe that for any coding task where there exists an algorithm for checking a solution, that algorithm can be used to compute reward on a test input. We take the reviewer's statement, \"reward function of this form,\" to mean a shaped reward function, i.e. where there is some quantitative notion of \"goodness\" for incorrect code outputs, so that reward gets larger as the output gets closer to the correct output. We provide two such examples:\n\nExample 1: Consider the task of synthesizing code for the well known traveling salesman problem (TSP). One could naively construct a shaped reward function that takes only a set of test inputs and a candidate code string. Since the goal of TSP is to produce the shortest path given a set of vertices, the reward for each test input (set of vertices) can just be the negative of the path length returned by the candidate code, with penalties for invalid paths (paths that do not hit every city exactly once). The total reward is the sum of rewards for each test input. Though it is likely not feasible to synthesize code for the TSP problem in BF, our experimental setup is general enough to allow non-deterministic polynomial time coding problems to be considered.\n\nExample 2: Consider the task of synthesizing code to sort lists in ascending order. A simple way to compute shaped reward given a test input and an output emitted by a candidate program, is to count the number of adjacent pairs which are in ascending order in the output list, and subtract off penalties for elements which were not contained in the input or are missing from the output.",
"Thank you for your review. We address your key concerns:\n\nThe reviewer argues that PQT depends on a good initial policy. We agree that PQT depends on the initial policy having appropriate bias so that it is able to sample programs with non-zero reward, but genetic algorithms and policy gradient methods require this initial bias as well. All the search methods we consider in our paper need to be able to bootstrap from no information, and rely on stumbling upon code strings with non-trivial reward in the initial stages. We also intentionally designed the our coding environment such that these search methods are able to bootstrap themselves, by making it so that all possible program strings can be executed and receive reward.\n\nThe reviewer is concerned that strong assumptions are made with the current scoring function. We agree that our scoring function does determine what coding tasks are solvable in our setup. We see this as an unavoidable drawback to the problem of code synthesis under a reward function. A truly unbiased reward function would be too sparse and make the search infeasible. In shaping our reward function, there is no avoiding building in bias for certain types of problems. We hope that future work can be done in removing the need to shape the reward function.\n\nThe reviewer argues that GA appears to solve more tasks than PQT. We disagree, and as we stated in our response to AnonReviewer1, we do not feel that this difference (in which tasks are solved) between GA and PQT is statistically significant. Since these results have high variance, that motivated our decision to compare success rates instead.\n\nRegarding other concerns:\n\nThe reviewer asks why PQT does so poorly in tuning on the remove-char task, while doing much better in eval. As the reviewer points out, the lower NPE is one reason. Another reason is that the test cases we use for the remove-char task in tuning are not the same as the ones used in eval. We update the paper to state these differences.\n\nThe reviewer asks “How do the numbers in Table 3 look like when the NPE is 5M and when NPE is larger say 100M?” We did some initial experiments with NPE of 5M, and did not observe any significant difference over 20M. We imagine that lower NPE would reduce success rates overall. We believe that 20M is approaching the upper bound on what is reasonable in terms of compute and time. For reproducibility, we did not want to use larger NPEs.\n\nThe reviewer asks \"How is a learnt program checked to be correct in the results reported \nin Table 3?\" We do not check generalization of programs in Table 3. AnonReviewer1 also asks why we do not have held out test cases to test generalization, and we admit that to be an oversight in our experimental setup. We update Table 4 in the paper to highlight which tasks were observed to overfit via manual inspection. \n\nThe reviewer asks if PQT can learn multiply-3, which was a task in the AI programmer paper. We decided that multiply-N tasks were too easy, but we did not verify this.\nFor example, here are solutions to multiply-2, multiply-3, and multiply-4:\n,[->++<]>.\n,[->+++<]>.\n,[->++++<]>.\nWe feel these are fairly short and contain just a single loop. We already included a few simple tasks and didn't want to add more. We also did not include the if/then task for the same reason.",
"Thank you for your review. We address your key concerns:\n\nRegarding the novelty of our method: After considering the reviewer's concerns, we are happy to soften the claim on novelty. We have updated the paper to give more credit to prior work, such as Liang et al 2017. However, we believe there are some key differences between our method and Liang et al 2017 which make PQT interesting. Mainly, our method does not use beam search, and we do not take any input into the RNN. Additionally, even though they are similar, it is important that our empirical evidence suggests that topk training without reinforcement learning is good enough. This further simplifies program synthesis and potentially has implications to other areas in reinforcement learning. \n\nThe reviewer is concerned that our results only apply in very specific setups, because \"the Top-K objective alone will encourage the model to continue to put equal probability on the programs in the Top-K [buffer] which do not generate [a correct] output.\" We agree that putting equal weight on all top-K solutions may be problematic in some situations, but we also believe there are improvements that can be made to PQT which remove the issue. For instance, one could imagine sampling from the top-K buffer with probability proportional to reward (or some transformation on rewards to make all sampling weights positive). We show that PQT as presented in the paper is a viable method, and leave improvements to future work.\n\nRegarding the problem setup is being too restrictive: We address this in our response to AnonReviewer3. Though we agree that hiding input/output examples from the code synthesizer makes synthesis harder, we feel that the problem setup becomes more elegant, as it reduces to a search problem over a reward function. We would like to stress that our experimental setup is motivated by our goal to simplify the problem in order to isolate aspects of code synthesis that are fundamentally hard, and to show that general methods are viable for solving this problem.\n\nRegarding other concerns:\n\n1) This reviewer's confusion is due to a mistake in the paper. We fix the language and hope that clears up confusion around choice of test cases.\n\n2) The reviewer comments “They argue that they don't need to separate train and test...”\nWe agree that having held out eval cases would have been better, and we admit that to be an oversight in our design of the experimental setup. We update Table 4 in the paper to highlight which tasks were observed to overfit via manual inspection. We also note the reviewer's confusion around the sentence \"Almost always the synthesizer does not overfit.\" We agree that this language was imprecise and unhelpful to the reader, and we remove it from the paper.\n\n3) Regarding the metric for success: We chose to compare success rate across tasks, rather than absolute number of tasks solved, because the latter has higher variance. Two of the tasks which GA solves over PQT, copy-reverse and substring, have a success of 1 out of 25. We feel these results are not statistically significant enough to draw conclusions, since it is always possible that PQT would also achieve a small amount of successes on these tasks given more runs. 
As for the cases where GA clearly solves a task over PQT (and vise-versa), we also feel these differences are not significant enough to draw conclusions, as this only happens once for each method (count-char for GA and cascade for PQT).\n\nAdditionally the reviewer comments “The success rate is probably a good indicator of whether the technique will scale to more complex problems, but I would have prefered to see this in the results, rather than just hoping it will be true (i.e. by including my complicated problems where the genetic algorithm never succeeds).” We agree that there is uncertainty around whether success rate is a good indicator of whether the technique will scale to more complex problems. However, we were not able to come up with adequate way to measure that. Including complicated problems where the genetic algorithm never succeeds, as the reviewer suggests, implies having to find tasks where specifically GA fails while PQT succeedes. We feel this is the same as picking data which supports our conclusion, and would not be a scientific way to choose tasks. We took care to select the tasks in Table 3 before knowing how GA and PQT will perform on them.",
"Thank you for your review. We address your key concerns:\n\nThe reviewer is concerned with the value of generating code in BF, saying that it is not human-readable. We first want to point out that program synthesis is an important task by itself, without considering human readability. For example, having a way to reliably do program synthesis would help with algorithm induction problems (training a model to carry out an algorithm) where generalization past training domain is still an issue. Furthermore, we want the experimental setup we introduce in our paper to serve as MNIST for program synthesis. We believe that a method which can code in C++ or Python should at least be able to write code for BF. By starting with BF, we hope to remove many of the complexities of higher level languages while focusing on a core hard problem in the code synthesis space. We also want to note, that we do synthesize some programs which we consider to be human readable (see Table 4), by adding program length bonus to the reward. Though BF in general may be difficult to read, that does not mean code written in BF is useless.\n\nThe reviewer is not convinced that domain-specific knowledge is a limitation. We agree that the reviewer is right in saying that we formulate our code synthesis problem in a very restrictive way that does not take advantage of task information or domain-specific knowledge. In an applied setting, like code completion, it would be in the experimenters' best interests to leverage all available knowledge and tools to solve the problem. In our case, however, our goal is to simplify the problem in order to isolate aspects of code synthesis that are fundamentally hard. We also want to show that general methods are viable for solving this problem, in the hope that they can be more easily adapted to any programming language, and might even benefit other applications of RL.\n \nThe reviewer is concerned with the lack of analysis of why and how PQT works. We agree that it would be better to have included analysis about why and how PQT works. However, we wanted to keep the focus of the paper on the experimental setup and the comparison between methods. Understanding PQT and its effectiveness in other settings, as well as its benefits to REINFORCE, we leave to follow-up work.\n\nRegarding other concerns:\n\nThe reviewer asks, \"How was the decision of K=10 reached?\" We would like to note that we say the following in the paper: \"In early experiments we found 10 is a nice compromise between a very small queue which is too easy for the RNN to memorize, and a large queue which can dilute the training pool with bad programs.\" We also would like to add that we did not tune K, as the increase in hyperparameter search space would make tuning prohibitively expensive. So the choice of K here was more of an intuitive guess.\n\nThe reviewer asks, \"Is a separate queue per distributed training instance a choice made for implementation reasons, or because it provides helpful additional regularization?\" We did indeed use a separate queue per distributed training instance to make the implementation easier. We update the paper to say that we use separate queues for ease of implementation.\n\nThe reviewer comments, \"Any insight into how one should best set the lambda hyperparameters would also be very appreciated.\" In the paper we discuss using grid search to tune hyperparameters, including lambda, and we give the search space we used. 
As with hyperparameters in many machine learning models, picking the correct values is a very difficult problem, and using standard hyperparameter tuning methods serves as a good first approximation."
] | [
6,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1AoGNlC-",
"iclr_2018_r1AoGNlC-",
"iclr_2018_r1AoGNlC-",
"iclr_2018_r1AoGNlC-",
"Bk0b7t57f",
"HJDzCgsMz",
"rJXb6gjGG",
"B1xdjLPef",
"Sk8MrZ9gM",
"HJK2Mt3lG"
] |
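
The PQT idea summarized in the record above (keep a buffer of the K highest-reward programs found so far and train the sampler to imitate them) can be illustrated without an RNN or a BF interpreter. The sketch below is only a toy stand-in chosen here for illustration: the per-position categorical "policy", the Hamming-distance reward against a hidden target string, and all constants are assumptions, whereas the paper samples BF programs from an RNN, scores them by executing them on test cases, and takes gradient steps on the RNN's log-likelihood of the queue contents.

```python
import heapq
import random

random.seed(0)

ALPHABET = list("+-<>[].,")   # BF's eight tokens
TARGET = "+[>,.<]"            # hidden "solution"; reward below is a toy surrogate
LENGTH = len(TARGET)
K = 10                        # priority-queue size
SMOOTH = 0.5                  # Laplace smoothing when refitting the policy

def reward(program):
    # Toy stand-in for running a program on test cases:
    # negative Hamming distance to a hidden target string.
    return -sum(a != b for a, b in zip(program, TARGET))

def sample(policy):
    # Draw one token per position from the current categorical policy.
    return "".join(random.choices(ALPHABET, weights=policy[i])[0]
                   for i in range(LENGTH))

# Start from a uniform per-position categorical policy (stand-in for the RNN).
policy = [[1.0] * len(ALPHABET) for _ in range(LENGTH)]
queue = []                    # min-heap of (reward, program)

for step in range(2000):
    prog = sample(policy)
    item = (reward(prog), prog)
    if item not in queue:
        heapq.heappush(queue, item)
        if len(queue) > K:
            heapq.heappop(queue)   # keep only the K best programs seen so far

    # "PQT" update: refit the policy by maximum likelihood on the queue,
    # i.e. smoothed per-position token counts over the buffered programs.
    counts = [[SMOOTH] * len(ALPHABET) for _ in range(LENGTH)]
    for _, p in queue:
        for i, ch in enumerate(p):
            counts[i][ALPHABET.index(ch)] += 1
    policy = counts

best = max(queue)
print("best reward:", best[0], "program:", best[1])
```
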
iclr_2018_r154_g-Rb | Composable Planning with Attributes | The tasks that an agent will need to solve often aren’t known during training. However, if the agent knows which properties of the environment we consider important, then after learning how its actions affect those properties the agent may be able to use this knowledge to solve complex tasks without training specifically for them. Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest. We propose a model that learns a policy for transitioning between “nearby” sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan. We show in grid-world games and 3D block stacking that our model is able to generalize to longer, more complex tasks at test time even when it only sees short, simple tasks at train time.
 | rejected-papers | Overall the reviewers appear to like the ideas in this paper, though there is some disagreement about novelty (I agree with the reviewer who believes that the top-level search can very easily be interpreted as an MDP, making this very similar to SMDPs). The reviewers generally felt that the experimental results need to be compared more closely with some existing techniques, even if they're not exactly for the same setting. | train | [
"S1bTbEyrG",
"H1G5f2A4G",
"rkxi5qA4G",
"HJhTjeREz",
"Hk10FQqef",
"r1cviMjgf",
"B1CBt8ilG",
"r1i-SlcQG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"I'm not sure that's quite fair to the authors, as the paper you linked to was only published on arXiv about three weeks before the ICLR deadline; I would consider that concurrent work.",
"- The paper needs to be better contextualized with prior work. As other reviewers agree, the connections to MDPs and semi-MDPs is not really well crafted and compared. \n\n- As I mentioned earlier, the paper is promising and I wish to see it develop further. One productive avenue would be to scale the experimental setup in terms of visual richness or to enforce solving new stack configurations with one or few shot trial-by-error learning",
"I just think the current work is not significantly novel, given there are a few papers recently coming out with similar ideas at such as \nhttps://arxiv.org/pdf/1710.00459.pdf \nwhich also maps states to attributes (handcrafted) and solve hierarchical planning problems.\n\n",
"After reading the other reviewers' comments and the authors' response, I still think this is a good paper and is worth publishing. First, I think the point about discovering attributes is important, but that it is out of scope for the present paper. Second, I agree the proposed framework is very similar to options*, but I think this ok: the novelty isn't so much in the framework itself but in that this is a very nice instantiation of it and that it demonstrates the efficacy of the compositionality afforded by options in more complex domains like block stacking.\n\n* I have to disagree with the authors when they say that their framework is different from options just because they are not doing RL. A MDP over options does not necessarily need to be solved with RL; if the transition function is known then you can use planning to solve the MDP which is exactly what the authors do here with Dijkstra's algorithm. I realize there is no explicit reward, but that doesn't mean the reward isn't implicit in the goal state. What's interesting about the present paper is that the agent has to also learn the high-level transition function through experience.",
"This paper proposed a method that enables hierarchical planning. Specifically, given human defined attributes, it learns a graph of attribute transitions. Given a test task and its target set of attributes and a current state, it infers the attribute of current state and search over paths through attribute spaces to get a high-level plan, and then use its low level policy to execute the plan. Based on the relation (transition) of attributes, new and more complex tasks at test time can be solved compositionally. The proposed method is indeed technically sound and have some distinctions to other existing methods in literature, however, the novelty of this work does not seem to be significant as I will elaborate more.\n\n1.\tIn this work, the attributes are provided by human, which certainty can incur a significant amount of effort hence limit is generalizable of the proposed method. It might be more appealing if automatic attributes discovery can be incorporated into current framework to remove such restriction as well as better justify the assumption underlying the proposed method is that “the cost of the supervision required to identify the important features of an environment, or to describe the space of possible tasks within it, is not too expensive” \n\n2.\tthe definition of ignorabilty a little confusing. “transition between \\rho_i and \\rho_j should only depend on the attributes \\rho, not exact state” should be written.\n\n3.\tWhen evaluating the model, the authors mentioned that “We recompute the path at intermediate steps in case we reach an attribute set we don’t expect”. What does “attribute set we don’t expect” mean? Do you mean the attribute never seen before?\n\n4.\tThe author should give better account of the relation between the proposed method to other frameworks. The authors mentioned that the proposed method can be placed into the framework of option. However, the option frame is mainly dealing with temporal abstraction, whereas this work seems have much more to do state abstraction. \n\n5.\tThe current work is limited to dealing with problems with two levels of hierarchy \n \n6. Minor comments \nwhich properties of the environment we consider important -> which properties of the environment are important\na model that learns -> a method \nfrom the set of goals rho_j -> from the set of goals, \nGVF is undefined\n",
"- This paper proposes a framework where the agent has access to a set of user defined attributes parametrizing features of interest. The agent learns a policy for transitioning between similar sets of attributes and given a test task, it can repurpose its attributes to reactively plan a policy to achieve the task. A grid world and tele-kinetically operated block stacking task is used to demonstrate the idea\n\n- This framework is exactly the same as semi-MDPs (Precup, Sutton) and its several generalizations to function approximators as cited in the paper. The authors claim that the novelty is in using the framework for test generalization. \n\n- So the main burden lies on experiments. I do not believe that the experiments alone demonstrate anything substantially new about semi-MDPs even within the deep RL setup. There is a lot of new vocabulary (e.g. sets of attributes) that is introduced, but it dosen't really add a new dimension to the setup. But I do believe in the general setup and I think its an important research direction. However the demonstrations are not strong enough yet and need further development. For instance automatically discovering attributes is the next big open question and authors allude to it.\n\n- I want to encourage the authors to scale up their stacking setup in the most realistic way possible to develop this idea further. I am sure this will greatly improve the paper and open new directions of researchers. \n\n",
"Summary: This paper proposes a method for planning which involves learning to detect high-level subgoals (called \"attributes\"), learning a transition model between subgoals, and then learning a policy for the low-level transitions between subgoals. The high-level task plan is not learned, but is computed using Dijkstra's algorithm. The benefit of this method (called the \"Attribute Planner\", or AP) is that it is able to generalize to tasks requiring multi-step plans after only training on tasks requiring single-step plans. The AP is compared against standard A3C baselines across a series of experiments in three different domains, showing impressive performance and demonstrating its generalization capability.\n\nPros:\n- Impressive generalization results on multi-step planning problems.\n- Nice combination of model-based planning for the high-level task plan with model-free RL for the low-level actions.\n\nCons:\n- Attributes are handcrafted and pre-specified rather than being learned.\n- Rather than learning an actual parameterized high-level transition model, a graph is built up out of experience, which requires a large sample complexity.\n- No comparison to other hierarchical RL approaches.\n\nQuality and Clarity:\n\nThis is a great paper. It is extremely well written and clear, includes a very thorough literature review (though it should probably also discuss [1]), takes a sensible approach to combining high- and low-level planning, and demonstrates significant improvements over A3C baselines when generalizing to more complex task plans. The experiments and domains seem reasonable (though the block-stacking domain would be more interesting if the action and state spaces weren't discrete) and the analysis is thorough.\n\nWhile the paper in general is written very clearly, it would be helpful to the reader to include an algorithm for the AP.\n\nOriginality and Significance:\n\nI am not an expert in hierarchical RL, but my understanding is that typically hierarchical RL approaches use high-level goals to make the task easier to learn in the first place, such as in tasks with long planning horizons (e.g. Montezuma's Revenge). The present work differs from this in that, as they state, \"the goal of the model is to be able to generalize to testing on complex tasks from training on simpler tasks\" (pg. 5). Most work I have seen does not explicitly test for this generalization capability, but this paper points out that it is important and worthwhile to test for.\n\nIt is difficult to say how much of an improvement this paper is on top of other related hierarchical RL works as there are no comparisons made. I think it would be worthwhile to include a comparison to other hierarchical RL architectures (such as [1] or [2]), as I expect they would perform better than the A3C baselines. I suspect that the AP would still have better generalization capabilities, but it is hard to know without seeing the results. That said, I still think that the contribution of the present paper stands on its own.\n\n[1] Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., & Kavukcuoglu, K. (2017). FeUdal Networks for Hierarchical Reinforcement Learning. Retrieved from http://arxiv.org/abs/1703.01161\n[2] Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., & Tenenbaum, J. B. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. Advances in Neural Information Processing Systems.",
"We thank the reviewers for their helpful feedback. We respond to the issues raised by R2 and R3 below. We also thank R1 and R2 for their detailed comments on clarity; we have amended the paper to address their feedback.\n\n\n> “This framework is exactly the same as semi-MDPs (Precup, Sutton) and its several generalizations to function approximators as cited in the paper. The authors claim that the novelty is in using the framework for test generalization.” \n\nWe respectfully disagree. The framework discussed in the paper is not a semi-MDP. In our work, we have a Markovian environment which does not give any reward. Other than the attribute specification (either an explicit mapping or (state, attributes) training examples), all the agent’s interaction with the environment is unsupervised. \n\nThis is most visible in section 4.2 (Stacking Blocks). In this experiment, the agent acts randomly at train time, and its policy is trained by trying to regress the action it actually took given a (state, next_attribute) pair. This unsupervised training only sees 1-step transitions. At test time, the agent is asked to go from a state to relatively distant set of attributes, requiring multiple steps. \nThus, while as mentioned in the text, there is a part of the setup that could be trivially framed in terms of options, the framing does not help with the tasks we study.\n\nFundamentally, semi-MDP and options are methods for formalizing temporal abstraction in an RL setting. Our work is about using supervision to parameterize a space of tasks, independent of RL. We have tried to explain that the learning paradigm described in the paper is not really even RL at all, even though we use some of the same language. We will try to make the text clearer, but we also hope the reviewers can try to understand the learning paradigm we have studied, rather than trying to force our work into a hierarchical RL template. \n\n\n> “No comparison to other hierarchical RL approaches” \n\nAs discussed, our paradigm differs from RL, thus it is not trivial to adapt hierarchical RL for this setting. In particular, hierarchical RL still requires a reward signal in order to learn a meta-policy over options, which is not provided in our setting; and in particular these methods require access to rewards from full-length trajectories, which our method does not. Furthermore, most existing approaches to HRL require that a set of low-level options are provided (which they are not here); there has been little success for methods that learn options in complicated state spaces with function approximation.\n\n\n> “The author should give better account of the relation between the proposed method to other frameworks. The authors mentioned that the proposed method can be placed into the framework of option. However, the option frame is mainly dealing with temporal abstraction, whereas this work seems have much more to do state abstraction.”\n\nWe agree that the discussion of related work could be improved, particularly the connections to and differences with options frameworks. We will revise the paper to make this clearer. 
However, note also that we emphasize in the related work that our approach is most similar to Factored MDP and Relational MDP; these deal with state abstraction.\n\n\n> “In this work, the attributes are provided by human, which certainty can incur a significant amount of effort hence limit is generalizable of the proposed method.” “ For instance automatically discovering attributes is the next big open question and authors allude to it.”\n\nWe agree that automatically discovering attributes is an important and interesting problem, and would complement this work on using attributes for planning. However, we believe there is value in testing the building blocks of a system in isolation before trying to put the whole system together, and as learning attributes is challenging in its own right, it is not in the scope of this work. \nWe also believe that there are many situations where an attribute space can be readily provided externally, e.g. block stacking, starcraft, minecraft, house navigation, robotics, etc. Therefore, this approach has immediate value and doesn’t require the hard problem of representation learning to be solved. Furthermore, in the tasks we use for our experiments, we find that attribute specification is not overly onerous, requiring only a few thousand labeled examples to specify the attribute space for block stacking, and for grid worlds only a few hundred.\n\nOther work, such as Policy Sketch (https://arxiv.org/pdf/1611.01796.pdf ; ICML best paper) also assumes that external task supervision can be provided without being learned from scratch."
] | [
-1,
-1,
-1,
-1,
5,
4,
7,
-1
] | [
-1,
-1,
-1,
-1,
4,
5,
3,
-1
] | [
"rkxi5qA4G",
"r1cviMjgf",
"Hk10FQqef",
"B1CBt8ilG",
"iclr_2018_r154_g-Rb",
"iclr_2018_r154_g-Rb",
"iclr_2018_r154_g-Rb",
"iclr_2018_r154_g-Rb"
] |
iclr_2018_B1NOXfWR- | Neural Task Graph Execution | In order to develop a scalable multi-task reinforcement learning (RL) agent that is able to execute many complex tasks, this paper introduces a new RL problem where the agent is required to execute a given task graph which describes a set of subtasks and dependencies among them. Unlike existing approaches which explicitly describe what the agent should do, our problem only describes properties of subtasks and relationships between them, which requires the agent to perform complex reasoning to find the optimal subtask to execute. To solve this problem, we propose a neural task graph solver (NTS) which encodes the task graph using a recursive neural network. To overcome the difficulty of training, we propose a novel non-parametric gradient-based policy that performs back-propagation over a differentiable form of the task graph to compute the influence of each subtask on the other subtasks. Our NTS is pre-trained to approximate the proposed gradient-based policy and fine-tuned through an actor-critic method. The experimental results on a 2D visual domain show that pre-training from the gradient-based policy significantly improves the performance of NTS. We also demonstrate that our agent can perform complex reasoning to find the optimal way of executing the task graph and generalize well to unseen task graphs. In addition, we compare our agent with a Monte-Carlo Tree Search (MCTS) method, showing that our method is much more efficient than MCTS, and the performance of our agent can be further improved by combining it with MCTS. The demo video is available at https://youtu.be/e_ZXVS5VutM. | rejected-papers | Paper presents an interesting new direction, but the evaluation leaves many questions open, and its situation with respect to the state of the art is lacking | train | [
"Skjfeg5gG",
"B1d-B8jgz",
"r14EMwbzf",
"r13JqCBMz",
"Hy8rL0HMG",
"Hy3nSRHGM",
"By167CrfM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"In the context of multitask reinforcement learning, this paper considers the problem of learning behaviours when given specifications of subtasks and the relationship between them, in the form of a task graph. The paper presents a neural task graph solver (NTS), which encodes this as a recursive-reverse-recursive neural network. A method for learning this is presented, and fine tuned with an actor-critic method. The approach is evaluated in a multitask grid world domain.\n\nThis paper addresses an important issue in scaling up reinforcement learning to large domains with complex interdependencies in subtasks. The method is novel, and the paper is generally well written. I unfortunately have several issues with the paper in its current form, most importantly around the experimental comparisons.\n\nThe paper is severely weakened by not comparing experimentally to other learning (hierarchical) schemes, such as options or HAMs. None of the comparisons in the paper feature any learning. Ideally, one should see the effect of learning with options (and not primitive actions) to fairly compare against the proposed framework. At some level, I question whether the proposed framework is doing any more than just value function propagation at a task level, and these experiments would help resolve this.\n\nAdditionally, the example domain makes no sense. Rather use something more standard, with well-known baselines, such as the taxi domain.\n\nI would have liked to see a discussion in the related work comparing the proposed approach to the long history of reasoning with subtasks from the classical planning literature, notably HTNs.\n\nI found the description of the training of the method to be rather superficial, and I don't think it could be replicated from the paper in its current level of detail.\n\nThe approach raises the natural questions of where the tasks and the task graphs come from. Some acknowledgement and discussion of this would be useful.\n\nThe legend in the middle of Fig 4 obscures the plot (admittedly not substantially).\n\nThere are also a number of grammatical errors in the paper, including the following non-exhaustive list:\n2: as well as how to do -> as well as how to do it\nFig 2 caption: through bottom-up -> through a bottom-up\n3: Let S be a set of state -> Let S be a set of states\n3: form of task graph -> form of a task graph\n3: In addtion -> In addition\n4: which is propagates -> which propagates\n5: investigated following -> investigated the following",
"\nSummary: the paper proposes an idea for multi-task learning where tasks have shared dependencies between subtasks as task graph. The proposed framework, task graph solver (NTS), consists of many approximation steps and representations: CNN to capture environment states, task graph parameterization, logical operator approximation; the idea of reward-propagation policy helps pre-training. The framework is evaluated on a relevant multi-task problem.\n\nIn general, the paper proposes an idea to tackle an interesting problem. It is well written, the idea is well articulated and presented. The idea to represent task graphs are quite interesting. However it looks like the task graph itself is still simple and has limited representation power. Specifically, it poses just little constraints and presents no stochasticity (options result in stochastic outcomes).\n\nThe method is evaluated in one experiment with many different settings. The task itself is not too complex which involves 10 objects, and a small set of deterministic options. It might be only complex when the number of dependency layer is large. However, it's still more convinced if the paper method is demonstrated in more domains.\n\n\nAbout the description of problem statement in Section 3:\n\n- How the MDP M and options are defined, e.g. transition functions, are tochastic?\n\n- What is the objective of the problem in section 3\n\nRelated work: many related work in robotics community on the topic of task and motion planning (checkout papers in RSS, ICRA, IJRR, etc.) should also be discussed.",
"This paper proposes to train recursive neural network on subtask graphs in order to execute a series of tasks in the right order, as is described by the subtask graph's dependencies. Each subtask execution is represented by a (non-learned) option. Reward shaping allows the proposed model to outperform simpler baselines, and experiments show the model generalizes to unseen graphs.\n\nWhile this paper is as far as I can tell novel in how it does what it does, the authors have failed to convey to me why this direction of research is relevant.\n- We know finding options is the hard part about options\n- We already have good algorithms that take subtask graphs and execute them in the right order from the planning litterature\n\nAn interesting avenue would be if the subtask graphs were instead containing some level of uncertainty, or representing stochasticity, or anything that more traditional methods are unable to deal with efficiently, then I would see a justification for the use of neural networks. Alternatively, if the subtask graphs were learned instead of given, that would open the door to scaling an general learning. Yet, this is not discussed in the paper.\n\nAnother interesting avenue would be to learn the options associated with each task, possibly using the information from the recursive neural networks to help learn these options.\n\n\nThe proposed algorithm relies on fairly involved reward shaping, in that it is a very strong signal of supervision on what the next action should be. Additionaly, it's not clear why learning seems to completely \"fail\" without the pre-trained policy. The justification given is that it is \"to address the difficulty of training due to the complex nature of the problem\" but this is not really satisfying as the problems are not that hard. This also makes me question the generality of the approach since the pre-trained policy is rather simple while still providing an apparently strong score.\n\n\nIn your experiments, you do not compare with any state-of-the-art RL or hierarchical RL algorithm on your domain, and use a new domain which has no previous point of reference. It it thus hard to properly evaluate your method against other proposed methods.\n\nWhat the authors propose is a simple idea, everything is very clearly explained, the experiments are somewhat lacking but at least show an improvement over more a naive approach, however, due to its simplicity, I do not think that this paper is relevant for the ICLR conference. \n\nComments:\n- It is weird to use both a discount factor \\gamma *and* a per-step penalty. While not disallowed by theory, doing both is redundant because they enforce the same mechanism.\n- It seems weird that the smoothed logical AND/OR functions do not depend on the number of inputs; that is unless there are always 3 inputs (but it is not explained why; logical functions are usually formalised as functions of 2 inputs) as suggested by Fig 3.\n- It does not seem clear how the whole training is actually performed (beyond the pre-training policy). The part about the actor-critic learning seems to lack many elements (whole architecture training? why is the policy a sum of \"p^{cost}\" and \"p^{reward}\"? is there a replay memory? How are the samples gathered?). (On the positive side, the appendix provides some interesting details on the tasks generations to understand the experiments.)\n- The experiments cover different settings with different task difficulties. However, only one type of tasks is used. 
It would be good to motivate (in addition to the paragraph in the intro) the cases where using the algorithm described in the paper may (or may not?) be the only viable option, and/or to compare it to other algorithms. Even though not mandatory, it would also be a clear good addition to demonstrate more convincing experiments in a different setting.\n- \"The episode length (time budget) was randomly set for each episode in a range such that 60% − 80% of subtasks are executed on average for both training and testing.\" --> this does not seem very precise: under what policy is the 60-80% defined? Is the time budget different for each newly generated environment?\n- Why wait until exactly 120 epochs for NTS-RProp before fine-tuning with actor-critic? It seems from Figure 4 that much less would be sufficient.\n- In the Table 1 caption, it is written \"same graph structure with training set\" --> do you mean \"same graph structure than the training set\"?",
"Thank you for the constructive comment.\nWe’ve posted a common response to the all reviewers as a separate comment above.\nWe’d appreciate it if you go through the common response as well as this comment.\n\nQ1) Justification of why this direction of research is relevant\nA) We believe the first answer in the common response partially addresses this question. Even though options are pre-defined, the high-level planning problem itself is very challenging as discussed in the common response. We added Hierarchical Task Network (HTN) paragraph in section 2 to discuss how our problem is different from the planning literature. We also added new results in Section 5.6 that compare our method against a standard planning method (MCTS). We also show that our method can significantly improve MCTS by combining them together. We will motivate the problem better in the next revision.\n\n\nQ2) Future directions\nA) We appreciate you suggesting many interesting ways to extend our problem. We agree that it would be more interesting and challenging to have uncertainty in the task graph or to learn task graph or option itself. We are working on some of these directions. In this work, we focused on learning a generalizable agent which takes a richer and general form of task descriptions.\n\nQ3) Regarding the reward shaping used in RProp policy\nA) We clarify that the main idea of the RProp policy does not use any supervision and does not strongly benefit from human knowledge for the following reasons.\n1) Compared to the usual reward shaping which often involves human-knowledge, our method “smoothes out” the reward information in the task graph in order to propagate reward information between related subtasks. Thus, the term “reward shaping” means “smoothing” here, and we removed “reward shaping” from the paper as it is confusing.\n2) In fact, the agent always receives only the actual reward. The idea of the RProp policy is about how to form a “differentiable” task graph and how to backpropagate through it to get a reasonably good initial policy just from the task graph.\n\n\nQ4) Why NTS-scratch fails? \nA) Since this is related to the difficulty of the problem, please refer to the first answer in the common response. We found that even sophisticated search-based planning methods (e.g., MCTS) do not perform well compared to our method. Thus, it is not surprising that NTS-scratch fails to learn from scratch. \n\n\n\nQ5) Gamma with per-step penalty\nA) We agree that using gamma and per-step penalty together have a similar effect. However, many previous works [1-3] suggest that per-step penalty in addition to discount factor can be helpful for training especially in grid-world domain. For this reason, we used both discount factor and per-step penalty for better performance.\n[1] Konidaris et al., \"Building Portable Options: Skill Transfer in Reinforcement Learning.”, 2007.\n[2] Melo et al., \"Learning of coordination: Exploiting sparse interactions in multiagent systems.\", 2009.\n[3] Konidaris et al., \"Transfer in reinforcement learning via shared features.\", 2012.\n\nQ6) Why does AND/OR operation take more than two input?\nA) For notational simplicity, we defined AND and OR operation which can take multiple input (not necessarily three) and are different from logical AND and OR operation. 
We added mathematical definitions at Appendix C.\n\nQ7) Why smoothed AND/OR function does not depend on the number of inputs?\nA) Both the AND/OR operation and smoothed AND/OR operation depend on the number of inputs as formulated in Appendix C. Please let us know if you need further clarification.\n\nQ8) Lack of detail of training\nA) We added more details at the Appendix B and D. In brief, we followed actor-critic framework without replay memory. Please let us know if you find any missing details.\n\nQ9) More experiment and examples in a different setting\nA) Can you please clarify what you mean by “type of tasks”? To our best knowledge, the prior work on hierarchical RL (e.g., HAM, MAXQ) and hierarchical planning (e.g., HTN) cannot directly address our problem as discussed in the common response above. \n\nQ10) Regarding episode length\nA) We found that the episode length that allows executing approximately 60-80% of total subtasks is not too long or too short so that the agent should consider both short-term and long-term dependencies between subtask to solve the problem. Note that remaining time is given as additional input to the agent so that the agent can perform different strategies depending on the time limit. The episode length is randomly sampled from a range according to the performance of the Random policy.\n\nQ11) Why wait until exactly 120 epochs before fine-tuning?\nA) NTS-RProp indeed converged earlier than 120 epochs. We wanted to make sure to wait until everything converges. We could stop earlier than 120 epochs as you suggested. ",
"Thank you for the constructive comment.\nWe’ve posted a common response to the all reviewers as a separate comment above.\nWe’d appreciate it if you go through the common response as well as this comment.\n \n\nQ1) Task graph has limited representation power.\nA) We would like to point out that our task graph can represent any logical expression as it follows sum-of-product (SoP) form, which is widely used for logic circuit design. In addition, the task graph can be very expressive by forming a deep hierarchical structure of task dependencies. In other words, a precondition can be “deep” such as AND(A, B, NOT(OR(C, AND(D, E, OR(...))))). This provides a richer form of task descriptions and subsumes many existing tasks (e.g., Taxi domain, sequential instructions in [1]) \n\n[1] Oh, et.al. (2017). Zero-shot task generalization with multi-task deep reinforcement learning.\n\nQ2) Stochasticity and multiple domains\nA) We agree with the reviewer that it would be interesting to introduce stochasticity in the environment (e.g., stochastic options), and showing results on multiple domains would make the paper stronger. We are working on this extension.\n\nWe believe that the main contribution of this work is 1) to propose a richer and general form of task descriptions (task graph) compared to the previous work on multi-task RL and 2) to propose a deep RL architecture and reward-propagation policy for learning to find optimal solutions of any arbitrary task graphs and observations. \n\n\nQ3) How the MDP M and options are defined, e.g. transition functions, are stochastic?\nA) In our problem formulation, transition functions and reward functions of MDP can be either deterministic or stochastic. In our experiment, we focused on the case where both the transition and reward function are deterministic. Options used in the experiment are O = {pickup, transform} × X where X corresponds to 8 types of objects.\n\n\n\nQ4) What is the objective of the problem in section 3?\nA) The goal is to learn a multi-task policy \\pi: S x G -> O that maximizes the overall reward (r=r_{+} + r_{-}), where S is a set of observations (input pixel image, remaining number of step, subtask completion indicator x_t, and eligibility vector e_t) and G is a set of task graphs, and O is a set of options available to the agent. We clarified this in the paper.\n\n\n\nQ5) Related work on motion planning in robotics\nA) Thank you for pointing out the relevant work. We added more papers on motion planning in the related work section. Please let us know if there is missing relevant work.\n",
"Thank you for the constructive comment.\nWe’ve posted a common response to the all reviewers as a separate comment above.\nWe’d appreciate it if you go through the common response as well as this comment. \n\nQ1) Learning with/without options for fair comparison\nA) We assume that a pre-trained subtask executor (that can perform a variety of tasks) is available for all methods. Here, we can view each instantiation of the subtask as an option, and we consider learning an optimal policy to execute task graphs using such options. Since we used options for all methods including all baselines, we believe that this is a fair comparison.\n\n\n\nQ2) The proposed framework is doing any more than just value function propagation at a task level.\nA) We are not clear about what you mean by “value function propagation at a task level”. Would you please give us the specific reference on prior work and clarify this comment in more details? Intuitively speaking, our “reward-propagation policy” is indeed designed to propagate reward function in the task graph, and we believe showing how it can be done with a concrete algorithm is one of our contributions. Furthermore, our final NTS architecture improves over the “reward-propagation” method by combining all relevant information (e.g., observations, task graph embedding, and prior experience) together. \n\nIf the question is whether the learned policy is trivial, we demonstrate both qualitative and quantitative results showing that the learned strategy considers long-term dependencies between many subtasks. To our knowledge, this is infeasible for most traditional RL methods.\nWe would also appreciate if you take a look at the new results (Section 5.6) from the current revision. Specifically, we show how well our NTS performs compared to a sophisticated search/planning method (MCTS). It turns out that NTS (without any search) performs as well as MCTS with approximately 250 simulated search episodes. Combining NTS with MCTS, we further improve the performance. These results suggest that the learned policy of NTS is very strong and efficient.\n\n\n\nQ3) Related work on classical planning\nA) Thank you for pointing out a relevant work. We discussed HTN in the revision. In brief, HTN considers a similar planning problem in that a planner should find the optimal sequence of tasks to minimize the overall cost. The main differences are the following:\n1) Our problem does not have a particular goal task but is an episodic RL task with a finite horizon, so the agent should consider all possible sequences of tasks to maximize the reward within a time limit rather than computing an optimal path toward a particular goal task.\n2) HTN assumes that a task graph describes all necessary information for planning, whereas our task graph does not have cost information, and the agent should implicitly “infer” cost information from the observation. The observation module of our NTS plays a key role for this. \nDue to these differences, HTN is not directly applicable to our problem. \n\n\n\nQ4) Domain \nA) Regarding your comments on our example domain, we agree that it could have been made more interesting and practically relevant. At the same time, we believe that our framework is general enough to be applicable to more interesting scenarios (e.g., cooking, cleaning, assemblies) with small changes in semantics, which we plan as future work. Regarding your comments on the taxi domain, our domain is a richer superset of the taxi domain. 
However, our typical experimental setup is much more challenging, and traditional hierarchical RL baselines are not applicable due to changing task graphs during testing. For details, please see our common response to all reviewers.\n\n\nQ5) Source of task graphs\nA) In our paper, we assumed that the task graph is given. However, in the future work, we plan to extend to scenarios when the reward is unknown or when the task graph structure is unknown. Note that these settings are extremely challenging for complex task dependencies, but we hypothesize that such unknowns (e.g., rewards and/or graph structures) might be also learnable through experience. For example, in case of the household robot example in the introduction of our paper, they may be learned through interaction with a user in a trial-and-error manner. These are well beyond the scope of the current work.",
"Dear reviewers,\n\nThank you for the valuable comments. We revised our paper according to your comments. So, we would appreciate if you take a look at the current revision.\n\nWe put a common response here as many of you raised similar questions/comments about the simplicity of the problem and the lack of comparison. (In addition, we provide individual responses to specific reviewers in separate comment sections.)\n\nQ1) The proposed problem seems easy\nA) The fundamental challenge in our problem is that the agent needs to generalize to new task dependency structures in testing time. To the best of our knowledge, there is no existing method (other than search-based methods) that can address this problem.\n\nTo further help the readers better understand how complex the problem is and how well our method performs, we performed additional experiments (Section 5.6), which demonstrate that even a sophisticated search method (MCTS) performs much worse than our method even with much larger amounts of search time budget (e.g., hundreds of simulated episodes instead of a single episode in our method).\n\nIn more detail, we also summarize several reasons why this problem is challenging as follows.\n1) Our problem is essentially a combinatorial search problem where the optimal solution (i.e., optimal sequence of subtasks) cannot be computed in polynomial time. Given 15 subtasks, the number of valid solutions for each episode is approximately 600K. This is also the reason why we couldn’t scale up to a large number of subtasks and objects.\n2) The agent should “infer” cost information from the observation. The task graph does not describe the cost of each subtask, and the agent should implicitly learn to predict the cost from the observation without any supervision. \n3) Even without any dependencies between subtasks (no edges in the task graph), the agent should find the shortest path to execute subtasks, which becomes the infamous NP-hard Travelling Salesman Problem (TSP). \n4) The agent needs to consider the time limit, because the optimal solution can be completely different depending on the time limit even with the same observation and task graph.\n\n\nQ2) Lack of comparison to other hierarchical learning schemes (HAMs, options)\nA) Please note that we do not claim novelty in proposing a new hierarchical learning scheme as our work is built on options framework and policy gradient methods. Instead, the main contribution of this work is 1) to propose a richer and general form of task descriptions (task graph) compared to the previous work on multi-task RL and 2) to propose a deep RL architecture for learning to optimally execute any arbitrary task graphs and observations. We will make this more clear in the next revision.\n\nTo our best knowledge, the prior work on hierarchical RL (e.g., HAM, MAXQ) cannot directly address our problem where the task description is given as a form of graph. For example, a HAM uses finite state machines to specify a partial policy. So, it is not straightforward to specify a general strategy for solving “any task graphs” using HAM, as our problem is essentially a combinatorial search problem. More importantly, such a partial policy should be hand-designed and heuristic.\n\nAn important thing to note is that most of the prior work on hierarchical RL considered a single-task policy (e.g., a fixed task in Taxi domain), whereas our problem is a multi-task learning problem where the agent should deal with many different tasks depending on the given task graph. 
This motivated us to propose a new architecture that is capable of executing many different and even unseen task graphs. \n\n\nQ3) Domain is not standard\nA) If we had aimed to propose a new hierarchical learning scheme, the evaluation could have been done using a standard domain. However, as discussed above, we aim to address a new problem: solving a combinatorial search problem without explicit search. Thus, we chose Mazebase environment which is a standard domain for evaluating multi-task policy [1-4] and flexible enough to implement our task graph execution problem.\n\nWe would also like to point out that the Taxi domain can be completely subsumed by the Mazebase domain and task graphs in our paper. For example, we can define 2 subtasks as follows: A (pick up passenger) and B (go to destination), and A is the precondition of B. Note that task graphs used in our experiment are much more complex than task dependencies in the Taxi domain.\n[1] Sukhbaatar, et.al. (2016). Learning multiagent communication with backpropagation. \n[2] Kulkarni, et.al. (2016). Deep successor reinforcement learning.\n[3] Oh, et.al. (2017). Zero-shot task generalization with multi-task deep reinforcement learning.\n[4] Thomas, et.al. (2017). Independently Controllable Features.\n\n\nQ4) Details of training method\nA) We added more details at the Appendix B and D. Please let us know if you find any missing details. We also plan to release the code to make the result reproducible."
] | [
6,
6,
4,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1NOXfWR-",
"iclr_2018_B1NOXfWR-",
"iclr_2018_B1NOXfWR-",
"r14EMwbzf",
"B1d-B8jgz",
"Skjfeg5gG",
"iclr_2018_B1NOXfWR-"
] |
iclr_2018_By3VrbbAb | Realtime query completion via deep language models | Search engine users nowadays heavily depend on query completion and correction to shape their queries. Typically, the completion is done by database lookup which does not understand the context and cannot generalize to prefixes not in the database. In the paper, we propose to use unsupervised deep language models to complete and correct the queries given an arbitrary prefix. We show how to address two main challenges that render this method practical for large-scale deployment: 1) we propose a method for integrating error correction into the language model completion via an edit-distance potential and a variant of beam search that can exploit these potential functions; and 2) we show how to efficiently perform CPU-based computation to complete the queries, with error correction, in real time (generating top 10 completions within 16 ms). Experiments show that the method substantially increases hit rate over standard approaches, and is capable of handling tail queries. | rejected-papers | This paper has some interesting ideas that have been implemented in a rather ad hoc way; the presentation focuses perhaps too much on engineering aspects. | train | [
"r1BkYcJeM",
"HJdsrHOxf",
"B1kHjjhlM",
"rkM1XV3mG",
"BksB6zhXf",
"rk8SBP6Jz",
"B16Bg6ARW",
"HJkXhY9Rb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public"
] | [
"This paper presents methods for query completion that includes prefix correction, and some engineering details to meet particular latency requirements on a CPU. Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the 'right' way to go about this versus other choices that might be made. The final threshold for 'successful' speedups feels somewhat arbitrary -- why 16ms in particular? In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper. \n\nWhile the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results. Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all. Under such an approach, the unedited string receives probability one, which doesn't leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization. Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language. For example, the probability p_1 of 'pkoe' versus p_2 of 'zoze' as likely versions of 'poke' (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model. Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting. And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model. It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume. That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not. As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious.\n\nSome other comments:\n\nYour presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes. In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn't match your prior notation, or, frankly, the way in which the experimental results are described. \n\nThe notation that you use is a bit sloppy and not everything is introduced in a clear way. For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i_th position (which you use in section 3.3). 
Also, you claim that s_0 is the empty string, but isn't it more correct to model this symbol as the beginning of string symbol? If not, what is the difference between s_0:m and s_1:m? If s_0 is start of string, the s_0:m is of length m+1 not length m.\n\nYou spend too much time on common, well-known information, such as the LSTM equations. (you don't need them, but also why number if you never refer to them later?) Also the dynamic programming for Levenshtein is foundational, not required to present that algorithm in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn't require to make that point).\n\nIs there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes? This doesn't strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn't really correspond to true 'unseen' scenarios. I think another way of teasing apart such results would be recommended.\n\nYou never explicitly mention what your training loss is in section 5.1.\n\nOverall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods.",
"This paper focuses on solving query completion problem with error correction which is a very practical and important problem. The idea is character based. And in order to achieve three important targets which are auto completion, auto error correction and real time, the authors first adopt the character-level RNN-based modeling which can be easily combined with error correction, and then carefully optimize the inference part to make it real time.\n\nPros:\n(1) the paper is very well organized and easy to read.\n(2) the proposed method is nicely designed to solve the specific real problem. For example, the edit distance is modified to be more consistent with the task.\n(3) detailed information are provided about the experiments, such as data, model and inference.\n\nCons:\n(1) No direct comparisons with other methods are provided. I am not familiar with the state-of-the-art methods in this field. If the performance (hit rate or coverage) of this paper is near stoa methods, then such experimental results will make this paper much more solid.",
"The authors describe a method for performing query completion with error correction using a neural network that can achieve real-time performance. The method described uses a character-level LSTM, and modifies the beam search procedure with a an edit distance-based probability to handle cases where the prefix may contain errors. Details are also given on how the authors are able to achieve realtime completion.\n\nOverall, it’s nice a nice study of the query completion application. The paper is well explained, and it’s also nice that the runtime is shown for each of the algorithm blocks. Could imagine this work giving nice guidelines for others who also want to run query completion using neural networks. The final dataset is also a good size (36M search queries).\n\nMy major concerns are perhaps the fit of the paper for ICLR as well as the thoroughness of the final experiments. Much of the paper provides background on LSTMs and edit distance, which granted, are helpful for explaining the ideas. But much of the realtime completion section is also standard practice, e.g. maintaining previous hidden states and grouping together the different gates. So the paper feels directed to an audience with less background in neural net LMs.\n\nSecondly, the experiments could have more thorough/stronger baselines. I don’t really see why we would try stochastic search. And expected to see more analysis of how performance was impacted as the number of errors increased, even if errors were introduced artificially, and expected analysis of how different systems scale with varying amounts of data. The fact that 256 hidden dimension worked best while 512 overfit was also surprising, as character language models on datasets such as Penn Treebank with only 1 million words use hidden states far larger than that for 2 layers. More regularization required?",
"First, we admit that the value of this work is more engineering-oriented, but it successfully solves one of the most important problem to practically use DL for query completion: it reduces the running time from ~ 1 second [Sordoni et al. (2015), table 2] to 16 ms, using only CPU. The reason that the response time threshold is important is that the users always want responsive results, in realtime. Otherwise the query completion won't be that helpful. \n\nSecond, let me defense a bit about the error correction model. Because we are doing completion and correction at the same time, the prefix with zero edit won't dominate: the beam search always keeps some different prefixes, and only when the probability became too small will them be kicked out of the candidate set. Essentially, we are only assuming \"constant typo penalties\" in the prefix; using your example of completing \"pokemon\":\n\nWhen the user types \"zoze\" or \"pkoe\", the starting log likelihood are both -4*2. But when doing completion, the decrease in log likelihood of \"zoze\" will be much higher than \"pkoe\", so it will be kicked out of the candidate set.\n\nFurther, say that \\log P(pokemon|poke) = -1. When \\log P(pkoemon|pkoe)=-30, the beam search process with error correction should choose \n\\log P(pokemon|poke) + -4*2 = -9 instead of \n\\log P(pkoemon|pkoe) + 0 = -30.\n\nWe admit that the model is naive and we should have different penalties in different part of the prefix. But that should be tunable by changing the loss function in the edit distance and we tried the simplest first (and it works). For a demo of the error correction function, you can try it in our online website, but note that we fixed the first two characters in the prefix so the error should happen only after that. We appreciate the comment that we should use a learnt model and welcome references.\n\nReplies to other comments: we modified the notations in the revision. The training loss (categorical entropy) is mentioned in section 2; we made it clear in the revision.\n\nWe thank the reviewer again for the helpful comments.",
"(Table 1) The validation loss is better than training loss because the model is under-fitted.\n\n(Table 3) We are actually sampling the prefix-completion pair from the data instead of from our model. The reason we need to do such sampling is because AOL dataset only consists of whole queries instead of the prefix-completion pair. Thus, we assume that the user may stop typing uniformly and generate the prefix-completion pair by sampling from the data, which is completely independent of our model.\n\n(Table 4) Both the metrics only apply when the prefix appears in the whole dataset instead of the training data. The prefix for test evaluation might only appears in the test set but not in the training set, but we estimate the empirical probability coverage / hit rate from the whole dataset. For example, if we have the dataset\n\nabc\nabd\nace\n...\n\nthen the empirical probability for prefix \"ab\" should be P(abc|ab) = P(abd|ab) = 1/2. While in training, the model might never see the prefix \"ab\", but the probability coverage metric still work in this case. The reason we separate the probabilistic coverage from hit rate is that if error correction occurs, the prefix (prior) is different and the probability coverage doesn't work, and we must assume typo model to get probabilities. So we show hit rate (counts in the dataset) instead in the case of error correction. MRR also doesn't work in the case because of the data generation process (we don't have the correct user behavior from the AOL dataset). Surely we can \"simulate\" the typos and create a synthetic dataset, but that would be biased.\n\nLastly, we also confirm that we have comparable MRR to database lookup when not doing error correction. But we didn't use that because of the reason above.",
"I have some questions about your metrics. \n\n* In Table 1, why is the validation loss so much better than the training loss? Is that backwards?\n\n* In Table 3, I'm not sure how meaningful these numbers are. The traditional way of evaluating the language model would be to see how much probability it assigns to the true query completion. It seems like what you are doing is generating a completion by sampling from the model and then reporting the probability that the model assigned to it's own sample. The model could be terrible and still assign very high likelihood to whatever sequence it chooses to generate. As you said, obviously, beam search will give a better number than stochastic search.\n\n* In Table 4, you give two metrics: probabilistic coverage and hit rate. If one of the key advantages that you give for your model is that it can generate completions for prefixes that are not found in the training data then it seems you would want a metric that could capture that. My understanding is that probabilistic coverage and hit rate both only apply when the prefix is in the training data. Is that right? Additionally, other papers are query auto-completion on the AOL data seem to use mean reciprocal rank as the main metric. Have you considered using that as a metric as well? \n\nI tried to reproduce your results on my own and was able to confirm that the LSTM LM gives somewhat comparable MRR to previous approaches based on database lookups. I think putting that result in your paper would significantly strengthen your claims.",
"For each example in the data set, we choose a random cutting point (always after two characters in the string), as described in the experiment section. That is, we roll a dice for every sample in the testing set to simulate the user inputs (prefixes), which can be of different length. Consider the dataset\n\ngoogle map\napple\n\nthe user might input\n\ngoogle m\nap\n\nby choosing a random cutting point.",
"I might have missed this but where do you say what the length of the prefix you used is? I'm assuming you only used a single length prefix based on how you describe doing the train/test split."
] | [
4,
6,
5,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_By3VrbbAb",
"iclr_2018_By3VrbbAb",
"iclr_2018_By3VrbbAb",
"r1BkYcJeM",
"rk8SBP6Jz",
"iclr_2018_By3VrbbAb",
"HJkXhY9Rb",
"iclr_2018_By3VrbbAb"
] |
iclr_2018_BJ4prNx0W | Learning what to learn in a neural program | Learning programs with neural networks is a challenging task, addressed by a long line of existing work. It is difficult to learn neural networks which will generalize to problem instances that are much larger than those used during training. Furthermore, even when the learned neural program empirically works on all test inputs, we cannot verify that it will work on every possible input. Recent work has shown that it is possible to address these issues by using recursion in the Neural Programmer-Interpreter, but this technique requires a verification set which is difficult to construct without knowledge of the internals of the oracle used to generate training data. In this work, we show how to automatically build such a verification set, which can also be directly used for training. By interactively querying an oracle, we can construct this set with minimal additional knowledge about the oracle. We empirically demonstrate that our method allows automated learning and verification of a recursive NPI program with provably perfect generalization. | rejected-papers | This paper is novel, but relatively incremental and relatively niche; the reviewers (despite discussion) are still unsure why this approach is needed. | train | [
"B10flwLgz",
"Sk-AwdKlf",
"HJ0ww5VbG",
"HkQ4xH9zf",
"Hke1lSqzz",
"ByOs1BcMM",
"H1ZeDgDJG",
"ByvPTmHkM",
"ryp3LaFA-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"Quality\nThe paper is well-written and clear, and includes relevant comparisons to previous work (NPI and recursive NPI).\n\nClarity\nThe paper is clearly written.\n\nOriginality\nTo my knowledge the method proposed in this work is novel. It is the first to study constructing minimal training sets for NPI given a black-box oracle. However, as pointed out by the authors, there is a lot of similar prior work in software testing.\n\nSignificance\nThe work could be potentially significant, but there are some very strong assumptions made in the paper that could limit the impact. If the NPI has access to a black-box oracle, it is not clear what is the use of training an NPI in the first place. It would be very helpful to describe a potential scenario where the proposed approach could be useful. Also, it is assumed that the number of possible inputs is finite (also true for the recursive NPI paper), and it is not clear what techniques or lessons of this paper might transfer to tasks with perceptual inputs. The main technical contribution is the search procedure to find minimal training sets and pare down the observation size, and the empirical validation of the idea on several algorithmic tasks.\n\nPros\n- Greatly improves the data efficiency of recursive NPI.\n- Training and verification sets are automatically generated by the proposed method.\n\nCons\n- Requires access to a black-box oracle to construct the dataset.\n- Not clear that the idea will be useful in more complex domains with unbounded inputs.\n",
"In this paper, the authors consider the problem of generating a training data set for the neural programmer-interpreter from an executable oracle. In particular, they aim at generating a complete set that fully specifies the behavior of the oracle. The authors propose a technique that achieves this aim by borrowing ideas from programming language and abstract interpretation. The technique systematically interacts with the oracle using observations, which are abstractions of environment states, and it is guaranteed to produce a data set that completely specifies the oracle. The authors later describes how to improve this technique by further equating certain observations and exploring only one in each equivalence class. Their experiments show that this improve technique can produce complete training sets for three programs.\n\nIt is nice to see the application of ideas from different areas for learning-related questions. However, there is one thing that bothers me again and again. Why do we need a data-generation technique in the paper at all? Typically, we are given a set of data, not an oracle that can generate such data, and our task is to learn something from the data. If we have an executable oracle, it is now clear to me why we want to replicate this oracle by an instance of the neural programmer-interpreter. One thing that I can see is that the technique in the paper can be used when we do research on the neural programmer-interpreter. During research, we have multiple executable oracles and need to produce good training data from them. The authors' technique may let us do this data-generation easily. But this benefit to the researchers does not seem to be strong enough for the acceptance at ICLR'18.\n\n ",
"Previous work by Cai et al. (2017) shows how to use Neural Programmer-Interpreter (NPI) framework to prove correctness of a learned neural network program by introducing recursion. It requires generation of a diverse training set consisting of execution traces which describe in detail the role of each function in solving a given input problem. Moreover, the traces need to be recursive: each function only takes a finite, bounded number of actions. In this paper, the authors show how training set can be generated automatically satisfying the conditions of Cai et al.'s paper. They iteratively explore all\npossible behaviors of the oracle in a breadth-first manner, and the bounded nature of the recursive\noracle ensures that the procedure converges. As a running example, they show how this can be be done for bubblesort. The training set generated in this process may have a lot of duplicates, and the authors show how these duplicates can possibly be removed. It indeeds shows a dramatic reduction in the number of training samples for the three experiments that have been shown in the paper. \n\nI am not an expert in this area, so it is difficult for me to judge the technical merit of the work. My feeling from reading the paper is that it is rather incremental over Cai et al. I am impressed by the results of the three experiments that have been shown here, specifically, the reduction in the training samples once they have been generated is significant. But these are also the same set of experiments performed by Cai et al. \n\nGiven the original number of traces generated is huge, I do not understand, why this method is at all practical. This also explains why the authors have just tested the performance on extremely small sized data. It will not scale. So, I am hesitant accepting the paper. I would have been more enthusiastic if the authors had proposed a way to combine the training space exploration as well as removing redundant traces together to make the whole process more scalable and done experiments on reasonably sized data. ",
"Thanks for your thoughtful comments!\n\nRegarding the motivation for training an NPI when we have a black-box oracle:\nTo our knowledge, prior work in learning algorithms with neural networks has largely assumed access to an executable oracle. We have added a section in the appendix summarizing how many training examples were used by past work in program learning; the vast majority of them randomly generate fresh problem instances at each training step, and then use an oracle to get the solutions. Indeed, as did many of these prior works, we used an executable oracle for the experiments in our work. However, the methods in our work do not require that the oracle be an executable computer program; it can be any source which provides the relevant demonstrations, such as a human. Furthermore, after we have learned an NPI program, we no longer need any access to the oracle in order to perform the same function as the oracle. This is highly useful if it is expensive to obtain responses from the oracle.\n\nRegarding perceptual inputs:\nWe agree that tasks with perceptual inputs are an important domain, and that it is difficult to apply techniques from this paper to such tasks. However, a central focus of this work is to be able to provide a complete and formal proof of the learned NPI program's correctness, so that we can be sure it is equivalent to an oracle in every relevant way. If the set of possible inputs is effectively infinite, it is not really feasible to test a black box oracle's response to all of them in order to be able to replicate the oracle exactly. We anticipate that it will be necessary to make a very different set of assumptions in order to provide similar guarantees for tasks using perceptual inputs, and that the nature of the guarantees will also be different.\n\nFor example, consider the following approach to working with perceptual inputs. Assuming that the oracle only relies upon some aspect of each perceptual input that lies within a finite space, we may be able to decompose the input encoder into two parts: one which extracts that aspect of the input, and the other which directly encodes the aspect for the core. If we were sorting or adding numbers represented as MNIST digits, we could use a digit classifier and then provide the output of that classifier to NPI. If we take the digit classifier as externally provided and assume that it is correct, we could proceed with techniques used in this paper.\n\nAlternatively, we can envision a class of approaches where we train the perceptual input encoder end-to-end on execution traces to only encode aspects of the input that are salient for reproducing the traces. While such approaches may lead to various interesting results, it will be hard to formally show that the trained model exactly matches the oracle in all situations.\n\nTo summarize, the techniques in the paper are geared towards thoroughly addressing a class of tasks that have been considered in many past papers. We leave methods for perceptual inputs to future work, especially given that such work will need different assumptions and lead to a qualitatively different result.\n",
"Thank you for your thoughtful comments!\n\nRegarding why we need a data generation technique:\nThe setting in this paper is closer to active learning, where we assume that we can query an oracle with previously unlabeled data points to obtain more labels. Our goal is to learn the true underlying program. However, if we are only given a fixed set of data, it could easily be that this data does not demonstrate all of the behaviors of the latent program. As an example, tables 2, 3, and 4 in the appendix demonstrate that when training the NPI architecture is trained on various manually constructed data sets, the resulting model can fail to generalize.\n\nIn the general case, it may be infeasible to devise a set of queries to an oracle such that we can exactly learn the underlying rule being employed by the oracle. However, for our paper, we were able to build upon a formulation of the program-learning problem from prior work (most importantly, recursive NPI from Cai et al). By making use of the underlying structure provided by recursive NPI, we show how to create a dataset that demonstrates all of the possible behaviors of the oracle. Using this dataset, we can obtain a trained NPI program which exhibits perfect generalization, and formally prove its generalization ability.\n\nRegarding why we would like to replicate an executable oracle:\nTo our knowledge, prior work in learning algorithms with neural networks has largely assumed access to an executable oracle. We have added a section in the appendix summarizing how many training examples were used by past work in program learning; the vast majority of them randomly generate fresh problem instances at each training step, and then use an oracle to get the solutions. Indeed, as did many of these prior works, we used an executable oracle for the experiments in our work. However, the methods in our work do not require that the oracle be an executable computer program; it can be any source which provides the relevant demonstrations, such as a human. Furthermore, after we have learned an NPI program, we no longer need any access to the oracle in order to perform the same function as the oracle. This is highly useful if it is expensive to obtain responses from the oracle.",
"Thank you for your thoughtful comments!\n\nRegarding incrementality:\nWe evaluate the same tasks as Cai et al. for purposes of comparison, and to show that our methods apply to a setting proposed in existing work; we did not want to create artificially simple programs that are tailored to the assumptions made in our approach.\n\nWe would argue that the experimental results are not the main point of the paper; after all, the previous work of Cai et al. already showed empirical perfect generalization. The main contribution of this work is that we no longer need to manually construct a training set that demonstrates the possible behaviors of a program to be learned. As shown in tables 2, 3, and 4 of the appendix, constructing such a training set is tricky; there exists an unknown threshold (depending on the program to be learned) in terms of how many demonstrations are needed, and how diverse they should be, before the model can learn the correct program.\n\nFurthermore, while a central contribution of Cai et al. is to formally prove that a given learned neural program will generalize to any example, the proof still requires substantial manual effort. In this work, we automate this proof of generalization, as the training set constructed by our method fully describes the oracle's behavior and therefore also serves as the verification set which certifies correctness of the learned NPI program. \n\nRegarding size of the data for the experiments:\nWe would like to emphasize that the data-generation method (a main contribution of our paper) is independent of the complexity of running the trained NPI program. Once we have trained a NPI program using the dataset generated by our method (i.e. learned the weights for LSTM and the environmental observation encoder), the computational complexity of running the NPI program is not any different from an equivalent NPI program.\n\nThis was our guess for what you meant by \"the authors have just tested the performance on extremely small sized data\", but we were not entirely sure. Could you clarify your comment so that we can see if it's possible to address it more completely?\n\nRegarding combining the training space exploration as well as removing redundant traces:\nUnfortunately, we do not believe it is possible (in general) to combine these two operations together due to the black-box nature of the oracle. If we exclude certain observation dimensions during training space exploration, we are necessarily not querying the oracle with some observation sequences which could arise during an actual execution of the program on that oracle. It could be that these unqueried observation sequences lead to unexpected behavior of the oracle, which we would not learn.\n",
"Thanks for the reply, though my main question remains unanswered. I understand that with your procedure, you can obtain a subset of the set of all traces that fully specifies the program behavior. But then, why does the NPI need to be trained on this? Wouldn't a method that just searches through the minimized trace set to find what the next operation should be work just as well, without requiring the whole RNN infrastructure? [and faster as well, without all the linear algebra...]",
"Thanks for your question! The high-level motivation for our work follows from the challenges unaddressed by previous work. Reed & de Freitas [1] showed that by providing structured supervision, it is possible to learn compositional models of program behavior. However, the learned programs fail to behave correctly when run on inputs of greater length than used during training. Cai et al. [2] addressed the problem of generalizability by adding recursive structure to the execution traces used as supervision, which ensured that the learned models can generalize to inputs of arbitrary length.\n\nHowever, these past works did not address the problem of what the training set should contain in order to learn a program successfully. Furthermore, while Cai et al. described how to verify that a learned neural program has perfect generalizability, the procedure described was fully manual. Our work addresses these challenges and fully automate the process of learning a NPI program with perfect generalization for a given task. As such, the training of NPI follows from the context set by the previous work.\n\n[1] Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR 2016.\n[2] Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via recursion. ICLR 2017.",
"I'm a bit confused at what the role of the learned NPI component in the paper is. The authors describe a method to construct a method to build a set of examples that describes /all/ program behaviours. Then, they train an NPI on this. However, as /all/ behaviours are known already, it should be possible to derive a deterministic implementation (as a lookup table in the samples). What value does training the NPI add?"
] | [
5,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJ4prNx0W",
"iclr_2018_BJ4prNx0W",
"iclr_2018_BJ4prNx0W",
"B10flwLgz",
"Sk-AwdKlf",
"HJ0ww5VbG",
"ByvPTmHkM",
"ryp3LaFA-",
"iclr_2018_BJ4prNx0W"
] |
iclr_2018_ryZElGZ0Z | Discovery of Predictive Representations With a Network of General Value Functions | The ability of an agent to {\em discover} its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent's goal is outlined and clear: such as playing a game to win, or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve (1) single human-specified task learning, (2) transfer of learning, (3) and the agent's learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent's representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm towards that aim. We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously. | rejected-papers | There was substantial disagreement between reviewers on how this paper contributes to the literature; it seems (having read the paper) that the problem tackled here is clearly quite interesting, but it is hard to tease out in the current version exactly what the contribution does to extend beyond current art. | train | [
"ByqXgftlz",
"HJf2G-cgf",
"Bk0FsiXbM",
"BJt9fysmz",
"BytVjsWGG",
"HJh6poWzM",
"S15W3ibfG",
"HJ4g2obGG",
"SyU8oj-zG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"I have to say that I do not have all the background of this paper, and the paper is not written very clearly. I think the major contribution of the paper is represented in a very vague way.",
"I really enjoyed reading this paper and stopped a few time to write down new ideas it brought up. Well written and very clear, but somewhat lacking in the experimental or theoretical results.\n\nThe formulation of AdaGain is very reminiscent of the SGA algorithm in Kushner & Yin (2003), and more generally gradient descent optimization of the learning rate is not new. The authors argue for the focus on stability over convergence, which is an interesting focus, but still I found the lack of connection with related work in this section a strange.\n\nHow would a simple RNN work for the experimental problems? The first experiment demonstrates that the regularization is using fewer features than without, which one could argue does not need to be compared with other methods to be useful. Especially when combined with Figure 5, I am convinced the regularization is doing a good job of pruning the least important GVFs. However, the results in Figure 3 have no context for us to judge the results within. Is this effective or terrible? Fast or slow? It is really hard to judge from these results. We can say that more GVFs are better, and that the compositional GVFs add to the ability to lower RMSE. But I do not think this is enough to really judge the method beyond a preliminary \"looks promising\".\n\nThe compositional GVFs also left me wondering: What keeps a GVF from being pruned that is depended upon by a compositional GVF? This was not obvious to me.\n\nAlso, I think comparing GVFs and AdaGain-R with an RNN approach highlights the more general question. Is it generally true that GVFs setup like this can learn to represent any value function that an RNN could have? There's an obvious benefit to this approach which is that you do not need BPTT, fantastic, but why not highlight this? The network being used is essentially a recurrent neural net, the authors restrict it and train it, not with backprop, but with TD, which is very interesting. But, I think there is not quite enough here.\n\nPros:\nWell written, very interesting approach and ideas\nConceptually simple, should be easy to reproduce results\n\nCons:\nAdaGain never gets analyzed or evaluated except for the evaluations of AdaGain-R.\nNo experimental context, we need a non-trivial baseline to compare with\n\n",
"This paper presents a suite of algorithmic ideas to learn a network of generalized value functions. The majority of the technical contribution is dedicated to a new algorithm, AdaGain, that adaptively selects a step size for an RL algorithm; the authors claim this algorithm has sparsifying properties. AdaGain is used in the context of a larger algorithm designed to search for GVFs by constructing a simple grammar over GVF components. By creating large numbers of random GVFs and then pruning away useless ones, the authors discover a state representation that is useful for prediction.\n\nWhile I am deeply sympathetic to the utility and difficulty of the discovery problem in this sort of state-space modeling, this paper ultimately felt a bit weak. \n\nOn the positive side, I felt that it was well-written. The work is well localized in the literature, and answers most questions one would naturally have. The AdaGain algorithm is, to the best of my knowledge, novel, and the focus on \"stability, not convergence\" seems like an interesting idea (although ultimately, not well fleshed-out).\n\nHowever, I felt that the central ideas were only thinly vetted. For example:\n\n* It seems that AdaGain is designed to tune a single parameter (\\alpha) adaptively. This raises several questions:\n - State-of-the-art stochastic optimizers (eg, Adam) typically introduce one step size per parameter; these are all tuned. Why wasn't that discussed? Would it be possible to apply something like Adam to this problem?\n - How does AdaGain compare to other adaptive gain algorithms?\n - There are ways to sparsify a representation - simple SGD + L1 regularization is a natural option. How do we know how well AdaGain compares to this more common approach?\n\n* The experiments seemed thin. While I appreciated the fact that it seems that AdaGain was pruning away something, I was left wondering:\n - How generalizable are these results? To be honest, the CompassWorld seems utterly uninteresting, and somewhat simplistic. \n - I am convinced that AdaGain is learning. But it would be interesting to know *what* it is learning. Do the learned GVFs capture any sort of intuitive structure in the domains?",
"Here we provide a synopsis of the results relating to the RNN experiments as mentioned in the previous response. We will look at the cycle world domain as a first test in using RNNs to make predictions using the squared TD loss function. The architecture we use in the experiments is an RNN using 8 GRU cells and truncated back propagation through time (BPTT) with various sequence lengths. Truncated BPTT is used for efficiency of both time and memory, which is necessary for continual learning agents. In the plots provided by an anonymous account here (https://drive.google.com/drive/folders/1n98u7_yFWgv1FL_ztCAznz8kELmQHmgD?usp=sharing) we see the state is difficult to learn when the training input doesn’t encapsulate the entire sequence of the cycle world. This is more apparent for the prediction at the current state (V(S_{t+1})), as the previous time step is more reliable. The single step look ahead prediction for the GVF network is shown as well. When running RNNs in a much larger space, a 100 state cycle world, we see similar behaviour. Learning using RNNs and truncated back propagation through time in these domains is difficult when the used input does not encapsulate the sequence in its entirety and the signal is sparse. As of now we have been unable to use RNNs to create a representation of the compass world capable of answering the evaluation questions from the paper.",
"We would like to thank the reviewers for their helpful comments.\n\nThe overall consensus from the reviews was that the paper was well-written, and presented interesting ideas, but that empirical results did not suggest a clear contribution, compared to existing work. We believe the confusion stems primarily because we have positioned this paper as an exploration and demonstration---not the usual case, and we will try to rectify the confusions here.\n\nFirstly, we would like to remind the reviewers that the current state-of-the-art for predictive-question or auxiliary-task generation, is for a human designer to specify each. The goal of our paper is to provide a first investigation and successful demonstration of an autonomous discovery system---reducing the need for greatly human prior knowledge. We have provided reasonable choices for each component of our system—without suggesting that they are the best choices—to demonstrate the larger discovery framework. In this paper, we demonstrated that a discovery approach with random generation of GVFs, and a filtering strategy to prune GVFs and stabilize learning, was surprisingly effective for the discovery problem.\n\nThere are two specific concerns raised by the reviewers: the lack of justification for AdaGain and the use of micro-worlds. We admit that the relationship of AdaGain to other methods in the literature was unclear; we would like to clarify it now. Stepsize selection is an important problem in many areas, but for learned value functions in RL it is particularly problematic. Most stepsize selection strategies do not easily extend to algorithms that learn value functions, such as temporal difference (TD) algorithms. TD itself is not a gradient-based algorithm, and so it is not particularly suitable to use AdaDelta or AdaGrad. Even gradient TD (GTD) is not a standard SGD algorithm, because the gradients themselves are biased, due to the fact that the auxiliary weights only provide a (poor) estimate of a part of the gradient. We did not mean to imply that this was the first time gradient descent approaches have been used for setting stepsizes; in fact, the cited work, Benveniste (1990) provides a relatively general treatment of stochastic gradient adaptive (SGA) stepsize methods, and provides more specific algorithms for certain settings. These algorithms, and the ones cited by Sutton et al., are similar to the suggestions by Kushner and Yin; we will, however, include Kushner and Yin in the citations, for more on such approaches. \n\n(continues in \"Author Response Part 2\")",
"We would like to thank you for your time in reading our paper. While you feel you don't have much to contribute in regards to feedback, we hope you will participate in the conversation above.",
"We would like to thank you for your helpful review and want to point you towards the author response thread for further discussion into the concerns you mention.",
"We would like to thank you for your helpful review and want to point you towards the author response thread for further discussion into the concerns you mention.",
"However, those approaches cannot be easily applied for the same reasons that AdaDelta and AdaGrad cannot be applied: the objectives for policy evaluation (i.e., learning value functions) make it difficult to apply standard stochastic approximation techniques. Instead, we provide a general formulation to guide the selection of the stepsize, which defines the objective to be the norm of the update. This scheme can be applied even if the update is not a gradient-descent update, and so is more suitable for the TD algorithms used in this work. Nonetheless, the suggestion to provide a baseline is a good one, and we are currently running experiments with AdaDelta, AdaGrad and Adam with GTD. Initial results with AdaDelta in CycleWorld show serious convergence issues, likely because the features change over time and the algorithm is not designed to be robust to either this change nor to application for learning value functions. We are still investigating why AdaDelta fails in this setting, but we hope to include more comprehensive results in a follow-up comment.\n\nThe second concern is the simplicity of the domains used for evaluation. We chose these domains very carefully to analyze the design choices and highlight algorithm challenges. Microworlds will continue to play an important role in reinforcement learning research. Comparing algorithms on benchmark challenge problems is useful and important, but make it very difficult to understand the parts of complex agent architecture. For example, we know a set of non-trivial predictive questions which provide the agent with rough knowledge where it is in the world (approximating compass directions---hence the name). This gives a clear baseline for comparing approaches, like ours, that search the space of predictive questions---not possible in more complex domains where the set of questions is unclear. The CycleWorld and CompassWorld microworlds, were specifically designed for investigating partial observability. They may seem simplistic, but, for example, running a standard feedforward NN on these problems fails, as does any fixed history based methods. These domains have been used in several highly cited papers. Before Atari benchmarking became popular, issue-oriented research conducted in micro-worlds was one of the gold standards of scientific progress in RL. Pursuing only benchmarks has well documented limitations, and we feel that for projects like ours clear understanding is of paramount importance and best illustrated with targeted experiments. \n\nFinally, there is a comment about comparison to RNNs. We had previously avoided further expanding the scope of the paper, by motivating Predictive Representations to handle partial observability, rather than the alternative strategy of using history-based methods, such as RNNs. The Predictive Representation community is sufficiently large (c.f. PSRs, OOMs, TD-nets, TPSRs, General value functions, Auxiliary Tasks, etc.), that it warranted only investigating within that setting. Nonetheless, our next steps were to compare to RNNs, and provide more justification for Predictive Representations. We are currently running experiments with RNNs. Preliminary results indicate some issues with using RNNs, which we will report in a follow-up comment.\n"
] | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ryZElGZ0Z",
"iclr_2018_ryZElGZ0Z",
"iclr_2018_ryZElGZ0Z",
"SyU8oj-zG",
"iclr_2018_ryZElGZ0Z",
"ByqXgftlz",
"HJf2G-cgf",
"Bk0FsiXbM",
"BytVjsWGG"
] |
iclr_2018_SyvCD-b0W | Autostacker: an Automatic Evolutionary Hierarchical Machine Learning System | This work provides an automatic machine learning (AutoML) modelling architecture called Autostacker. Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm. Neither prior domain knowledge about the data nor feature preprocessing is needed. We significantly reduce the time of AutoML with a naturally inspired algorithm - Parallel Hill Climbing (PHC). By parallelizing PHC, Autostacker can provide candidate pipelines with sufficient prediction accuracy within a short amount of time. These pipelines can be used as is or as a starting point for human experts to build on. By focusing on the modelling process, Autostacker breaks the tradition of following fixed order pipelines by exploring not only single model pipeline but also innovative combinations and structures. As we will show in the experiment section, Autostacker achieves significantly better performance both in terms of test accuracy and time cost comparing with human initial trials and recent popular AutoML system. | rejected-papers | The reviewers have pointed out that there is a substantial amount of related work that this paper should be acknowledging and building on. | val | [
"HkIzq6Vez",
"SkYou4teG",
"HJePBM9lz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The author present Autostacker, a new algorithm for combining the strength of different learning algorithms during hyper parameter search. During the first step, the hyperparameter search is done in a conventional way. At the second step, the output of each primitives is added to the features of the original dataset and the training and hyperparameter search starts again. This process is repeated for some number of steps. The experiments are performed on 15 small scale dataset and show that Autostacker is performing better than random forest almost systematically and better than TPOT, the external baseline, 13 times out of 15. Also the speed comparison favor Autostacker vs TPOT.\n\nThis algorithm is not highly innovative. Using the output of some algorithms as the input of another one for learning was seen numerous time in the literature. The novelty here is how exactly it is performed, which is a bit ad hoc. \n\nWhile testing on numerous dataset is important to verify the strength of a learning algorithm, final statistical significance test should be provided e.g. Sign Test, Wilcoxon Signed Rank Test.\n\nThe experiment compares with a weak baseline and a baseline that is unknown to me. Also, the datasets are all small scale which is not representative of modern machine learning. This leaves me very uncertain about the actual quality of the proposed algorithm. \n\nThe strength of the Random Forest baseline could easily be augmented by simply considering the best learning algorithm over validation across the hyper parameter search (i.e. the choice of the learning algorithm is also a hyperparameter). Also a very simple and fast ensemble could be considered by using Agnostic Bayesian Learning of Ensembles (Lacoste et. al.). It is also common to simply consider a linear combination of the output of the different estimator obtained during cross validation and very simple to implement. This would provide other interesting baselines.\n\nFinally the writing of the paper could be highly improved. Many typos, including several badly formatted citations (consider using \\citet and \\citep for a proper usage of parenthesis).\n",
"In this work, the authors propose to apply parallel hill climbing to learn a stacked machine learning model architecture for a particular machine learning problem. Modest experimental results suggests it compares favorably to another evoluation-inspired AutoML algorithm.\n\nWhile the idea of Autostacker is presented clearly, the paper has two severe limitations (detailed comments below). First, the contribution itself is fairly minimal; second, even if the contribution were more substantial, the current presentation does not relate to “learning representations” in any meaningful way.\n\n=== Major comments\n\nFirst, as mentioned above, this paper leans heavily on existing work. Stacking and ensemble methods have become standard approaches in practical machine learning settings (for example, in Kaggle challenges). Likewise, parallel hill climbing (and the closely-related beam search) are common local search strategies for difficult optimization problems. However, it is unclear that combining these yields any unexpected synergies. \n\nIndeed, very similar approaches have been proposed in the literature already. For example, [Welchowski, T. & Schmidt, M. A framework for parameter estimation and model selection in kernel deep stacking networks. Artificial Intelligence in Medicine, 2016, 70, 31-40], propose a very similar model, including search with hill climbing and using the original data at each layer. While they do restrict the considered primitive model type, neither paper offers any compelling theoretical results, so this is largely an implementation detail in terms of novelty.\n\nAdditionally, the paper lacks any discussion about how the architectures may change during search, as well as what sorts of architectures are learned. For example, the given number of layers and nodes are maximums; however, the text just above Algorithm 1 points out that the first step in the algorithm is to “generate N completed pipelines.” What exactly does this mean? If PHC randomly changes one of the architecture hyperparameters, what happens? e.g., which layer is removed? Ultimately, what types of architectures are selected?\n\nFinally, ICLR does not seem like a good venue for this work. As presented, the work does not discuss learning representations in any way; likewise, none of the primitive models in Table 1 are typically considered “representation learning models.” Thus, it is not obvious that Autostacker would be especially effective at optimizing the hyperparameters of those models. Experimental results including these types of models could, in principle, demonstrate that Autostacker is applicable, but the current work does not show this.\n\n=== Minor comments\n\nSection 3.3 seems to advocate training on the testing data. Even if the described approach is common practice (e.g., looking at 10-fold CV results, updating the model, and running CV again), selecting among the models using inner- and outer-validation sets would avoid explicitly using information about the testing set for improving the model.\n\nHow sensitive is the approach to the choice of the number of layers and nodes, both in terms of accuracy and resource usage?\n\nIt would be helpful to include basic characteristics of the datasets used in this study, perhaps as a table in the appendix.\n\n=== Typos, etc.\n\nThe paper has numerous typos and needs thorough editing. The references in the text are not formatted correctly. 
I do not believe this affects understanding the paper, but it definitely disrupts reading.\n\nThe references are inconsistently and incorrectly formatted (e.g., “Bayesian” should be capitalized).\n",
"The authors introduce a simple hill climbing approach to (very roughly) search in the space of cascades of classifiers.\nThey first reinvent the concept of cascades of classifiers as an extension of stacking (https://en.wikipedia.org/wiki/Cascading_classifiers). Cascading is like stacking but carries over all original model inputs to the next classifier.\nThe authors cast this nicely into a network view with nodes that are classifiers and layers that use the outputs from previous layers. However, other than relating this line of work to the ICLR community, this interpretation of cascading is not put to any use. \nThe paper incorrectly claims that existing AutoML frameworks only allow using a specific single model. In fact, Auto-sklearn (Feurer et al, 2015) automatically constructs ensembles of up to 50 models, helping it to achieve more robust performance.\n\nI have some questions about the hillclimbing approach: \n- How is the \"one change\" implemented in the hill climber? Does this evaluate results for each of several single changes and pick the best one? Or does it simply change one classifier and continue? Or does it evaluate all possible individual changes and pick the best one? I note that the term \"HillClimber\" would suggest that some sort of improvement has to be made in each step, but the algorithm description does not show any evaluation step at this point. The hill climbing described in the text seems to make sense, but the pseudocode appears broken.\n\nSection 4.2: I am surprised that there is only a comparison to TPOT, not one to Auto-sklearn. Especially since Auto-sklearn constructs ensembles posthoc this would be an interesting comparison.\nAs the maximum range of number of layers is 5, I assume that scaling is actually an issue in practice after all, and the use of hundreds of primitive models alluded to in the introduction are not a reality at this point.\n\nThe paper mentions guarantees twice:\n- \"This kind of guarantee of not being worse on average comes from the the characteristic of AutoStacked\"\n- \"can be guaranteed to do better on average\"\nI am confident that this is a mistake / an error in choosing the right expression in English. I cannot see why there should be a guarantee of any sort.\n\nEmpirically, Autostacker appears better than RandomForest, but that is not a big feat. The improvements vs. TPOT are more relevant. One question: the data sets used in Olson et al are very small. Does TPOT overfit on these? Since AutoStacker does not search as exhaustively, could this explain part of the performance difference? How many models are evaluated in total by each of the methods?\n\nI am unsure about the domain for the HillClimber. Does it also a search over which classifiers are used where in the pipeline, or only about their hyperparameters?\n\nMinor issues:\n- The authors systematically use citations wrongly, apparently never using citep but only having inline citations.\n- Some parts of the paper feel unscientific, such as using phrases like \"giant possible search space\". \n- There are also several English grammar mistakes (e.g., see the paragraph containing \"for the discover\") and typos.\n- Why exactly would a small amount of data be more likely to be unbalanced?\n- The data \"cleaning\" method of throwing out data with missing values is very unclean. 
I hope this has only been applied to the training set and that no test set data points have been dropped?\n- Line 27 of Algorithm 1: sel_pip has not been defined here\n\nOverall, this is an interesting line of work, but it does not seem quite ready for publication.\n\nPros: \n- AutoML is a topic of high importance to both academia and industry\n- Good empirical results \n\nCons: \n- Cascading is not new\n- Unclear algorithm: what exactly does the Hillclimber function do?\n- Missing baseline comparison to Auto-sklearn\n- Incorrect statements about guarantees"
] | [
4,
3,
4
] | [
5,
4,
5
] | [
"iclr_2018_SyvCD-b0W",
"iclr_2018_SyvCD-b0W",
"iclr_2018_SyvCD-b0W"
] |
iclr_2018_H1Ww66x0- | Lifelong Learning with Output Kernels | Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations (\textit{output} kernel) and the per-task model parameters at each round, assuming data arrives in a streaming fashion. We propose a novel algorithm called \textit{Online Output Kernel Learning Algorithm} (OOKLA) for lifelong learning setting. To avoid the memory explosion, we propose a robust budget-limited versions of the proposed algorithm that efficiently utilize the relationship between the tasks to bound the total number of representative examples in the support set. In addition, we propose a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning. Our empirical results over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines. | rejected-papers | The output kernel idea for lifelong learning is interesting, but insufficiently developed in the current draft. | test | [
"SyZQxkmxG",
"SySnNRUxz",
"rklBKmcgG",
"SJBuJy0Xz",
"S1Ikdi67G",
"Skv3vop7M",
"SJ0_PiTXM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"CONTRIBUTION\nThe main contribution of the paper is not clearly stated. To the reviewer, It seems “life-long learning” is the same as “online learning”. However, the whole paper does not define what “life-long learning” is.\nThe limited budget scheme is well established in the literature. \n1. J. Hu, H. Yang, I. King, M. R. Lyu, and A. M.-C. So. Kernelized online imbalanced learning with fixed budgets. In AAAI, Austin Texas, USA, Jan. 25-30 2015.
\n2. Y. Engel, S. Mannor, and R. Meir. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing, 52(8):2275–2285, 2004.\nIt is not clear what the new proposal in the paper.\n\nWRITING QUALITY\nThe paper is not well written in a good shape. Many meanings of the equations are not stated clearly, e.g., $phi$ in eq. (7). Furthermore, the equation in algorithm 2 is not well formatted. \n\nDETAILED COMMENTS\n1. The mapping function $phi$ appears in Eq. (1) without definition.\n2. The last equation in pp. 3 defines the decision function f by an inner product. In the equation, the notation x_t and i_t is not clearly defined. More seriously, a comma is missed in the definition of the inner product.\n3. Some equations are labeled but never referenced, e.g., Eq. (4).\n4. The physical meaning of Eq.(7) is unclear. However, this equation is the key proposal of the paper. For example, what is the output of the Eq. (7)? What is the main objective of Eq. (7)? Moreover, what support vectors should be removed by optimizing Eq. (7)? One main issue is that the notation $phi$ is not clearly defined. The computation of f-y_r\\phi(s_r) makes it hard to understand. Especially, the dimension of $phi$ in Eq.(7) is unknown. \n\nABOUT EXPERIMENTS\n1.\tIt is unclear how to tune the hyperparameters.\n2.\tIn Table 1, the results only report the standard deviation of AUC. No standard deviations of nSV and Time are reported.\n",
"Summary: The paper proposed a two-dimensional approach to lifelong learning, in the context of multi-task learning. It receives instances in an online setting, where both the prediction model and the relationship between the tasks are learnt using a online kernel based approach. It also proposed to use budgeting techniques to overcome computational costs. In general, the paper is poorly written, with many notation mistakes and inconsistencies. The idea does not seem to be novel, technical novelty is low, and the execution in experiments does not seem to be reliable. \n\nQuality: No obvious mistakes in the proposed method, but has very low novelty (as most methods follows existing studies in especially for online kernel learning). Many mistakes in the presentation and experiments. \n\nOriginality: The ideas do not seem to be novel, and are mostly (trivially) using existing work as different components of the proposed technique. \n\nClarity: The paper makes many mistakes, and is difficult to read. [N] is elsewhere denoted as \\mathbb{N}. The main equation of Algorithm 2 merges into Algorithm 3. Many claims are made without justification (e.g. 2.2. “Cavallanti 2012 is not suitable for lifelong learning”… why?; “simple removal scheme … highest confidence” – what is the meaning of highest confidence?), etc. The removal strategy is not at all well explained – the objective function details and solving it are not discussed. \n\nSignificance: There is no theoretical guarantee on the performance, despite the author’s claiming this as a goal in the introduction itself (“goal of lifelong learner … computation”). The experiments are not reliable. Perceptron obtains a better performance than PA algorithms – which is very odd. Moreover, many of the multi-task baselines obtain a worse performance than a simple perceptron (which does not account for multi-task relationships). \n",
"The paper proposes a budgeted online kernel algorithm for multi-task learning. The main contribution of the paper is an online update of the output kernel, which measures similarity between pairs of tasks. The paper also proposes a removal strategy that bounds the number of support vectors in the kernel machine. The proposed algorithm is tested on 3 data sets and compared with several baselines.\n Positives:\n- the output kernel update is well justified\n- experimental results are encouraging\n Negatives:\n- the methodological contribution of the paper is minimal\n- the proposed approach to maintain the budget is simplistic\n- no theoretical analysis of the proposed algorithm is provided\n- there are issues with the experiments: the choice of data sets is questionable (all data sets are very small so there is not need for online learning or budgeting; newsgroups is a multi-class problem, so we would want to see comparisons with some good multi-class algorithms; spam data set might be too small), it is not clear what were hyperparameters in different algorithms and how they were selected, the budgeted baselines used in the experiments are not state of the art (forgetron and random removal are known to perform poorly in practice, projectron usually works much better), it is not clear how a practitioner would decide whether to use update (2) or(3)",
"We thank all the reviewers for the helpful comments, which we will take into account in our revision. We give our major clarifications points here.\n\n “Minimal contribution”:\nIn essence, our paper has three main contributions:\n1) proposing simple update rules for learning task relationship \nUnlike in (Jawanpuria 2015), we proposed simple sequential updates for learning task relationship matrix in addition to satisfying its positive semi-definite constraint at each iteration. These update equations are unique to online lifelong learning setting and doesn’t require access to the entire input kernel matrix. The proposed algorithm can easily scales to large datasets with many tasks.\n\n2) incorporating task relationship in the budgeted scheme \nOur proposed scheme consists of a simple removal step that utilizes the task relationship. Unlike in the previous work, we remove an example from the support set S by considering both the similarity between the examples (via confidence of the models) and the relationship between the tasks with less runtime per removal step. The proposed method empirically outperforms (both in terms of AUC and Time taken) other multitask budgeted learning schemes.\n\n3) two-stage budgeted approach for lifelong learning\nTo the best of our knowledge, our paper proposed the first practical budgeted scheme for lifelong multitask learning. The two-stage budgeted scheme allows us to use expensive budgeted schemes with best retention policy such as Projectron on task-specific budget T_k instead of S. \n\nWe will clarify these points along with additional details on update equations and budgeted removal step.\n\n“Smaller datasets”\nAll the datasets in our experiments are widely accepted benchmarks in online multi-task and lifelong learning evaluations (See Pentina 2016, Murugesan 2016). We chose these 3 datasets for two main reasons: 1) for a fair comparison with the current online multi-task learning methods such as OSMTL, OMTRL, etc. 2) to consider different type of tasks that one may encounter in many practical applications such as spam detection, sentiment analysis, etc. \n\nWe plan to include additional experiments on datasets with large number of tasks in the revised version.\n\n“Hyper-parameters”\nWe tuned all the hyper-parameters via 5-fold cross validation. We will include additional details on the hyper-parameters of the baselines for clarity.\n\n“Theoretical analysis”\nWe are currently working on the theoretical bounds for the proposed lifelong learning approach. We will derive the generalization bounds for lifelong learning setting with respect to some unknown task-generating probability distribution.\n",
"“Difference between online multitask learning and lifelong learning”\nAs mentioned in Page 2 Paragraph 3, the key difference is that the online multitask learning, unlike in the lifelong learning, may require that the number of tasks be specified beforehand. Most existing online multitask learning algorithms utilize this additional knowledge for learning the task relationship such as FOML, OMTRL, OSMTL, etc.\n\n“References for budgeted schemes”\nThank you for the additional references. The budget schemes from Hu et al. use the similarity between the examples for removal, on the other hand, our proposed budget schemes consider both the similarity between the examples and the relationship between the tasks to identify an example to remove. The sparsification procedure considered in Engel et al. will suffer from scalability issues similar to the Multitask Projectron as discussed in the paper.\n\n\\phi(.) is the feature function used in kernel learning. We will make this clear with all other clarifications in our revised version.\n",
"“Cavallanti 2012 is impractical for lifelong learning”\nThe multitask variants of the budgeted schemes in Cavallanti 2012 assumes that the relationship between the tasks are known a priori. In addition to the unknown number of tasks, they don’t scale to lifelong learning setting since the tasks arrive sequentially. \n\n“Confidence in removal step”\nThe confidence is measured using the margin i.e., how far an example x_r is from the margin after removing it from S. \n\n“multitask baselines worse than perceptron”\nPerceptron in Table 1 shows the results for single-task setting where it builds one models for all the tasks, whereas PA shows the results for independent task learning where it learns independent model for each task. \n\nSince Perceptron, FOML and OSMTL cannot learn the negative correlation between the tasks, the results of Perceptron, FOML and OSMTL are similar in newsgroup datasets. Note that the results for Perceptron is comparable to that of FOML and OSMTL as it sets \\Omega_{ij}=1 for all i,j. In case of sentiment dataset, we can see that FOML and OSMTL outperform Perceptron as they consider the task relationship. We will fix this in the revised version.\n",
"“all data sets are very small so” \n*) “there is no need for online learning”\nOur focus in this paper is on the scenario where training examples are insufficient for each single task. In other words, we are interested in a lifelong learning setting where we see large number of tasks with limited set of labeled examples per task. \n\n*) “or budgeting”\nThe budget/support set S contains examples from all the tasks. Even though the number of examples per task is small, the tasks arrive sequentially (with unknown horizon). The new examples are added to S over several rounds. Without any bound on the size of S, we will face the memory explosion problem as discussed in the Introduction section.\n\n“newsgroups is a multi-class problem”\nWe use newsgroup dataset to demonstrate the effectiveness of the proposed algorithm to learn the (positive, negative, no) correlation between the tasks with our simple update rules. In this experiment, each task identifies the subject group (comp and talk.politics) rather than the class.\n\n“budgeted baselines are not state-of-the-art”\nOur baselines in Table 2 are specific to multitask and lifelong learning setting that considers relationship between the tasks for removal step (See Cavallanti 2012). In addition, Projectron has one of the best retention policies for budgeted learning algorithms. \n"
] | [
3,
2,
4,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1Ww66x0-",
"iclr_2018_H1Ww66x0-",
"iclr_2018_H1Ww66x0-",
"iclr_2018_H1Ww66x0-",
"SyZQxkmxG",
"SySnNRUxz",
"rklBKmcgG"
] |
iclr_2018_SkFvV0yC- | Network Iterative Learning for Dynamic Deep Neural Networks via Morphism | In this research, we present a novel learning scheme called network iterative learning for deep neural networks. Different from traditional optimization algorithms that usually optimize directly on a static objective function, we propose in this work to optimize a dynamic objective function in an iterative fashion capable of adapting its function form when being optimized. The optimization is implemented as a series of intermediate neural net functions that is able to dynamically grow into the targeted neural net objective function. This is done via network morphism so that the network knowledge is fully preserved with each network growth. Experimental results demonstrate that the proposed network iterative learning scheme is able to significantly alleviate the degradation problem. Its effectiveness is verified on diverse benchmark datasets. | rejected-papers | The paper presents a variant of network morphism (Wei et al., 2016) for dynamically growing deep neural networks. There are some novel contributions (such as OptGD for finding a morphism given the parent network layer). However, in the current form, the experiments mostly focus on comparisons against fixed network structure (but this doesn't seem like a strong baseline, given Wei et al.'s work), so the paper should provide more comparisons against Wei et al. (2016) to highlight the contribution of this work. In addition, the results will be more convincing if the state-of-the-art performance can be demonstrated for large-scale problems (such as ImageNet classification). | train | [
"S1YX1ptgz",
"BkdUwGqgz",
"By4qQIQ-f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This submission develops a learning scheme for training deep neural networks with adoption of network morphism (Wei et al., 2016), which optimizes a dynamic objective function in an iterative fashion capable of adapting its function form when being optimized, instead of directly optimizing a static objective function. Overall, the idea looks interesting and the manuscript is well-written. The shown experimental results should be able to validate the effectiveness of the learning scheme to some extent.\n\nIt would be more convincing to include the performance evaluation of the learning scheme in some representative applications, since the optimality of the training objective function is not necessarily the same as that of the trained network in the application of interest.\n\nBelow are two minor issues:\n\n- In page 2, it is stated that Fig. 2(e) illustrates the idea of the proposed network iterative learning scheme for deep neural networks based on network morphism. However, the idea seems not clear from Fig. 2(e).\n\n- In page 4, “such network iterative learning process” should be “such a network iterative learning process”.",
"This paper proposes an iterative approach to train deep neural networks based on morphism of the network structure into more complex ones. The ideas are rather simple, but could be potentially important for improving the performance of the networks. On the other hand, it seems that an important part of the work has already been done before (in particular Wei et al. 2016), and that the differences from there are very ad-hoc and intuition for why they work is not present. Instead, the paper justifies its approach by arguing that the experimental results are good. Personally, I am skeptical with that, because interesting ideas with great added value usually have some cool intuition behind them. The paper is easy to read, and there does not seem to exist major errors. Because I am not an active researcher in the topic, I cannot judge if the benefits that are shown in the experiments are enough for publication (the theoretical part is not the strongest of the paper).",
"This paper proposed an iterative learning scheme to train a very deep convolutional neural network. Instead of learning a deep network from scratch, the authors proposed to gradually increase the depth of the network while transferring the knowledge obtained from the shallower network by applying network morphism. \n\nOverall, the paper is clearly written and the proposed ideas are interesting. However, many parts of the ideas discussed in the paper (Section 3.3) are already investigated in Wei et al., 2016, which limits the novel contribution of the paper. Besides, the best performances obtained by the proposed method are generally much lower than the ones reported by the existing methods (e.g. He et al., 2016) except cifar-10 experiment, which makes it hard for the readers to convince that the proposed method is superior than the existing ones. More thorough discussions are required.\n"
] | [
7,
5,
5
] | [
4,
2,
3
] | [
"iclr_2018_SkFvV0yC-",
"iclr_2018_SkFvV0yC-",
"iclr_2018_SkFvV0yC-"
] |
iclr_2018_SJQO7UJCW | Adversarial Learning for Semi-Supervised Semantic Segmentation | We propose a method for semi-supervised semantic segmentation using the adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve the performance on semantic segmentation by coupling the adversarial loss with the standard cross entropy loss on the segmentation network. In addition, the fully convolutional discriminator enables the semi-supervised learning through discovering the trustworthy regions in prediction results of unlabeled images, providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images without any annotation to enhance the segmentation model. Experimental results on both the PASCAL VOC 2012 dataset and the Cityscapes dataset demonstrate the effectiveness of our algorithm. | rejected-papers | The paper presents a reasonable idea, probably an improved version of method (combination of GAN and SSL for semantic segmentation) over the existing works. Novelty is not ground-breaking (e.g., discriminator network taking only pixel-labeling predictions, application of self-training for semantic segmentation---each of this component is not highly novel by itself). It looks like a well-engineered model that manages to get a small improvement with a semi-supervised learning setting. However, given that the focus of the paper is on semi-supervised learning, the improvement from the proposed loss (L_semi) is fairly small (0.4-0.8%). | train | [
"r1RDwROeG",
"H1Op4eqlM",
"SJRdYLhgM",
"ByPmwO0bf",
"B1B1w_AWf",
"ByN58_AWf",
"HkbN6Aplz",
"B1exQRalz",
"BkXRaopxz",
"HydNvqQgf",
"BJ_zPCggz",
"BJJhDZakf",
"B1YQsg61M",
"H1wcIfh1z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public",
"public",
"author",
"public"
] | [
"This paper describes techniques for training semantic segmentation networks. There are two key ideas:\n\n- Attach a pixel-level GAN loss to the output semantic segmentation map. That is, add a discriminator network that decides whether each pixel in the label map belongs to a real label map or not. Of course, this loss alone is unaware of the input image and would drive the network to produce plausible label maps that have no relation to the input image. An additional cross-entropy loss (the standard semantic segmentation loss) is used to tie the network to the input and the ground-truth label map, when available.\n\n- Additional unlabeled data is utilized by using a trained semantic segmentation network to produce a label map with associated confidences; high-confidence pixels are used as ground-truth labels and are fed back to the network as training data.\n\nThe paper is fine and the work is competently done, but the experimental results never quite come together. The technical development isn’t surprising and doesn’t have much to teach researchers working in the area. Given that the technical novelty is rather light and the experimental benefits are not quite there, I cannot recommend the paper for publication in a first-tier conference.\n\nSome more detailed comments:\n\n1. The GAN and the semi-supervised training scheme appear to be largely independent. The GAN can be applied without any unlabeled data, for example. The paper generally appears to present two largely independent ideas. This is fine, except they don’t convincingly pan out in experiments.\n\n2. The biggest issue is that the experimental results do not convincingly indicate that the presented ideas are useful.\n2a. In the “Full” condition, the presented approach does not come close to the performance of the DeepLab baseline, even though the DeepLab network is used in the presented approach. Perhaps the authors have taken out some components of the DeepLab scheme for these experiments, such as multi-scale processing, but the question then is “Why?”. These components are not illegal, they are not cheating, they are not overly complex and are widely used. If the authors cannot demonstrate an improvement with these components, their ideas are unlikely to be adopted in state-of-the-art semantic systems, which do use these components and are doing fine.\n2b. In the 1/8, 1/4, and 1/2 conditions, the performance of the baselines is not quoted. This is wrong. Since the authors are evaluating on the validation sets, there is no reason not to train the baselines on the same amount of labeled data (1/8, 1/4, 1/2) and report the results. The training scripts are widely available and such training of baselines for controlled experiments is commonly done in the literature. The reviewer is left to suspect, with no evidence given to the contrary, that the presented approach does not outperform the DeepLab baseline even in the reduced-data conditions.\n\nA somewhat unflattering view of the work would be that this is another example of throwing a GAN at everything to see if it sticks. In this case, the experiments do not indicate that it did.",
"This paper proposed an approach for semi-supervised semantic segmentation based on adversarial training. Built upon a popular segmentation network, the paper integrated adversarial loss to incorporate unlabeled examples in training. The outputs from the discriminator are interpreted as indicators for the reliability of label prediction, and used to filter-out non-reliable predictions as augmented training data from unlabeled images. The proposed method achieved consistent improvement over existing state-of-the-art on two challenging segmentation datasets.\n\nAlthough the motivation is reasonable and the results are impressive, there are some parts that need more clarification/discussion as described below.\n\n1) Robustness of discriminator output:\nThe main contribution of the proposed model is exploiting the outputs from the discriminator as the confidence score maps of the predicted segmentation labels. However, the outputs from the discriminator indicate whether its inputs are from ground-truth labels or model predictions, and may not be directly related to ‘correctness’ of the label prediction. For instance, it may prefer per-pixel score vectors closed to one-hot encoded vectors. More thorough analysis/discussions are required to show how outputs from discriminator are correlated with the correctness of label prediction. \n\n2) Design of discriminator\nI wonder if conditional discriminator fits better for the task. i.e. D(X,P) instead of D(P). It may prevent the model generating label prediction P non-relevant to input X by adversarial training, and makes the score prediction from the discriminator more meaningful. Some ablation study or discussions would be helpful.\n\n3) Presentations\nThere are several abused notations; notations for the ground-truth label P and the prediction from the generator S(X) should be clearly separated in Eq. (1) and (4). Also, it would better to find a better notation for the outputs from D instead of D^(*,0) and D^(*,1). \nTraining details in semi-supervised learning would be helpful. For instance, the proposed semi-supervised learning strategy based on Eq. (5) may be suffered by noise outputs from the discriminator in early training stages. I wonder how authors resolved the issues (e.g. training the generator and discriminator are with the labeled example first and extending it to training with unlabeled data.) \n",
"The paper presents an alternative adversarial loss function for image segmentation, and an additional loss for unlabeled images.\n\n+ well written\n+ good evaluation\n+ good performance compared to prior state of art\n- technical novelty\n- semi-supervised loss does not yield significant improvement\n- missing citations and comparisons\n\nThe paper is well written, structured, and easy to read.\nThe experimental section is extensive, and shows a significant improvement over prior state of the art in semi-supervised learning.\nUnfortunately, it is unclear what exactly lead to this performance increase. Is it a better baseline model? Is the algorithm tuned better, or is there something fundamentally different compared to prior work (e.g. Luc 2016).\n\nFinally, it would help if the authors could highlight their technical difference compared to prior work. The presented adversarial loss is similar to Luc 2016 and \"Image-to-Image Translation with Conditional Adversarial Networks, Isola etal 2017\". What is different, and why is it important?\nThe semi-supervised loss is similar to Pathak 2015a, it would help to highlight the difference, and show experimentally why it matters.\n\nIn summary, the authors should highlight the difference to prior work, and show why the proposed changes matter.",
"We thank the comments and address the raised questions below. \n\nQ1. Why do the authors present two largely independent ideas?\n\nThe novelty of this work is to incorporate adversarial learning for dense predictions under the semi-supervised setting without image synthesis. The adversarial learning and semi-supervised learning are not independent in our work. Without the successfully trained discriminator network, the proposed semi-supervised learning does not work well. The ablation study in Table 6 shows that without adversarial loss, the discriminator would treat most of the prediction pixels with low confidence of, providing noisy masks and leading to degenerated performance (drops from 68.8% to 65.7%).\n\nQ2. Why don’t the author use the full DeepLab model?\n\nWe implement our baseline model based on the DeepLab in PyTorch for the flexibility in training the adversarial network. We did not use the multi-scale mode in the DeepLab due to the memory concern in section 4.2, in which the modern GPU cards such as Nvidia TitanX with 12 GB memory are not affordable to train the network with a proper batch size. Although this issue may be addressed by the accumulated gradient (e.g., iter_size in Caffe), in PyTorch the accumulated gradient implementation still has issues (ref: https://discuss.pytorch.org/t/how-to-implement-accumulated-gradient/3822/12). We have also verified that it does not work in the current PyTorch version.\n\nHowever, our main point of the paper is to demonstrate the effectiveness of proposed method against our baseline model shown in Table 1 and 2. In fact, our baseline model already performs better than other existing works in Table 3 and 4.\n",
"We thank the comments and address the raised questions below. \n\nQ1. How are outputs from discriminator correlated with the correctness of label prediction?\n\nT_semi, # of Selected Pixels (%), Average Pixel Accuracy (%)\n0, 100%, 92.65%\n0.1, 36%, 99.84%\n0.2, 31%, 99.91%\n0.3, 27%, 99.94% \n\nIn the above table on the Cityscapes dataset, we show the average numbers and the average accuracy rates of the selected pixels with different values of T_semi as in (5) of the paper. With a higher T_semi, the discriminator outputs are more confident (similar to ground truth label distributions) and lead to more accurate pixel predictions. Also, as a trade-off, the higher threshold (T_semi), the fewer pixels are selected for back-propagation. This trade-off could also be observed in Table 5 in the paper. We will add more analysis to the paper.\n\nQ2. What’s the performance of D(X,P) compared to D(P)?\n\nWe conduct the experiment using D(X,P) instead of D(P) by concatenating the RGB channels with the class probability maps as the input to the discriminator. However, the performance drops to 72.6% on the PASCAL dataset (baseline: 73.6%). We observe that the discriminator loss stays high during the optimizing process and could not produce meaningful gradients. One reason could be that the RGB distributions between real and fake ones are highly similar, and adding this extra input could lead to optimization difficulty for the discriminator network. Therefore, it is reasonable to let the segmentation network consider RGB inputs for segmentation predictions, while the discriminator focuses on distinguishing label distributions. Note that, in Luc2016, similarly the discriminator structure on PASCAL does not include RGB images as inputs. We will add more results and discussions in the paper.\n\nQ3. The notation P in (1) and (4) is not clear.\n\nThanks for the recommendation. We revise (1) and (4) in the paper for better presentations.\n\nQ4. What are the training details in semi-supervised learning?\n\nWe include the details of semi-supervised training algorithm in the revised paper. As the reviewer points out, initial inputs may be noisy, and we tackle this issue by applying the semi-supervised learning after 5k iterations.\n",
"We thank the comments and address the raised questions below. \n\nQ1. What is the major novelty of this work?\n\nThe novelty of this work is to incorporate adversarial learning for dense predictions under the semi-supervised setting without image synthesis. To facilitate the semi-supervised learning, we propose a fully-convolutional discriminator network that provides confident predictions spatially for training the segmentation network, thereby allowing us to better model the uncertainty of unlabeled images in the pixel level. Our model achieves improvement over the baseline model by incorporating this semi-supervised strategy.\n\nQ2. What are the major differences between this work and Luc2016?\n\nThe major differences between our work and Luc2016 are listed below:\n- We propose a unified discriminator network structure for various datasets, while Luc2016 designs one network for each dataset.\n- We show that the simplest one-hot encoding of ground truth works well with adversarial learning. The “scale” encoding proposed in Luc2016 does not lead to a performance gain in our experiments.\n- We propose a semi-supervised method coupled with adversarial learning using unlabeled data.\n- We conduct extensive parameter analysis on both adversarial learning and semi-supervised learning, showing that our proposed method performs favorably against Luc2016 with the proper balance between supervised loss, adversarial loss, and semi-supervised loss. \n\nQ3. Differences between this work and Pix2Pix (Isola 2017)?\n\nOur discriminator network works on probability space, while Pix2Pix and other GAN works are on the RGB space. In addition, the target task of Pix2Pix is image translation, and ours is semantic segmentation.\n\nQ4. Difference between this work and constrained CNN (Pathak 2015a)?\n\nIn Constrained CNN (CCNN), the setting is weak supervision where image labels are required during training. In our work, we use completely unlabeled images in a semi-supervised setting. Thus, the constraints used by CCNN are not applicable to our scenario where image labels are not available. \n\nIn CCNN, they design a series of linear constraints on the label maps, such as those on the segment size and foreground/background ratio, to iteratively re-train the segmentation network. Our framework is more general than CCNN in the sense that we do not impose any hand-designed constraints that need careful designs for specific datasets. Take the Cityscapes dataset as an example, the fg/bg constraint in CCNN does not work in this dataset since there is no explicit background label. The minimum segment size constraint does not make sense either, especially for thin and small objects that frequently appear in road scenes. In contrast, we propose a discriminator with adversarial learning to automatically generate the confident maps, thereby providing useful information to train the segmentation network using unlabeled data.\n",
"Oh! That makes total sense. Thanks a lot for taking time to go through my code. \nI will make the change. \n\nAlso, did you use any strategies like, one-sided label smoothing, label flipping etc for stabilizing the GAN training? Or it should work with the settings mentioned in the paper?\n\n",
"Hi Mohit,\n\nI found an issue with your implementation. When generating the probability maps, we use SoftMax() instead of LogSoftmax(). If you use LogSoftmax(), the output range will not be 0-1, and the discriminator could easily judge whether the input comes from ground truth or prediction. You can observe the loss of the discriminator whether it is stabilized or not. In our case, the discriminator loss ranges from 0.2-0.4 throughout the training process.",
"Thanks for your comments.\n\n I was working on stabilizing the GAN training. I couldn't reproduce a significant improvement in mIoU by incorporating adversarial training. I was only able to go up from 68.86% to 68.96% for one of the baseline model. From my side, I have tried to include all the details from the paper. \n\nThis is my training scheme if you want to have a look. https://gist.github.com/mohitsharma916/c950864e68f719d69a4fbcae3077cf8f\n\nand the complete implementation is here\nhttps://github.com/mohitsharma916/Adversarial-Semisupervised-Semantic-Segmentation\n\nIn the meanwhile, I will move on to the semi-supervised training.\n\nLooking forward to getting my hands on your implementation to see what I missed. Thanks again for your work.\n",
"Hi Mohit,\n\nThanks for the suggestion. We will add the upsampling details in the following revision. For your information, we will release the source code after the review process.\n\nRegarding your questions:\n\n1. Yes, we think the way you are implementing it is the same to ours.\n\n2. Yes, the weight decay/momentum of the discriminator are the same with the generator.\n\nThanks.",
"Based on your suggestions, I changed my upsampling layers from learnable transposed convolution to simple bilinear upsampling and achieved a mIoU of 69.78. ( As far as I know, now the only difference I have from your submission is using MS COCO pre-trained weights for segmentation network instead of Imagenet. I think I have good enough baseline to continue to the adversarial and semi-supervised training and see if I get a boost by incorporating them on top of my current baseline.) I feel that, because the choice of the upsampling method was so critical in achieving the reported performance of the segmentation network, it would be really helpful if this detail is included in the paper. Anyways, thanks again for giving out the details. \n\nI would like to ask a few things about the adversarial training used in the paper. \n\n1> What scheme did you use for the adversarial-training? \nMy current idea is something along this line: Take a minibatch of the training set. Perform one forward pass of the segmentation network on this minibatch and update the segmentation-network parameters. For discriminator, calculate the discriminator loss on the class-probability map produced by the segmentation network for the current mini-batch. Then, calculate the discriminator loss on the ground-truth label for the same minibatch. Aggregate the two loss (sum or mean?) and update the discriminator parameters. \n\n2> I am not sure about the parameters for the discriminator optimizer. Did you use Nesterov acceleration with Adam? What is the weight decay used (same as generator?)? (I only have a superficial understanding of Adam optimizer. So, I might be missing something obvious. )\n\nThanks.",
"Thanks for your reply. I'll get back to you if I need more help with the experiments. ",
"Hi Mohit,\n\nThanks for interesting in our work. Here are some details that can help yo reproduce our baseline:\n\n1. Upsampling module: We use 2D bilinear upsampling in our segmentation model (essentially nn.upsample in PyTorch). We use one upsampling module with 8x instead of using three 2x layers. In your case, it would be equivalent to 3 ConvTranspose2D layers with their coefficients initialized as the bilinear kernel with zero learning rate. Intuitively, having upsampling layers with learnable parameters might have better performance due to larger model capacity. But in both our experiments and the original FCN paper from Long et al., learning upsampling does not show significant improvement but introduce much computational overhead in training process.\n\n2. As mentioned in the paper, we use the Resnet-101 model that is pretrained on the ImageNet. We use the mean and variance for data normalization as the same during pretraining. If you choose to use the torchvision models from PyTorch, the standard data processing transforms are listed in their official docs.\n\nWe wish the information can help you in your experiments. Let us know if you encounter any issue. Good luck on the challenge!",
"Thanks a lot for your work. I was trying to reproduce the results of your submission as part of the Reproducibility Challenge. For the baseline model, I have achieved a 52 % mIoU so far. I would like to clarify a few details that might be helpful in replicating the results:\n\n1> What method have you used during the training for upsampling the output map of the DeepLab-v2 network to size 321x321 (input image size for training in PASCALVOC). Currently, I have 3 ConvTranspose2D layers (corresponding to each downsampling layer in the DeepLap-v2 network), each upsampling by a factor of 2. \n\n2> Did you use any other common data preprocessing (like Normalization to 0 mean and 1 variance) ?\n\nIs there any other significant detail that would be helpful in improving the results to match those in the paper?\n\nThanks again for your work. "
] | [
5,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJQO7UJCW",
"iclr_2018_SJQO7UJCW",
"iclr_2018_SJQO7UJCW",
"r1RDwROeG",
"H1Op4eqlM",
"SJRdYLhgM",
"B1exQRalz",
"BkXRaopxz",
"HydNvqQgf",
"BJ_zPCggz",
"BJJhDZakf",
"B1YQsg61M",
"H1wcIfh1z",
"iclr_2018_SJQO7UJCW"
] |
iclr_2018_SyVVXngRW | Deep Asymmetric Multi-task Feature Learning | We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL) which can learn deep representations shared across multiple tasks while effectively preventing negative transfer that may happen in the feature sharing process. Specifically, we introduce an asymmetric autoencoder term that allows reliable predictors for the easy tasks to have high contribution to the feature learning while suppressing the influences of unreliable predictors for more difficult tasks. This allows the learning of less noisy representations, and enables unreliable predictors to exploit knowledge from the reliable predictors via the shared latent features. Such asymmetric knowledge transfer through shared features is also more scalable and efficient than inter-task asymmetric transfer. We validate our Deep-AMTFL model on multiple benchmark datasets for multitask learning and image classification, on which it significantly outperforms existing symmetric and asymmetric multitask learning models, by effectively preventing negative transfer in deep feature learning. | rejected-papers | The paper proposes a multitask deep learning method (called Deep-AMTFL) for preventing negative transfer. Despite some positive experimental results, the contribution of the paper is not sufficient for publication at ICLR due to several issues: similarity between the proposed method and existing method (e.g., AMTL), unclear rationale/intuition of the proposed model, clarity of presentation, technical formulation, and limited empirical evaluations (see reviewer comments for details). No author rebuttal was submitted. | train | [
"ryAf2-ugz",
"H11NN0KgG",
"S1T4ik9ef"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a deep asymmetric multi-task feature learning method (Deep-AMTFL).\n\nOne concern is that the high similarity between the proposed Deep-AMTFL and an existing AMTL method. Even though AMTL operates on task relations and Deep-AMTFL is on feature learning, the main ideas of both methods are very similar, that is, tasks with higher training losses will contribute less to other tasks' model or feature representations. Even though the regularizers seem a bit different, the large similarity with AMTL decreases the novelty of this work.\n\nIn real-world experiments, it is better to show the difference of learned features among the proposed Deep-AMTFL and other baselines.\n\nA minor problem: the last sentence in page 3 is incomplete.\n\n",
"Summary: The paper proposes a multi-task feature learning framework with a focus on avoiding negative transfer. The objective has two kinds of terms to minimise: (1) The reweighed per-task loss, and (2) Regularisation. The new contribution is an asymmetric reconstruction error in the regularisation term, and one parameter matrix in the regulariser influences the reweighing of the pre-task loss. \n\nStrength: \nThe method has some contribution in dealing with negative transfer. The experimental results are positive.\nWeakness:\nSeveral issues in terms of concept, methodology, experiments and analysis.\n\nDetails:\n1. Overall conceptual issues.\n1.1. Unclear motivation re prior work. The proposed approach is motivated by the claim that GO-MTL style models assumes symmetric transfer where bad tasks can hurt good tasks. This assertion seems flawed. The point of grouping/overlap in “GO”-MTL is that a “noisy”, “hard”, or “unrelated\" task can just take its own latent predictor that is disjoint from the pool of predictors shared by the good/related tasks. \nCorrespondingly, Fig 2 seems over-contrived. A good GO-MTL solution would assign the noisy task $w_3$ its own latent basis, and let the two good tasks share the other two latent bases. \n\n1.2 Very unclear intuition of the algorithm. In the AMTFL, task asymmetry is driven by the per-task loss. The paper claims this is because transfer must go from easy=>hard to avoid negative transfer. But this logic relies on several questionable assumptions surrounding conflating the distinct issues of difficulty and relatedness: (i) There could be several easy tasks that are totally un-related. One could construct synthetic examples with data that are trivially separable (easy) but require unrelated or orthogonal classifiers. (ii) A task could appear to be “easy\" just by severe overfitting, and therefore still be detrimental to transfer despite low loss. (iii) A task could be very \"difficult\" in the sense of high loss, but it could still be perfectly learned in the sense of finding the ideal \"ground-truth” classifier, but for a dataset that is highly non-separable in the provided feature-space. Such a perfectly learned classifier may still be useful to transfer despite high loss. (iv) Analogous to point (i), there could be several “difficult” tasks that are indeed related and should share knowledge. (Since difficult/high loss != badly learned as mentioned before). Overall there are lots of holes in the intuitive justification of the algorithm.\n\n2. Somewhat incremental method. \n3.1 It’s a combination of AMTL (Lee 2016) and vanilla auto encoder. \n\n3. Methodology issues: \n3.1 Most of the explanation (Sec 3-3.1) is given re: Matrix B in Eq.(4) (AMTL method’s objective function). However the final proposed model uses matrix A in Eq.(6) for the same purpose of measuring the amount of outgoing transfers from task $t$ to all other tasks. However in the reconstruction loss, they work in very different ways: matrix B is for the reconstruction of model parameters, while matrix A is for the reconstruction of latent features. This is a big change of paradigm without adequate explanation. Why is it still a valid approach?\n3.2 Matrix B in the original paper of AMTL (Eq.(1) of Lee et al., 2016) has a constraint $B \\geq 0$, should matrix A have the same constraint? If not, why?\n3.3 Question Re: the |W-WB| type assumption for task relatedness. 
A bad task could learn an all-zero vector of outgoing related ness $b^0_t$ so it doesn’t directly influence other tasks in feed-forward sense. But hat about during training? Does training one task’s weights endup influencing other tasks’s weights via backprop? If a bad task is defined in terms of incoming relatedness from good tasks, then tuning the bad task with backprop will eventually also update the good tasks? (presumably detrimentally).\n\n4. Experimental Results not very strong.\n4.1 Tab 1: Neural Network NN and MT-NN beat the conventional shallow MTL approaches decisively for AWA and MNIST. The difference between MT-NN and AMTFL is not significant. The performance boost is more likely due to using NNs rather than the proposed MTL module. For School, there is not significant difference between the methods. For ImageNet-Room AMTL and AMTFL have overlapping errors. Also, a variant of AMTL (AMTL-imbalance) was reported in Lee’2016, but not here where the number is $40\\pm1.71$. \n4.2 Tab 2: The “real” experiments are missing state of the art competitors. Besides a deep GO-MTL alternative, which should be a minimum, there are lots of deep MTL state of the art: Misra CVPR’16 , Yang ICLR’17, Long arXiv/NIPS’17 Multilinear Relationship Nets, Ruder arXiv’17 Sluice Nets, etc.\n\n5. Analysis\n5.1 The proposed method revolves around the notion of “noisy”/“unrelated”/“difficult” tasks. Although the paper conflates them, it may still be a useful algorithm in practice. But it in this case it should devise much better analysis to provide insight and convince us that this is not a fatal oversimplification: What is the discovered relatedness matrix in some benchmarks? Does the discovered relatedness reflect expert knowledge where this is available? Is there a statistically significant correlation between relatedness and task difficulty in practice? Or between relatedness and degree of benefit from transfer, etc? But this is hard to do cleanly as even if the results show a correlation between difficulty and relatedness, it may just be because that’s how relatedness is defined in the proposed algorithm.\n",
"This paper addresses multi-task feature learning, i.e. learning representations that are common across multiple related supervised learning tasks. The paper is not clearly written, so I outline my interpretation on what is the main idea of the manuscript. \n\nThe authors rely on two prior works in multi-task learning that explore parameter sharing (Lee et al, 2016) and subspace learning (Kumar & Daume III 2012) for multi-task learning. \n1) The work of Lee et al 2016 is based on the idea of transferring information through weight vectors, where each task parameter can be represented as a sparse combination of other related task parameters. The interpretation is that negative transfer is avoided because only subset of relevant tasks is considered for transfer. The drawback is the scalability of this approach. \n2) The second prior work is Kumar & Daume III 2012 (and also an early work of Argyrio et al 2008) that is based on learning a common feature representation. Specifically, the main assumption is that tasks parameters lie in a low-dimensional subspace, and parameters of related tasks can be represented as linear combinations of a small number of common/shared latent basis vectors in such subspace. Subspace learning could help to scale up to many tasks.\n\nThe authors try to combine together the ideas/principles in these previous works and propose a sparse auto encoder model for multi-task feature learning with (6) (and (7)) as the main learning objectives for training an autoencoder. \n\n- I couldn’t fully understand the objective in (6) and how exactly it is related to the previous works, i.e. how the relatedness and easyness/hardness of tasks is measured; where does f enter in the autoencoder network structure?\n- The empirical evaluations are not convincing. In the real experiments with image data, only decaf features were used as input to the autoencoder model. Why not using raw input image? Moreover all input features where projected to a lower dimensional space using PCA before inputing to the autoencoder. Why? In fact, linear PCA can be viewed as an autoencoder model with linear encoder and decoder (so that the squared error reconstruction loss between a given sample and the sample reconstructed by the autoencoder is minimal (Bishop, 2006)). Then doing PCA before training an autoencoder is not motivated. \n\n-Writing can be improved. The introduction primarily criticizes the approach of Lee et al, 2016 called Assymetric Multi-task Learning. It would be nicer if the introduction sets the background and covers different approaches/aspects/conditions of negative transfer in transfer learning/multi-task learning setting. The main learning objective (6) should be better explained. \n\n-Conceptual picture is a bit lacking. Striped hyena is used as an example of unreliable noisy data (source of negative transfer) when learning the attribute classifier \"stripes\". One might argue that visually, striped hyena is as informative as white tigers. Perhaps one could use a different (less striped) animal, e.g. raccoon. \n"
] | [
6,
3,
5
] | [
4,
4,
4
] | [
"iclr_2018_SyVVXngRW",
"iclr_2018_SyVVXngRW",
"iclr_2018_SyVVXngRW"
] |
iclr_2018_Hy_o3x-0b | Feature Map Variational Auto-Encoders | There have been multiple attempts with variational auto-encoders (VAE) to learn powerful global representations of complex data using a combination of latent stochastic variables and an autoregressive model over the dimensions of the data. However, for the most challenging natural image tasks the purely autoregressive model with stochastic variables still outperform the combined stochastic autoregressive models. In this paper, we present simple additions to the VAE framework that generalize to natural images by embedding spatial information in the stochastic layers. We significantly improve the state-of-the-art results on MNIST, OMNIGLOT, CIFAR10 and ImageNet when the feature map parameterization of the stochastic variables are combined with the autoregressive PixelCNN approach. Interestingly, we also observe close to state-of-the-art results without the autoregressive part. This opens the possibility for high quality image generation with only one forward-pass. | rejected-papers | The paper proposes a VAE variant by embedding spatial information with multiple layers of latent variables. Although the paper reports state-of-the-art results on multiple datasets, some results may be due to a bug. This has been discussed, and the author acknowledges the bug. We hope the problem can be fixed, and the paper reconsidered at another venue. | train | [
"HkYC48PxG",
"SyjePb9gz",
"Hkk3C-5lM",
"ByZxuMamf",
"ryUdttWzG",
"BJZQRBZMf",
"BkUk6H-zz",
"SyaUOmUZf",
"rJNRG94lM",
"HkWqOPVgf",
"SkjKJ5Nlf",
"HyIvKY4ez",
"Hy0Pcd4gf",
"HkHWP_Vef",
"Syq-KRzlz",
"HksGNSfgG",
"H19apQuyM",
"ryo_Mw0Cb",
"SJbqRnO0b",
"S183EOuC-",
"S1RfdFwRb",
"HJc6qSwCZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"public",
"public",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"public",
"official_reviewer",
"author",
"public",
"public",
"public",
"public",
"public"
] | [
"The paper combines several recent advances on generative modelling including a ladder variational posterior and a PixelCNN decoder together with the proposed convolutional stochastic layers to boost the NLL results of the current VAEs. The numbers in the tables are good but I have several comments on the motivation, originality and experiments.\n\nMost parts of the paper provide a detailed review of the literature. However, the resulting model is quite like a combination of the existing advances and the main contribution of the paper, i.e. the convolution stochastic layer, is not well discussed. Why should we introduce the convolution stochastic layers? Could the layers encode the spatial information better than a deterministic convolutional layer with the same architecture? What's the exact challenge of training VAEs addressed by the convolution stochastic layer? Please strengthen the motivation and originality of the paper.\n\nThough the results are good, I still wonder what is the exact contribution of the convolutional stochastic layers to the NLL results? Can the authors provide some results without the ladder variational posterior and the PixelCNN decoder on both the gray-scaled and the natural images?\n\nAccording to the experimental setting in the Section 3 (Page 5 Paragraph 2), \"In case of gray-scaled images the stochastic latent layers are dense with sizes 64, 32, 16, 8, 4 (equivalent to Sønderby et al. (2016)) and for the natural images they are spatial (cf. Table 1). There was no significant difference when using feature maps (as compared to dense layers) for modelling gray-scaled images.\" there is no stochastic convolutional layer. Then is there anything new in FAME on the gray images? Furthermore, how could FAME advance the previous state-of-the-art? It seems because of other factors instead of the stochastic convolutional layer. \n\nThe results on the natural images are not complete. Please present the generation results on the ImageNet dataset and the reconstruction results on both the CIFAR10 and ImageNet datasets. The quality of the samples on the CIFAR10 dataset seems not competitive to the baseline papers listed in the table. Though the visual quality does not necessarily agree with the NLL results but such large gap is still strange. Besides, why FAME can obtain both good NLL and generation results on the MNIST and OMNIGLOT datasets when there is no stochastic convolutional layer? Meanwhile, why FAME cannot obtain good generation results on the CIFAR10 dataset? Is it because there is a lot randomness in the stochastic convolutional layer? It is better to provide further analysis and it is not safe to say that the stochastic convolutional layer helps learn better latent representations based on only the NNL results.\n\nMinor things:\n\nPlease rewrite the sentence \"When performing reconstructions during training ... while also using the stochastic latent variables z = z 1 , ..., z L.\" in the caption of Figure 1.",
"The description of the proposed method is very unclear. From the paper it is very difficult to make out exactly what architecture is proposed. I understand that the prior on the z_i in each layer is a pixel-cnn, but what is the posterior? Equations 8 and 9 would suggest it is of the same form (pixel-cnn) but this would be much too slow to sample during training. I'm guessing it is just a factorized Gaussian, with a separate factorized Gaussian pseudo-prior? That is, in figure 1 all solid lines are factorized Gaussians and all dashed lines are pixel-cnns?\n\n* The word \"layers\" is sometimes used to refer to latent variables z, and sometimes to parameterized neural network layers in the encoder and decoder. E.g. \"The top stochastic layer z_L in FAME is a fully-connected dense layer\". No, z_L is a vector of latent variables. Are you saying the encoder produces it using a fully-connected layer?\n* Section 2.2 starts talking about \"deterministic layers h\". Are these part of the encoder or decoder? What is meant by \"number of layers connecting the stochastic latent variables\"?\n* Section 2.3: What is meant by \"reconstruction data\"?\n\nIf my understanding of the method is correct, the novelty is limited. Autoregressive priors were used previously in e.g. the Lossy VAE by Chen et al. and IAF-VAE by Kingma et al. The reported likelihood results are very impressive though, and would be reason for acceptance if correct. However, the quality of the sampled images shown for CIFAR-10 doesn't match the reported likelihood. There are multiple possible reasons for this, but after skimming the code I believe it might be due to a faulty implementation of the variational lower bound. Instead of calculating all quantities in the log domain, the code takes explicit logs and exponents and stabilizes them by adding small quantities \"eps\": this is not guaranteed to give the right result. Please fix this and re-run your experiments. (I.e. in _loss.py don't use x/(exp(y)+eps) but instead use x*exp(-y). Don't use log(var+eps) with var=softplus(x), but instead use var=softplus(x)+eps or parameterize the variance directly in the log domain).",
"Update: In light of Yoon Kim's retraction of replication, I've downgraded my score until the authors provide further validation (i.e. CIFAR and ImageNet samples).\n\nSummary\n\nThis paper proposes VAE modifications that allow for the use multiple layers of latent variables. The modifications are: (1) a shared en/decoder parametrization as used in the Ladder VAE [1], (2) the latent variable parameters are functions of a CNN, and (3) use of a PixelCNN decoder [2] that is fed both the last layer of stochastic variables and the input image, as done in [3]. Negative log likelihood (NLL) results on CIFAR 10, binarized MNIST (dynamic and static), OMNIGLOT, and ImageNet (32x32) are reported. Samples are shown for CIFAR 10, MNIST, and OMNIGLOT. \n\n\nEvaluation\n\nPros: The paper’s primary contribution is experimental: SOTA results are achieved for nearly every benchmark image dataset (the exception being statically binarized MNIST, which is only .28 nats off). This experimental feat is quite impressive, and moreover, in the comments on OpenReview, Yoon Kim claims to have replicated the CIFAR result. I commend the authors for making their code available already via DropBox. Lastly, I like how the authors isolated the effect of the concatenation via the ‘FAME No Concatenation’ results. \n\nCons: The paper provides little novelty in terms of model or algorithmic design, as using a CNN to parametrize the latent variables is the only model detail unique to this paper. In terms of experiments, the CIFAR samples look a bit blurry for the reported NLL (as others have mentioned in the OpenReview comments). I find the authors’ claim that FAME is performing superior global modeling interesting. Is there a way to support this experimentally? Also, I would have liked to see results w/o the CNN parametrization; how important was this choice? \n\n\nConclusion\n\nWhile the paper's conceptual novelty is low, the engineering and experimental work required (to combine the three ideas discussed in the summary and evaluate the model on every benchmark image dataset) is commendable. I recommend the paper’s acceptance for this reason.\n\n\n[1] C. Sonderby et al., “Ladder Variational Autoencoders.” NIPS 2016.\n[2] A. van den Oord et al., “Conditional Image Generation with PixelCNN Decoders.” ArXiv 2016.\n[3] I. Gulrajani et al., “PixelVAE: A Latent Variable Model for Natural Images.” ICLR 2017.\n",
"Dear reviewers,\n\nThank you for all of your useful feedback. We have used this rebuttal period to investigate our results and have found that\n\n1) the grayscale MNIST and OMNIGLOT result hold and\n2) we too had a bug in the AR model part for the natural color images.\n \nWe have corrected the bug by now and the samples look much better. However, we won’t be able to update the results in due time, which is why we completely understand you not accepting the paper in its current format. We plan to submit to ICML and apologize for the inconvenience.",
"Sure, it's incredibly silly/embarrassing: I didn't realize that unlike MNIST/OMNIGLOT, CIFAR numbers were in bits and not nats!",
"Thank you for the update. Could you say what the bug was, even if it was silly? This would allow other researchers (including the authors of the paper) to make sure that they don't have this bug in their code as well.",
"In light of the independent replication claim being retracted below, could the authors comment whether they still believe that the results reported in the paper are correct? If so, could you post some CIFAR10 and ImageNet samples obtained after fixing the sampling bug mentioned below?",
"[EDIT]: I just realized I had a really silly bug in my implementation. Please disregard my previous posts regarding successful replication.\n\nSorry for adding noise to the process!",
"ok, perhaps I was too quick there.\n\nCould you try evaluating a trained model with eps=0 in the variational bound instead of eps=1e-8? Since both the prior and posterior are learned, the model might learn to take advantage of your stability measures (this is not the best way of implementing this). If this matters (i.e. if the eps actually does something) the bound would be bad.",
"Dear anonymous and AnonReviewer3,\n\n* Thank your for spotting the sub-quality sample. We have identified a possible error. We had forgotten to sample the softmax from the auto-regresssive part of the model. This of course have a negative influence on the sample quality but does not affect the test log-likelihood calculation. We will provide a follow-up on this a bit later with new samples. \n\n* We provide the complete code here (Python 3 & Tensorflow 1.2): https://www.dropbox.com/s/wjhhxff0b0np6xi/FAME-implementation.zip?dl=0. We will also provide the code in a Github repo later. \n\n* We have trained a PixelCNN with an equivalent architecture as the one used for FAME (no R->G->B dependency) and achieved a NLL at 3.34 bits/dim. So we are confident that our code is working properly.\n\n* Finally we would like to note that we forgot to add a “log” in the equation following Eq. 6 in the paper.\n",
"Yes. In _fame.py lines: 60, 99 and 105, you can see that I call the function using the default arguments mean=0. and var=1.\n\nPlease note that there is a difference between self.mean, self.var and input_mean, input_var.",
"this is line 47: eps = tf.random_normal(tf.shape(input_mean),mean=self.mean, stddev=np.sqrt(self.var), seed=self.seed, name=self.name)\n\nAre you saying self.mean=0 and self.var=1 always?\n\n",
"Dear AnonReviewer3,\n\nFirst we generate a random tensor N(0,I)->'eps' (line 47) that has the same shape as the input then we calculate z (line 51) by applying the reparameterization trick: https://arxiv.org/pdf/1312.6114.pdf.",
"Thanks for sharing the code! I went through some of the files super quickly, and I seem to spot at least 1 bug: In StochasticGaussian() you create a random distribution z ~ N(m,s), and then you produce output z' = m + s*z. Seems like you're transforming the standard normal variable twice? Let me know if I'm wrong.\n",
"Thank you for your detailed response. The MNIST and Omniglot samples you provided look reasonable and appear consistent with the scores you report on the datasets.\n\nI'm still puzzled by the apparent discrepancy between the sample quality and the test log-likelihood estimates on CIFAR10. To me it looks like the top-left pixel in all the samples in Figure 2 is white, which suggests that something is wrong with either training or sampling. You might want to check whether your PixelCNN implementation is correct for RGB data, e.g. conditioning is consistent between training and sampling. What do the samples look like if you remove the latent variables and train just the PixelCNN component?",
"Would it be possible to somehow share the code before the review period ends? Currently I also have a very hard time believing the reported 2.75 bits per dim number on CIFAR-10.",
"Dear Yoon Kim and Anonymous,\n\nThank you for your interest in the paper. We have taken the comments seriously and thoroughly reviewed the code without finding any bugs. We have retrained the models and are confident in the results given in the Tables. We actually also found slight improvements compared to the results reported.\n\nFirst of all we would like to answer the questions formulated on the 1st of November by anonymous:\n\nQ1. The Omniglot samples in Figure 4 don't look binary. Are you showing the probabilities instead of the actual samples? Which version of the Omniglot dataset did you use and how did you preprocess it?\n\nYes, we are showing the probabilities. We will include the stochastically binarized samples in the revised version. Unfortunately it is not possible to include them in this OpenReview format, so please see: https://imgur.com/gallery/NRvxO. From the plots we can see comparable results to VLAE. It is hard to distinguish whether one is better over the other by only evaluating the samples, hence the log-likelihood results in the tables should tell the full story.\n\nQ2. The MNIST samples in Figure 4 do look binary, but their edges are far too smooth for stochastically binarized MNIST. In other words, they don't actually look like the data: just compare them to samples on Figure 1 in the VLAE paper (https://arxiv.org/abs/1611.02731). Did you sample each pixel or did you just use the most probable value?\n\nYou are right, we used the most probable value. We will include the stochastically binarized samples in the revised version. Find them here: https://imgur.com/gallery/NRvxO .\n\nQ3. The CIFAR10 samples in Figure 2 have very little local detail and are not nearly sharp and structured enough to correspond to the 2.75 bits/dim result reported in the paper. In my experience, a model generating samples like this should get 3.4 bits/dim at best. What kind of test NLL estimates do you get with a single sample on CIFAR10 and ImageNet?\n\nWe do agree that these samples do not have as much local detail as the samples in VLAE and PixelCNN++. However, we have a very simple PixelCNN parameterization without the R->G->B dependency and all of the additional contributions as the PixelCNN++/VLAE papers have. In our approach the local structure is worse but the global modeling is better. We interpret the better test likelihood score as a sign that visual appeal is not the ultimate way to judge the model. We expect that including a better autoregressive model in FAME will give us the best of both worlds.\n\nFor the camera-ready version we will train FAME with a more complex autoregressive model and visualize the generated images in the final paper for both ImageNet and CIFAR10. We didn't do this experiment, since we did not have the time before the deadline and we were more interested in answering the question to why the additional VAE parameterization in PixelVAE and VLAE didn't give a better bound.\n\nLast but not least we will publish the code on Github upon publishing the paper.\n\n\n",
"Interestingly, I do find that the samples are quite a bit blurrier than other papers that achieve higher bits/dim (e.g. VLAE/PixelCNN++, etc.). \n\nAuthors: As the above poster suggested, I would be curious to see the ImageNet samples as well.\nAlso, how does the KL look for CIFAR10? What about reconstructions?\n\nIt could also mean that our intuition regarding bits/dim translating to higher quality samples is not necessarily true, e.g. due to teacher-forced training vs sampling-based generation. Or it could simply be a bug on my part... \n\nIncidentally, I was able to get the bits/dim down ~2.7 by playing around with the hyperparameters a bit more. For me these seemed to help:\n\n- learning the prior (log) variances\n- using a higher dimensional latent dimension at each stage (I use 32 at each stage, and my first latent map is at the 8 x 8 resolution)\n",
"No problem! I haven't checked the samples and I use the batchnorm statistics from the training set only.\n",
"Thanks a lot for sharing your replication experience. Do your CIFAR10 samples look substantially different from the ones in the paper? When computing the test NLL estimate, do you use the batchnorm statistics from the training set or the test set? ",
"I too found the CIFAR results remarkable, so I replicated it, and I was able to match the ~2.8 number (with slightly different architecture/hyperparameters than was used in the paper).\n\nReally nice work! ",
"Intrigued by the claim of state-of-the-art results, I read the paper and noticed several things about the results that don't look right.\n\nThe Omniglot samples in Figure 4 don't look binary. Are you showing the probabilities instead of the actual samples? Which version of the Omniglot dataset did you use and how did you preprocess it?\n\nThe MNIST samples in Figure 4 do look binary, but their edges are far too smooth for stochastically binarized MNIST. In other words, they don't actually look like the data: just compare them to samples on Figure 1 in the VLAE paper (https://arxiv.org/abs/1611.02731). Did you sample each pixel or did you just use the most probable value?\n\nThe CIFAR10 samples in Figure 2 have very little local detail and are not nearly sharp and structured enough to correspond to the 2.75 bits/dim result reported in the paper. In my experience, a model generating samples like this should get 3.4 bits/dim at best. What kind of test NLL estimates do you get with a single sample on CIFAR10 and ImageNet?\n\nFinally, given that you seem to have the best 32x32 ImageNet result by a huge margin it seems odd not to include any samples from this model. Why did you omit them? While sample quality is just one aspect of model performance, in my experience a large discrepancy between sample quality and NLL in a VAE-like model usually means that there's a bug in the code."
] | [
5,
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hy_o3x-0b",
"iclr_2018_Hy_o3x-0b",
"iclr_2018_Hy_o3x-0b",
"iclr_2018_Hy_o3x-0b",
"BJZQRBZMf",
"SyaUOmUZf",
"iclr_2018_Hy_o3x-0b",
"SyjePb9gz",
"SkjKJ5Nlf",
"HksGNSfgG",
"HyIvKY4ez",
"Hy0Pcd4gf",
"HkHWP_Vef",
"HkWqOPVgf",
"H19apQuyM",
"H19apQuyM",
"SJbqRnO0b",
"S183EOuC-",
"S183EOuC-",
"S1RfdFwRb",
"HJc6qSwCZ",
"iclr_2018_Hy_o3x-0b"
] |
iclr_2018_HJIhGXWCZ | Prediction Under Uncertainty with Error Encoding Networks | In this work we introduce a new framework for performing temporal predictions in the presence of uncertainty. It is based on a simple idea of disentangling components of the future state which are predictable from those which are inherently unpredictable, and encoding the unpredictable components into a low-dimensional latent variable which is fed into the forward model. Our method uses a simple supervised training objective which is fast and easy to train. We evaluate it in the context of video prediction on multiple datasets and show that it is able to consistently generate diverse predictions without the need for alternating minimization over a latent space or adversarial training. | rejected-papers | The paper proposes a novel predictive model (e.g., from videos), called error encoding networks, by first learning a deterministic prediction model and then learning to minimize the residual error using latent variables. The latent variables given the sample are estimated by sampling from the prior then updating via gradient descent. The proposed method shows improved performance over the baselines. However, the qualitative results are not fully convincing, possibly because of (1) the limitation of the architecture, (2) suboptimal implementation/tuning of baselines (such as GAN and cVAE). | train | [
"Hk1_7QS4z",
"SykHOLdxz",
"B1YP4d5lf",
"Byn8p7V-z",
"BJEMFNXEG",
"H1sUBupmf",
"BkNb8pXyM",
"SJb-YQmyG",
"Sko_8QFAb",
"H1MzsbDRW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public"
] | [
"Thank you for the revised version of the paper and adding new experiments. Even though the core idea is interesting I am not still convinced of the architecture and the experiment results. One reason could be the fact that video generation might have been wrong choice of task. As there has been many recent works in video sequence generation with recurrent structure that with much better quality than those shown in this work (just one recent example e.g. https://arxiv.org/pdf/1705.10915.pdf). The propose model could be applied to recurrent, and then one can evaluate the true effectiveness of the results here.\nAs for the metric, I am not convinced how this could reflect the quality of multi-modality? It could be that all samples are from one mode, and just one of them happens to be of a higher quality than others? \nI increased my score for for better quality of the revisioned paper but I still find the results and arguments weak. ",
"This paper introduce a times-series prediction model that works in two phases. First learns a deterministic mapping from x to y. And then train another net to predict future frames given the input and residual error from the first network. And does sampling for novel inputs by sampling the residual error collected from the training set. \n\nPros:\nThe paper is well written and easy to follow.\nGood cover of relevant work in sec 3.\n\nCons\nThe paper emphasis on the fact the their modeling multi-modal time series distributions, which is almost the case for most of the video sequence data. But unfortunately doesn’t show any results even qualitative like generated samples for other work on next frame video prediction. The shown samples from model looks extremely, low quality and really hard to see the authors interpretations of it.\n\nThere are many baselines missing. One simple one would be what if they only used the f and draw z samples for N(0,1)? VAE is very power latent variable model which also not being compared against. It is not clear what implantation of GAN they are using?.Vanilla GAN is know to be hard to train and there has been many variants recently that overcome some of those difficulties and its mode collapse problem. \n",
"The paper proposes a model for prediction under uncertainty where the separate out deterministic component prediction and uncertain component prediction.\nThey propose to have a predictor for deterministic information generation using a standard transformer trained via MSE.\nFor the non-deterministic information, they have a residual predictor that uses a low-dimensional latent space. This low-dim latent space is first predicted from the residual of the (deterministic prediction - groundtruth), and then the low-dim encoding goes into a network that predicts a corrected image.\nThe subtleness of this work over most other video prediction work is that it isn't conditioned on a labeled latent space (like text to video prediction, for example). Hence inferring a structured latent space is a challenge.\nThe training procedure follows an alternative minimization in EM style.\n\nThe biggest weakness of the paper (and the reason for my final decision) is that the paper completely goes easy on baseline models. It's only baseline is a GAN model that isn't even very convincing (GANs are finicky to train, so is this a badly tuned GAN model? or did you spend a lot of time tuning it?).\n\nBecause of the plethora of VAE models used in video prediction [1] (albeit, used with pre-structured latent spaces), there has to be atleast one VAE baseline. Just because such a baseline wasn't previously proposed in literature (in the narrow scope of this problem) doesn't mean it's not an obvious baseline to try. In fact, a VAE would be nicely suited when proposing to work with low-dimensional latent spaces.\n\nThe main signal I lack from reading the paper is whether the proposed model actually does better than a reasonable baseline.\nIf the baselines are stronger and this point is more convincing, I am happy to raise my rating of the paper.\n\n[1] http://openaccess.thecvf.com/content_ICCV_2017/papers/Marwah_Attentive_Semantic_Video_ICCV_2017_paper.pdf",
"Summary: \n\nI like the general idea of learning \"output stochastic\" noise models in the paper, but the idea is not fully explored (in terms of reasonable variations and their comparative performance). I don't fully understand the rationale for the experiments: I cannot speak to the reasons for the GAN's failure (GANs are not easy to train and this seems to be reflected in the results); the newly proposed model seems to improve with samples simply because the evaluation seems to reward the best sample. I.e., with enough throws, I can always hit the bullseye with a dart even when blindfolded.\n\nComments:\n\nThe model proposes to learn a conditional stochastic deep model by training an output noise model on the input x_i and the residual y_i - g(x_i). The trained residual function can be used to predict a residual z_i for x_i. Then for out-of-sample prediction for x*, the paper appears to propose sampling a z uniformly from the training data {z_i}_i (it is not clear from the description on page 3 that this uniformly sampled z* = z_i depends on the actual x* -- as far as I can tell it does not). The paper does suggest learning a p(z|x) but does not provide implementation details nor experiment with this approach.\n\nI like the idea of learning an \"output stochastic\" model -- it is much simpler to train than an \"input stochastic\" model that is more standard in the literature (VAE, GAN) and there are many cases where I think it could be quite reasonable. However, I don't think the authors explore the idea well enough -- they simply appear to propose a non-parametric way of learning the stochastic model (sampling from the training data z_i's) and do not compare to reasonable alternative approaches. To start, why not plot the empirical histogram of p(z|x) (for some fixed x's) to get a sense of how well-behaved it is as a distribution. Second, why not simply propose learning exponential family models where the parameters of these models are (deep nets) conditioned on the input? One could even start with a simple Gaussian and linear parameterization of the mean and variance in terms of x. If the contribution of the paper is the \"output stochastic\" noise model, I think it is worth experimenting with the design options one has with such a model.\n\nThe experiments range over 4 video datasets. PSNR is evaluated on predicted frames -- PSNR does not appear to be explicitly defined but I am taking it to be the metric defined in the 2nd paragraph from the bottom on page 7. The new model \"EEN\" is compared to a deterministic model and conditional GAN. The GAN never seems to perform well -- the authors claim mode collapse, but I wonder if the GAN was simply hard to train in the first place and this is the key reason? Unsurprisingly (since the EEN noise does not seem to be conditioned on the input), the baseline deterministic model performs quite well. If I understand what is being evaluated correctly (i.e., best random guess) then I am not surprised the EEN can perform better with enough random samples. Have we learned anything?\n",
"Overall I am still somewhat confused by the evaluation metric and whether it is testing variance or actually testing mode coverage as the authors claim. However, this could be due to my misunderstanding and not the fault of the authors.\n\nThe paper has improved from the initial submission; I will raise my score to reflect this. I will also lower my review confidence since I do not feel I have the expertise to judge all of the (new) experimental details of the paper; I defer to other reviewers for their opinion on how much the new experiments bring the paper closer to acceptance.",
"We would like to thank the reviewers for the thoughtful reviews and suggestions, they will help make the paper stronger. We have made several changes to the paper which we hope address the reviewers’ concerns. \n\n*We have added three baselines: \n-The model proposed by reviewer 2 where we keep f and sample z in N(0, 1), which we call CNN + noise in the paper. \n-A conditional VAE model, where the z distribution is a function of phi(x, y). \n-A conditional autoencoder model, which is like a VAE but the z’s are deterministic and rather than sampling from N(0, 1) at test time, we use the same non-parametric sampling approach as for the EEN. \nWe found that the CNN+noise and VAE models experienced mode collapse, which has been observed before in the literature when doing conditional generation when the conditioning variable x contains a lot of signal: the model can achieve large improvements in the prediction loss by learning a deterministic function of x, letting the z distribution go to N(0, 1) to lower the KL term in the loss, and having the rest of the network ignore z. The conditional autoencoder and EEN do not experience this since they do not have a KL term and instead rely on the non-parametric sampling procedure which does not place any assumptions on the latent distribution. We find that the EEN produces generations which are either similar or better (in terms of our performance metric and visual inspection) than the autoencoder.\n \n*We replaced the L2 loss by the L1 loss on the Poke dataset, which we found to improve the generation quality. We also used L1/L2 in the quantitative evaluation, rather than PSNR, so we could evaluate the models on the Poke dataset using the same loss they were trained on.\n\n*Concerning the evaluation metric, we would like to clarify what it does and does not measure (we have updated the description in the paper). It measures whether there exists a sample in the set of generated samples which is close to the test sample in L1/L2 norm. Assuming the conditional distribution P(y|x) is multimodal, the metric does reflect whether the model can generate outputs which cover several modes. However, it does not say whether all the generated samples are of good quality. For example, if model 1 generates several good samples and then a slightly less good one, and model 2 generates several good samples and then a terrible one, this would not be reflected strongly in this metric. Another way of seeing it is the following: say P(y|x) has modes M1, M2 and the model generates samples in M1, M2 and a third mode M3. This would still get a fairly good score by our metric (although the curves would likely improve a bit more slowly with more samples). However if the model only generates samples in M1, it would get a bad score. We would also like to note that although the evaluation does reward the best guess, all models are given the same number of guesses, therefore we believe it is a fair way to compare models. Models which experience mode collapse will always make the same guess, which leads to poor performance - even being given a huge number of guesses will not improve performance. \n\n*In the previous version of the paper, the results for Flappy Bird and TORCS were for 1 predicted frame, rather than 4 predicted frames. We only include results for 4 predicted frames in the updated paper, as the effects of multi-modality are more pronounced when we look further into the future. 
\n\n*We found that by increasing the size of the deterministic model we were able to get much better performance on Seaquest (both in terms of loss and generated images). This indicates that there is actually less uncertainty to model in this dataset than previously thought (for example, the agent may be following a policy which is nearly deterministic) , so we removed it to save space.\n\n* We changed the formatting for the Flappy Bird Generations, which we hope makes them more readable. ",
"Thank you for the comment. Although there are indeed many works on video prediction, these are generally deterministic and we are not aware of other work on video data that performs multi-modal prediction other than (Goroshin et. al, 2015) and (Vondrick et. al 2015) mentioned in related work, who perform alternating minimization over latent variables. The focus of these works is different from ours in that Vondrick et. al perform prediction of high-level representations using a pre-trained network (rather than pixels) with the goal of predicting future actions or objects appearing in the video, and Goroshin et. al focus primarily on learning linearized representations and only apply the latent variable version of their model to very simple settings. We tried alternating minimization in early experiments on simple tasks, and found that it performed similarly or worse than our method (in terms of loss) while being considerably slower due to the inner optimization loop and also introduced new hyperparameters to tune such as the learning rate and number of iterations in the inner loop. Comparing to GANs seemed appropriate, since they are a widely used method which can in principle perform multi-modal generations (although as noted in the paper, they can suffer from mode collapse especially in the conditional setting). \n\nTo our knowledge, PSNR (along with SSIM) is one of the more common metrics for evaluating video generations, and is used in several works, for example:\n\nhttps://arxiv.org/pdf/1511.05440.pdf\nhttps://arxiv.org/pdf/1605.08104.pdf\nhttps://arxiv.org/pdf/1605.07157.pdf\nhttps://arxiv.org/pdf/1706.08033.pdf\n\nWe also computed SSIM, and found that the EEN performance also increases with the number of samples, although the difference is less pronounced than with PSNR or MSE. Note that the SSIM contains terms which compare statistics taken over windows of the image, meaning that small changes in object location between two images (for example, the paddle moving in Breakout) may not be reflected as much with this metric. However, we can include this metric along with MSE or others if the reviewers think it is appropriate.\n\n\n",
"While the authors provide some demonstration on Youtube, I'm unsure that the approach really improve the performance, \nas there is speculation about evaluation setting.\nAlthough there are lot of works for this area, the authors compares the method with GAN only.\nPNSR, the criterion author showed, is not common.\n\n(This is minor comment)\nYou need not to show all the URLs for each paper in the reference section.",
"Thank you for the question. We actually do want *some* information from the target to seep through the latent variable z, i.e. we would like z to encode information about y which is not predictable from x. For example, if x and y are consecutive images and a new object which was not present in x appears in y from outside the frame (and cannot be predicted from x), we would like this information to be encoded in the latent variable z. \n\nHowever, it is true that we do not want z to encode information about y which could be predicted from x. The fact that z is of much lower dimension than y forces the network to compress the inherently unpredictable part of y in such a way that z must be combined with x to reconstruct y. This low dimensionality of z prevents the network from learning \\phi^{-1}(z) = g(x) - y. In most of our experiments, y is a set of 4 images of dimensions ranging in size from 84x84 to 240x240 (i.e. high-dimensional) whereas z has between 2 and 32 dimensions. In our video generations we condition on a set of frames from the test set, but use z vectors that are extracted from the disjoint training set. If a z vector encoded a lot of information about the specific target used to compute it (rather than general features such as \"the paddle moves left\" or \"a new pipe appears at this height\"), then there would be a mismatch between the conditioning frames and the generated frames (for example, different backgrounds), which does not appear to be the case. \n\nWe will clarify this in the updated paper.",
"How can you be sure that no information of the target seeps through the latent variable $z$? After all, f only needs to learn to undo $\\phi$ and that $\\phi^{-1}(z) = x - y \\Leftrightarrow y = x - \\phi^{-1}(z)$."
] | [
-1,
4,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1sUBupmf",
"iclr_2018_HJIhGXWCZ",
"iclr_2018_HJIhGXWCZ",
"iclr_2018_HJIhGXWCZ",
"H1sUBupmf",
"iclr_2018_HJIhGXWCZ",
"SJb-YQmyG",
"iclr_2018_HJIhGXWCZ",
"H1MzsbDRW",
"iclr_2018_HJIhGXWCZ"
] |
iclr_2018_rJa90ceAb | Learning to Generate Filters for Convolutional Neural Networks | Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters. However, the variations in images pose a challenge to this fashion. In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass. Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs. In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder. As filters are usually high dimensional, we propose to learn a set of coefficients instead of a set of filters. These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN. The proposed method is evaluated on MNIST, MTFL and CIFAR10 datasets. Experiment results demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method. | rejected-papers | The paper proposes a method for learning convolutional networks with dynamic input-conditioned filters. There are several prior work along this idea, but there is no comparison agaist them. Overall, experimental results are not convincing enough. | test | [
"HygXOMDxf",
"Syy4M8qxf",
"BJFxOpcez",
"SJzHA_zyf",
"rknWFdWyf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The authors propose an approach to dynamically generating filters in a CNN based on the input image. The filters are generated as linear combinations of a basis set of filters, based on features extracted by an auto-encoder. The authors test the approach on recognition tasks on three datasets: MNIST, MTFL (facial landmarks) and CIFAR10, and show a small improvement over baselines without dynamic filters.\n\nPros:\n1) I have not seen this exact approach proposed before.\n2) There method is evaluated on three datasets and two tasks: classification and facial landmark detection.\n\nCons:\n1) The authors are not the first to propose dynamically generating filters, and they clearly mention that the work of De Brabandere et al. is closely related. Yet, there is no comparison to other methods for dynamic weight generation. \n2) Related to that, there is no ablation study, so it is unclear if the authors’ contributions are useful. I appreciate the analysis in Tables 1 and 2, but this is not sufficient. Why the need for the autoencoder - why can’t the whole network be trained end-to-end on the goal task? Why generate filters as linear combination - is this just for computational reasons, or also accuracy? This should be analyzed empirically.\n3) The experiments are somewhat substandard:\n- On MNIST the authors use a tiny poorly-performance network, and it is no surprise that one can beat it with a bigger dynamic filter network.\n- The MTFL experiments look most convincing (although this might be because I am not familiar with SoTA on the dataset), but still there is no control for the number of parameters, and the performance improvements are not huge\n- On CIFAR10 - there is a marginal improvement in performance, which, as the authors admit, can also be reached by using a deeper model. The baseline models are far from SoTA - the authors should look at more modern architecture such as AllCNN (not particularly new or good, but very simple), ResNet, wide ResNet, DenseNet, etc.\n\nAs a comment, I don’t think classification is a good task for showcasing such an architecture - classification is already working extremely well. Many other tasks - for instance, detection, tracking, few-shot learning - seem much more promising.\n\nTo conclude, the authors propose a new approach to learning convolutional networks with dynamic input-conditioned filters. Unfortunately, the authors fail to demonstrate the value of the proposed method. I therefore recommend rejection.",
"This paper proposes a two-pathway neural network architecture. One pathway is an autoencoder that extracts image features from different layers. The other pathway consists of convolutional layers to solve a supervised task. The kernels of these convolutional layers are generated dynamically based on the autoencoder features of the corresponding layers. Directly mapping the autoencoder features to the convolutional kernels requires a very large matrix multiplication. As a workaround, the proposed method learns a dictionary of base kernels and maps the features to the coefficients on the dictionary. \n\nThe proposed method is an interesting way of combining an unsupervised learning objective and a supervised one. \n\nWhile the idea is interesting, the experiments are a bit weak. \nFor MNIST (Table 1), only epoch 1 and epoch 20 results are reported. However, the results of a converged model (train for more epochs) are more meaningful. \nFor Cifar-10 (Figure 4b), the final accuracy is less than 90%, which is several percentages lower than the state-of-the-art method.\nFor MTFL, I am not sure how significant the final results are. It seems a more commonly used recent protocol is to train on MTFL and test on AFLW. \nIn general, the experiments are under controlled settings and are encouraging. However, promising results for comparing with the state-of-the-art methods are necessary for showing the practical importance of the proposed method. \n\nA minor point: it is a bit unnatural to call the proposed method “baseline” ... \n\nIf the model is trained in an end-to-end manner. It will be helpful to perform ablative studies on how critical the reconstruction loss is (Note that the two pathway can be possibly trained using a single supervised objective function). \n\nIt will be interesting to see if the proposed model is useful for semi-supervised learning. \n\nA paper that may be related regarding dynamic filters:\nImage Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction\n\nSome paper that may be related regarding combine supervised and unsupervised learning:\nStacked What-Where Auto-encoders\nSemi-Supervised Learning with Ladder Networks\nAugmenting Supervised Neural Networks with Unsupervised Objectives for Large-Scale Image Classification\n\n",
"This paper explores learning dynamic filters for CNNs. The filters are generated by using the features of an autoencoder on the input image, and linearly combining a set of base filters for each layer. This addresses an interesting problem which has been looked at a lot before, but with some small new parts. There is a lot of prior work in this area that should be cited in the area of dynamic filters and steerable filters. There are also parallels to ladder networks that should be highlighted. \n\nThe results indicate improvement over baselines, however baselines are not strong baselines. \nA key question is what happens when this method is combined with VGG11 which the authors train as a baseline? \nWhat is the effect of the reconstruction loss? Can it be removed? There should be some ablation study here.\nFigure 5 is unclear what is being displayed, there are no labels.\n\nOverall I would advise the authors to address these questions and suggest this as a paper suitable for a workshop submission.\n",
"Thank you for your comment. I just read your paper. It is very interesting. I will cite your work in the next version.",
"Nice work, you might be interested in our recent paper on Dynamic Filter Networks with alternative bases: https://arxiv.org/abs/1706.00598"
] | [
4,
5,
4,
-1,
-1
] | [
4,
4,
5,
-1,
-1
] | [
"iclr_2018_rJa90ceAb",
"iclr_2018_rJa90ceAb",
"iclr_2018_rJa90ceAb",
"rknWFdWyf",
"iclr_2018_rJa90ceAb"
] |
iclr_2018_HkCvZXbC- | 3C-GAN: AN CONDITION-CONTEXT-COMPOSITE GENERATIVE ADVERSARIAL NETWORKS FOR GENERATING IMAGES SEPARATELY | We present 3C-GAN: a novel multiple generators structures, that contains one conditional generator that generates a semantic part of an image conditional on its input label, and one context generator generates the rest of an image. Compared to original GAN model, this model has multiple generators and gives control over what its generators should generate. Unlike previous multi-generator models use a subsequent generation process, that one layer is generated given the previous layer, our model uses a process of generating different part of the images together. This way the model contains fewer parameters and the generation speed is faster. Specifically, the model leverages the label information to separate the object from the image correctly. Since the model conditional on the label information does not restrict to generate other parts of an image, we proposed a cost function that encourages the model to generate only the succinct part of an image in terms of label discrimination. We also found an exclusive prior on the mask of the model help separate the object. The experiments on MNIST, SVHN, and CelebA datasets show 3C-GAN can generate different objects with different generators simultaneously, according to the labels given to each generator. | rejected-papers | The paper presents a layered image generation model (e.g., foreground vs background) using GANs. The high-level idea is interesting, but novelty is somewhat limited. For example, layered generation with VAE/GAN has been explored in Yan et al. 2016 (VAEs) and Vondrick et al. 2016 (GANs). In addition, there are earlier works for unsupervised learning of foreground/background generative models (e.g., Le Roux et al., Sohn et al.). Another critical problem is that only qualitative results on relatively simple datasets (e.g., MNIST, SVHN, CelebA) are provided as experimental results. More quantitative evaluations and additional experiments on more challenging datasets will strengthen the paper.
* N. Le Roux, N. Heess, J. Shotton, J. Winn; Learning a generative model of images by factoring appearance and shape; Neural Computation 23(3): 593-650, 2011.
** Sohn, K., Zhou, G., Lee, C., & Lee, H. Learning and selecting features jointly with point-wise gated Boltzmann machines. ICML 2013.
| train | [
"By9jZQukf",
"ByfJW0vlf",
"SygpIWqlM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper studied the conditional image generation with two-stream generative adversarial networks. More specifically, this paper proposed an unsupervised learning approach to generate (1) foreground region conditioned on class label and (2) background region without semantic meaning in the label. During training, two generators are competing against each other to hallucinate foreground region and background region with a physical gating operation. An auxiliary “label difference cost” was further introduced to encourage class information captured by the foreground generator. Experiments on MNIST, SVHN, and CelebA datasets demonstrated promising generation results with the unsupervised two-stream generation pipeline.\n\n== Novelty/Significance ==\nControllable image generation is an important task in representation learning and computer vision. I also like the unsupervised learning through gating function and label difference cost. However, considering many other related work mentioned by the paper, the novelty in this paper is quite limited. For example, layered generation (Section 2.2.1) has been explored in Yan et al 2016 (VAEs) and Vondrick et al 2016 (GANs).\n\n== Detailed comments ==\nThe proposed two-stream model is developed with the following two assumptions: (1) Single object in the scene; and (2) Class information is provided for the foreground/object region. Although the proposed method learns to distinguish foreground and background in an unsupervised fashion, it is limited in terms of applicability and generalizability. For example, I am not convinced if the two-stream generation pipeline can work well on more challenging datasets such as MS-COCO, LSUN, and ImageNet. \n\nGiven the proposed method is controllable image generation, I would assume to see the following ablation studies: keeping two latent variables from (z_u, z_l, z_v) fixed, while gradually changing the value of the other latent variable. However, I didn’t see such detailed analysis as in the other papers on controllable image generation.\n\nIn Figure 7 and Figure 10, the boundary between foreground and background region is not very sharp. It looks like equation (5) and (6) are insufficient for foreground and background separation (triplet/margin loss could work better). Also, in CelebA experiment, it is not a well defined experimental setting since only binary label (smiling/non-smiling) is conditioned. Is it possible to use all the binary attributes in the dataset.\n\nAlso, please either provide more qualitative examples or provide some type of quantitative evaluations (through user study , dataset statistics, or down-stream recognition tasks).\n\nOverall, I believe the paper is interesting but not ready for publication. I encourage authors to investigate (1) more generic layered generation process and (2) better unsupervised boundary separation. Hopefully, the suggested studies will improve the quality of the paper in the future submission.\n\n== Presentation ==\nThe paper is readable but not well polished. \n\n-- In Figure 1, the “G1” on the right should be “G2”;\n-- Section 2.2.1, “X_f” should be “x_f”;\n-- the motivation of having “z_v” should be introduced earlier;\n-- Section 2.2.4, please use either “alpha” or “\\alpha” but not both;\n-- Section 3.3, the dataset information is incorrect: “20599 images” should be “202599 images”;\n\nMissing reference:\n-- Neural Face Editing with Intrinsic Image Disentangling, Shu et al. In CVPR 2017.\n-- Domain Separation Networks, Bousmalis et al. 
In NIPS 2016.\n-- Unsupervised Image-to-Image Translation Networks, Liu et al. In NIPS 2017.\n",
"[Overview]\n\nThis paper proposed a new generative adversarial network, called 3C-GAN for generating images in a composite manner. In 3C-GAN, the authors exploited two generators, one (G1) is for generating context images, and the other one (G2) is for generating semantic contents. To generate the semantic contents, the authors introduced a conditional GAN scheme, to force the generated images to match the annotations. After generating both parts in parallel, they are combined using alpha blending to compose the final image. This generated image is then sent to the discriminator. The experiments were conducted on three datasets, MNIST, SVHN and MS-CelebA. The authors showed qualitative results on all three datasets, demonstrating that AC-GAN could disentangle the context part from the semantic part in an image, and generate them separately.\n\n[Strenghts]\n\nThis paper introduced a layered-wise image generation, which decomposed the image into two separate parts: context part, and semantic part. Corresponding to these two parts are two generators. To ensure this, the authors introduced three strategies:\n\n1. Adding semantic labels: the authors used image semantic labels as the input and then exploited a conditional GAN to enforce one of the generators to generate semantic parts of images. As usual, the label information was added as the input of generator and discriminator as well.\n\n2. Adding label difference cost: the intuition behind this loss is that changing the label condition should merely affect the output of G2. Based on this, outputs of Gc should not change much when flipping the input labels.\n\n3. Adding exclusive prior: the prior is that the masks of context part (m1) and semantic part (m2) should be exclusive to each other. Therefore, the authors added another loss to reduce the sum of component-wise multiplication between m1 and m2.\n\nDecomposing the semantic part from the context part in an image based on a generative model is an interesting problem. However, to my opinion, completing it without any supervision is challenging and meaningless. In this paper, the authors proposed a conditional way to generate images compositionally. It is an interesting extension of previous works, such as Kwak & Zhang (2016) and Yang (2017).\n\n[Weaknesses]\n\nThis paper proposed an interesting and intuitive image generation model. However, there are several weaknesses existed:\n\n1. There is no quantitative evaluation and comparisons. From the limited qualitative results shown in Fig.2-10, we can hardly get a comprehensive sense about the model performance. The authors should present some quantitative evaluations in the paper, which are more persuasive than a number of examples. To do that, I suggest the authors exploited evaluation metrics, such as Inception Score to evaluate the overall generation performance. Also, in Yang (2017) the authors proposed adversarial divergence, which is suitable for evaluating the conditional generation. Hence, I suggest the authors use a similar way to evaluate the classification performance of classification model trained on the generated images. This should be a good indicator to show whether the proposed 3C-GAN could generate more realistic images which facilitate the training of a classifier.\n\n2. The authors should try more complicated datasets, like CIFAR-10. Recently, CIFAR-10 has become a popular dataset as a testbed for evaluating various GANs. 
It is easy to train since its low resolution, but also means a lot since it a relative complicated scene. I would suggest the authors also run the experiments on CIFAR-10.\n\n3. The authors did not perform any ablation study. Apart from several generation results based on 3C-GAN, iIcould not found any generation results from ablated models. As such, I can hardly get a sense of the effects of different losses and know about the relative performance in the whole GAN spectrum. I strongly suggest the authors add some ablation studies. The authors should at least compare with one-layer conditional GAN. \n\n4. The proposed model merely showed two-layer generation results. There might be two reasons: one is that it is hard to extend it to more layer generation as I know, and the other one reason is the inflexible formulation to compose an image in 2.2.1 and formula (6). The authors should try some datasets like MNIST-TWO in Yang (2017) for demonstration.\n\n5. Please show f1, m1, f2, m2 separately, instead of showing the blending results in Fig3, Fig4, Fig6, Fig7, Fig9, and Fig10. I would like to see what kind of context image and foreground image 3C-GAN has generated so that I can compare it with previous works like Kwak & Zhang (2016) and Yang (2017).\n\n6. I did not understand very well the label difference loss in (5). Reducing the different between G_c(z_u, z_v, z_l) and G_c(z_u, z_v, z_l^f) seems not be able to force G1 and G2 to generate different parts of an image. G2 takes all the duty can still obtain a lower L_ld. From my point of view, the loss should be added to G1 to make G1 less prone to the variation of label information.\n\n7. Minor typos and textual errors. In Fig.1, should the right generator be G2 rather than G1? In 2.1.3 and 2.2.1, please add numbers to the equations.\n\n[Summary]\n\nThis paper proposed an interesting way of generating images, called 3C-GAN. It generates images in a layer-wise manner. To separate the context and semantic part in an image, the authors introduced several new techniques to enforce the generators in the model undertake different duties. In the experiments, the authors showed qualitative results on three datasets, MNIST, SVHN and CelebA. However, as I pointed out above, the paper missed quantitative evaluation and comparison, and ablation study. Taking all these into account, I think this paper still needs more works to make it solid and comprehensive before being accepted.",
"\n- Paper summary\n\nThe paper proposes a label-conditional GAN generator architecture and a GAN training objective for the image modeling task. The proposed GAN generator consists of two components where one focuses on generating foreground while the other focuses on generating background. The GAN training objective function utilizing 3 conditional classifier. It is shown that through combining the generator architecture and the GAN training objective function, one can learn a foreground--background decomposed generative model in an unsupervised manner. The paper shows results on the MNIST, SVHN, and Celebrity Faces datasets.\n\n- Poor experimental validation\n\nWhile it is interesting to know that a foreground--background decomposed generative model can be learned in an unsupervised manner, it is clear how this capability can help practical applications, especially no such examples are shown in the paper. The paper also fails to provide any quantitative evaluation of the proposed method. For example, the paper will be more interesting if inception scores were shown for various challenging datasets. In additional, there is no ablation study analyzing impacts of each design choices. As a result, the paper carries very little scientific value."
] | [
5,
4,
4
] | [
5,
4,
5
] | [
"iclr_2018_HkCvZXbC-",
"iclr_2018_HkCvZXbC-",
"iclr_2018_HkCvZXbC-"
] |
iclr_2018_rkQsMCJCb | Generative Adversarial Networks using Adaptive Convolution | Most existing GANs architectures that generate images use transposed convolution or resize-convolution as their upsampling algorithm from lower to higher resolution feature maps in the generator. We argue that this kind of fixed operation is problematic for GANs to model objects that have very different visual appearances. We propose a novel adaptive convolution method that learns the upsampling algorithm based on the local context at each location to address this problem. We modify a baseline GANs architecture by replacing normal convolutions with adaptive convolutions in the generator. Experiments on CIFAR-10 dataset show that our modified models improve the baseline model by a large margin. Furthermore, our models achieve state-of-the-art performance on CIFAR-10 and STL-10 datasets in the unsupervised setting. | rejected-papers | The paper proposes a GAN model with adaptive convolution kernels. The proposed idea is reasonable, but the novelty is somewhat minor and the experimental results are limited. More comprehensive experiments (e.g., other evaluation metrics) will strengthen the future revision of paper. No rebuttal was submitted.
| train | [
"SkzQ2hFxf",
"H1JNBZTef",
"Sy686Qplz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper operates under the hypothesis that the rigidity of the convolution operator is responsible in part for the poor performance of GANs on diverse visual datasets. The authors propose to replace convolutions in the generator with an Adaptive Convolution Block, which learns to generate the convolution weights and biases of the upsampling operation adaptively for each pixel location. State-of-the-art Inception scores are presented for the CIFAR-10 and STL-10 datasets.\n\nI think the idea of leveraging adaptive convolutions in decoder-based models is compelling, especially given its success in video frame interpolation, which makes me wonder why the authors chose to restrict themselves to GANs. Wouldn't the arguments used to justify replacing regular convolutions in the generator with adaptive convolution blocks apply equally well to any other decoder-based generative model, like a VAE, for instance?\n\nI find the paper lacking on the evaluation front. The evaluation of GANs is still very much an open research problem, which means that making a compelling case for the effectiveness of a proposed method requires nuance and contextualization. The authors claim a state-of-the-art Inception score but fail to explain what argument this claim supports. This is important, because the Inception score is not a universal measure of GAN performance: it provides a specific view on the ability of a generator to cover human-defined modes in the data distribution, but it does not inform on intra-class mode coverage and is blind to things like the generator collapsing on one or a few template samples per class.\n\nI am also surprised that the relationship with HyperNetworks [1] is not outlined, given that both papers leverage the idea of factoring network parameters through a second neural network.\n\nSome additional comments:\n\n- Figure 1 should be placed much earlier in the paper, preferably above Section 3. In its current state, the paper provides a lot of mathematical notation to digest without any visual support.\n- \"[...] a transposed convolution is equivalent to a convolution [...]\": This is inaccurate. A convolution's backward pass is a transposed convolution and vice versa, but they are not equivalent (especially when non-unit strides are involved).\n- \"The difficulties of training GANs is well known\": There is a grammatical error in this sentence.\n- \"If [the discriminator] is too strong, log(1 - D(G(z))) will be close to zero and there would be almost no gradient [...]\": This is only true for the minimax GAN objective, which is almost never used in practice. The non-saturating GAN objective does not exhibit this issue, as [2] re-iterated recently.\n- \"Several works have been done [...]\": There is a grammatical error here.\n- The WGAN-GP citation is wrong (Danihelka et al. rather than Gulrajani et al.).\n\nOverall, the paper's lack of sufficient convincing empirical support prevents me from recommending its acceptance.\n\nReferences:\n\n[1] Ha, D., Dai, A., and Le, Q. V. (2016). HyperNetworks. arXiv:1609.09106.\n[2] Fedus, W., Rosca, M., Lakshminarayanan, B., Dai, A. M., Mohamed, S., and Goodfellow, I. (2017). Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. arXiv:1710.08446.",
"The paper proposes to use Adaptive Convolution (Niklaus 2017) in the context of GANs. A simple paper with: idea, motivation, experiments\n\nIdea:\nIt proposes a block called AdaConvBlock that replaces a regular Convolution with two steps:\nstep 1: regress convolution weights per pixel location conditioned on the input\nstep 2: do the convolution using these regressed weights\nSince local convolutions are generally expensive ops, it provides a few modifications to the size and shape of convolutions to make it efficient (like using depthwise)\n\nMotivation:\n- AdaConvBlock gives more local context per kernel weight, so that it can generate locally flexible objects / pixels in images\n\nMotivation is hand-wavy, the claim would need good experiments.\n\nExperiments:\n- Experiments are very limited, only overfit to inception score.\n- The experiments are not constructed to support the motivation / claim, but just to show that model performance improves.\n\nInception score experiments as the only experiments of a paper are woefully inadequate. The inception score is computed using a pre-trained imagenet model. It is not hard to overfit to.\nThe experiments need to support the motivation / claim better.\nIdeally the experiments need to show:\n- inception score improvements\n- actual samples showing that this local context helped produced better local regions / shapes\n- some kind of human evaluation supporting claims\n\nThe paper's novelty is also quite limited.",
"This manuscript proposes the use of \"adaptive convolutions\", previously proposed elsewhere, in GAN generators. The authors motivate this combination as allowing for better modeling of finer structure, conditioning the filter used for upsampling on the local neighbourhood beforehand.\n\nWhile Inception scores were the only proposed metric available for a time, other metrics have now been introduced in the literature (AIS log likelihood bounds, MS-SSIM, FID) and reporting Inception scores (with all of their problems) falls short for this reviewer. Because this is just the combination of two existing ideas, a more detailed analysis is warranted. Not only is the quantitative analysis lacking but also absent is any qualitative analysis of what exactly these adaptive convolutions are learning, whether this additional modeling power is well used, etc."
] | [
4,
4,
4
] | [
5,
4,
5
] | [
"iclr_2018_rkQsMCJCb",
"iclr_2018_rkQsMCJCb",
"iclr_2018_rkQsMCJCb"
] |
iclr_2018_HJrJpzZRZ | Self-Supervised Learning of Object Motion Through Adversarial Video Prediction | Can we build models that automatically learn about object motion from raw, unlabeled videos? In this paper, we study the problem of multi-step video prediction, where the goal is to predict a sequence of future frames conditioned on a short context. We focus specifically on two aspects of video prediction: accurately modeling object motion, and producing naturalistic image predictions. Our model is based on a flow-based generator network with a discriminator used to improve prediction quality. The implicit flow in the generator can be examined to determine its accuracy, and the predicted images can be evaluated for image quality. We argue that these two metrics are critical for understanding whether the model has effectively learned object motion, and propose a novel evaluation benchmark based on ground truth object flow. Our network achieves state-of-the-art results in terms of both the realism of the predicted images, as determined by human judges, and the accuracy of the predicted flow. Videos and full results can be viewed on the supplementary website: \url{https://sites.google.com/site/omvideoprediction}. | rejected-papers | The paper proposes adversarial flow-based neural network architecture with adversarial training for video prediction. Although the reported experimental results are promising, the paper seems below ICLR threshold due to limited novelty and issues in evaluation (e.g., mechanical turk experiment). No rebuttal was submitted. | train | [
"BkGsQ4Ixz",
"ByW0MJqlM",
"BJAg3e7ZM",
"HJ99sdVbG",
"rJrjsrIkf",
"rk2BBWM1z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public"
] | [
"This is a fine paper that generally reads as a new episode in a series on motion-based video prediction with an eye towards robotic manipulation [Finn et al. 2016, Finn and Levine 2017, Ebert et al. 2017]. The work is rather incremental but is competently executed. It is in line with current trends in the research community and is a good fit for ICLR. The paper is well-written, reasonably scholarly, and contains stimulating insights.\n\nI recommend acceptance, despite some reservations. My chief criticism is a matter of research style: instead of this deluge of barely distinguishable least-publishable-unit papers on the same topic, in every single conference, I wish the authors didn’t slice so thinly, devoted more time to each paper, and served up a more substantial dish.\n\nSome more detailed comments:\n\n- The argument for evaluating visual realism never quite gels and is not convincing. The paper advocates two primary metrics: accuracy of the predicted motion and perceptual realism of the synthesized images. The argument for motion accuracy is clear and is clearly stated: it’s the measure that is actually tied to the intended application, which is using action-conditional motion prediction for control. A corresponding argument for perceptual realism is missing. Indeed, a skeptical reviewer may suspect that the authors needed to add perceptual realism to the evaluation because that’s the only thing that justifies the adversarial loss. The adversarial loss is presented as the central conceptual contribution of the paper, but doesn’t actually make a difference in terms of task-relevant metrics. A skeptical perspective on the paper is that the adversarial loss just makes the images look prettier but makes no difference in terms of task performance (control). This is an informative negative result. It's not how the paper is written, though.\n\n- The “no adversary”/“no adv” condition in Table 1 and Figure 4 is misleading. It’s not properly controlled. It is not the case that the adversarial loss was simply removed. The regression loss was also changed from l_1 to l_2. This is not right. The motivation for this control is to evaluate the impact of the adversarial loss, which is presented as the key conceptual contribution of the paper. It should be a proper control. The other loss should remain what it is in the full “Ours” condition (i.e., l_1).\n\n- The last sentence in the caption of Table 1 -- “Slight improvement in motion is observed by training with an adversary as well” -- should be removed. The improvement is in the noise.\n\n- Generally, the quantitative impact of the adversarial loss never comes together. The only statistically significant improvement is on perceptual image realism. The relevance of perceptual image realism to the intended task (control) is not substantiated, as discussed earlier.\n\n- In the perceptual evaluation procedure, the “1 second” restriction is artificial and makes the evaluated methods appear better than they are. If we are serious about evaluating image realism and working towards passing the visual Turing test, we should report results without an artificial time limit. They won’t look as flattering, but will properly report our progress on this journey. 
If desired, the results of timed comparisons can also be reported, but reporting just a timed comparison with an artificial limit of 1 second may mislead some readers into thinking that we are farther along than we actually are.\n\n\nThere are some broken sentences that mar an otherwise well-written paper:\n\n- End of Section 1, “producing use a learned discriminator and show improvements in visual quality”\n\n- Beginning of Section 3, “We first present the our overall network architecture”\n\n- page 4, “to choose to copy pixels from the previous frame, used transformed versions of the previous frame”\n\n- page 4, “convolving in the input image with”\n\n- page 5, “is know to produce”\n\n- page 5, “an additional indicating”\n\n- page 5, “Adam Kingma & Ba (2015)” (use the other cite command)\n\n- page 5, “we observes”\n\n- page 5, “smaller batch sizes degrades”\n\n- page 5, “larger batch sizes provides”\n",
"This paper is concerned with video prediction, for use in robotic motion planning. The task is performed on tabletop videos of a robotic arm manipulator interacting with various small objects. They use a prior model proposed in Finn et al. 2016, make several incremental architectural improvements, and use an adversarial loss function instead of an L2 loss. They also propose a new metric, motion accuracy, which uses the accuracy of the predicted position of the object instead of conventional metrics like PSNR, which is more relevant for robotic motion planning.\n\nThey obtain significant quantitative improvements over the previous 2 papers in this domain (video prediction on tabletop with robotic arm and objects) on both type of metrics - image assessment and motion accuracy. They also evaluate realism images using AMT fooling - asking turks to chose the fake between between real and generated images, and obtain substantial improvements on this metric as well. \n\nA major point of concern is that they do not use the public dataset proposed in Finn et al. 2016, but use their own (smaller) dataset. They do not mention whether they train the previous methods on the new dataset, and some of their reported improvements may be because of this. They also do not report results on unseen objects, when occlusions are present, and on human motion video prediction, unlike the other papers.\n\nThe adversarial loss helps significantly only with AMT fooling or realism of images, as expected because GANs produce sharp images rather than distributions, and is not very relevant for robot motion planning. The incremental architectural changes, different dataset and training are responsible for most of the other improvements.",
"\n1) Summary\nThis paper proposes a flow-based neural network architecture and adversarial training for multi-step video prediction. The neural network in charge of predicting the next frame in a video implicitly generates flow that is used to transform the previously observed frame into the next. Additionally, this paper proposes a new quantitative evaluation criteria based on the observed flow in the prediction in comparison to the groundtruth. Experiments are performed on a new robot arm dataset proposed in the paper where they outperform the used baselines.\n\n\n2) Pros:\n+ New quantitative evaluation criteria based on motion accuracy.\n+ New dataset for robot arm pushing objects.\n\n3) Cons:\nOverall architectural prediction network differences with baseline are unclear:\nThe differences between the proposed prediction network and [1] seem very minimal. In Figure 3, it is mentioned that the network uses a U-Net with recurrent connections. This seems like a very minimal change in the overall architecture proposed. Additionally, there is a paragraph of “architecture improvements” which also are minimal changes. Based on the title of section 3, it seems that there is a novelty on the “prediction with flow” part of this method. If this is a fact, there is no equation describing how this flow is computed. However, if this “flow” is computed the same way [1] does it, then the title is misleading.\n\n\nAdversarial training objective alone is not new as claimed by the authors:\nThe adversarial objective used in this paper is not new. Works such as [2,3] have used this objective function for single step and multi-step frame prediction training, respectively. If the authors refer to the objective being new in the sense of using it with an action conditioned video prediction network, then this is again an extremely minimal contribution. Essentially, the authors just took the previously used objective function and used it with a different network. If the authors feel otherwise, please comment on why this is the case.\n\n\nIncomplete experiments:\nThe authors only show experiments on videos containing objects that have already been seen, but no experiments with objects never seen before. The missing experiment concerns me in the sense that the network could just be memorizing previously seen objects. Additionally, the authors present evaluation based on PSNR and SSIM on the overall predicted video, but not in a per-step paradigm. However, the authors show this per-step evaluation in the Amazon Mechanical Turk, and predicted object position evaluations.\n\n\nUnclear evaluation:\nThe way the Amazon Mechanical Turk experiments are performed are unclear and/or not suited for the task at hand.\nBased on the explanation of how these experiments are performed, the authors show individual images to mechanical turkers. If we are evaluating the video prediction task for having real or fake looking videos, the turkers need to observe the full video and judge based on that. If we are just showing images, then they are evaluating image synthesis, which do not necessarily contain the desired properties in videos such as temporal coherence.\n\n\nAdditional comments:\nThe paper needs a considerable amount of polishing.\n\n\n4) Conclusion:\nThis paper seems to contain very minimal changes in comparison to the baseline by [1]. The adversarial objective is not novel as mentioned by the authors and has been used in [2,3]. 
Evaluation is unclear and incomplete.\n\n\nReferences:\n[1] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.\n[2] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.\n[3] Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee. Decomposing Motion and Content for Natural Video Sequence Prediction. In ICLR, 2017\n",
"In this paper a neural-network based method for multi-frame video prediction is proposed. It builds on the previous work of [Finn et al. 2016] that uses a neural network to predict transformation parameters of an affine image transformation for future frame prediction, an idea akin to the Spatial Transformer Network paper of [Jaderberg et al., 2015]. What is new compared to [Finn et al. 2016] is that the authors managed to train the network in combination with an adversarial loss, which allows for the generation of more realistic images. Time series modelling is performed via convolutional LSTMs. The authors evaluate their method based on a mechanical turk survey, where humans are asked to judge the realism of the generated images; additionally, they propose to measure prediction quality by the distance between the manually annotated positions of objects within ground truth and predicted frames.\n\nMy main concerns with this paper are novelty, reproducibility and evaluation.\n\n* Novelty. The network design builds heavily on the work of [Finn et al., 2106]. A number of design decisions (such as instance normalization) seem to help yield better results, but are minor contributions. A major contribution is certainly the combination with an adversarial loss, which is a non-trivial task. However, the authors claim that their method is the first to combine multi-frame video prediction with an adversarial loss, which is not true. A recent work, presented at CVPR this year also does multi-frame prediction featuring an adversarial loss and explicitly models and captures the full dense optical flow (though in the latent space) that allows non-trivial motion extrapolation to future frames. This work is neither mentioned in the related work nor compared to. \n \nLu et al. , Flexible Spatio-Temporal Networks for Video Prediction, CVPR 2017\n\nThis recent work builds on another highly relevant work, that is also not mentioned in the paper:\n\nPatraucean et al. Spatio-temporal video autoencoder with differentiable memory, arxiv 2017\n\nSince this is prior state-of-the-art and directly applicable to the problem, a comparison is a must. \n\n* Reproducibility and evaluation\nThe description of the network is quite superficial. Even if the authors released their code used for training (which is not mentioned), I think the authors should aim for a more self-contained exposition. I doubt that a PhD student would be able to reimplement the method and achieve comparable results given the paper at hand only. It is also not mentioned whether the other methods that the authors compare to are re-trained on their newly proposed training dataset. Hence, it remains unclear to what extend the achieved improvements are due to the proposed network design changes or the particular dataset they use for training. The authors also don't show any results on previous datasets, which would allow for a more objective comparison to existing state of the art. Another point of criticism is the way the Amazon Mechanical Turk evaluation was performed. Since only individual images were shown, the evaluation mainly measures the quality of the generated images. Since the authors combine their method with a GAN, it is not surprising that the generated images look more realistic. 
However, since the task is *video* prediction, it seems more natural to show small video snippets rather than individual images, which would also evaluate temporal consistency.\n\n* Further comments:\nThe paper contains a number of broken sentences, typos and requires a considerable amount of polishing prior to publication.\n",
"The dataset will be released upon publication. In the meantime, there are similar datasets that are publicly available such as the following:\nhttps://sites.google.com/site/brainrobotdata/home/push-dataset\nhttps://sites.google.com/view/sna-visual-mpc",
"Hello,\nwe want to take part in the ICLR 2018 Reproducibility Challenge (http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html) and replicate the experiments described in this paper. Would it be possible to access the trajectories dataset that was described in a paper and used for training?\n\nThank you!"
] | [
7,
3,
3,
3,
-1,
-1
] | [
5,
4,
5,
5,
-1,
-1
] | [
"iclr_2018_HJrJpzZRZ",
"iclr_2018_HJrJpzZRZ",
"iclr_2018_HJrJpzZRZ",
"iclr_2018_HJrJpzZRZ",
"rk2BBWM1z",
"iclr_2018_HJrJpzZRZ"
] |
iclr_2018_Hy8hkYeRb | A Deep Predictive Coding Network for Learning Latent Representations | It has been argued that the brain is a prediction machine that continuously learns how to make better predictions about the stimuli received from the external environment. For this purpose, it builds a model of the world around us and uses this model to infer the external stimulus. Predictive coding has been proposed as a mechanism through which the brain might be able to build such a model of the external environment. However, it is not clear how predictive coding can be used to build deep neural network models of the brain while complying with the architectural constraints imposed by the brain. In this paper, we describe an algorithm to build a deep generative model using predictive coding that can be used to infer latent representations about the stimuli received from external environment. Specifically, we used predictive coding to train a deep neural network on real-world images in a unsupervised learning paradigm. To understand the capacity of the network with regards to modeling the external environment, we studied the latent representations generated by the model on images of objects that are never presented to the model during training. Despite the novel features of these objects the model is able to infer the latent representations for them. Furthermore, the reconstructions of the original images obtained from these latent representations preserve the important details of these objects. | rejected-papers | The paper attempts to develop a method for learning latent representations using deep predictive coding and deconvolutional networks. However, the theoretical motivation for the proposed model in relation to existing methods (such as original predictive coding, deconvolutional networks, ladder networks, etc.), as well as the empirical comparison against them is unclear. The experimental results on the CIFAR10 dataset do not provide much insight on what kind of meaningful/improved representations can be learned in comparison to existing methods, both qualitatively and quantitatively. No rebuttal was provided. | train | [
"SJpl9FSlz",
"ryJ6sRYlf",
"HkrTEd9gf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Quality\n\nThe authors introduce a deep network for predictive coding. It is unclear how the approach improves on the original predictive coding formulation of Rao and Ballard, who also use a hierarchy of transformations. The results seem to indicate that all layers are basically performing the same. No insight is provided about the kinds of filters that are learned.\n\nClarity\n\nIn its present form it is hard to assess if there are benefits to the current formulation compared to already existing formulations. The paper should be checked for typos.\n\nOriginality\n\nThere exist alternative deep predictive coding models such as https://arxiv.org/abs/1605.08104. This work should be discussed and compared.\n\nSignificance \n\nIt is hard to see how the present paper improves on classical or alternative (deep) predictive coding results.\n\nPros\n\nRelevant attempt to develop new predictive coding architectures\n\nCons\n\nUnclear what is gained compared to existing work.",
"The paper \"A Deep Predictive Coding Network for Learning Latent Representations\" considers learning of a generative neural network. The network learns unsupervised using a predictive coding setup. A subset of the CIFAR-10 image database (1000 images horses and ships) are used for training. Then images generated using the latent representations inferred on these images, on translated images, and on images of other objects are shown. It is then claimed that the generated images show that the network has learned good latent representations.\n\nI have some concerns about the paper, maybe most notably about the experimental result and the conclusions drawn from them. The numerical experiments are motivated as a way to \"understand the capacity of the network with regards to modeling the external environment\" (abstract). And it is concluded in the final three sentences of the paper that the presented network \"can infer effective latent representations for images of other objects\" (i.e., of objects that have not been used for training); and further, that \"in this regards, the network is better than most existing algorithms [...]\".\n\nI expected the numerical experiments to show results instructive about what representations or what abstractions are learned in the different layers of the network using the learning algorithm and objectives suggested. Also some at least quantifiable (if not benchmarked) outcomes should have been presented given the rather strong claims/conclusions in abstract and discussion/conclusion sections. As a matter of fact, all images shown (including those in the appendix) are blurred versions of the original images, except of one single image: Fig. 4 last row, 2nd image (and that is not commented on). If these are the generated images, then some reconstruction is done by the network, fine, but also not unsurprising as the network was told to do so by the used objective function. What precisely do we learn here? I would have expected the presentation of experimental results to facilitate the development of an understanding of the computations going on in the trained network. How can the reader conclude any functioning from these images? Using the right objective function, reconstructions can also be obtained using random (not learned) generative fields and relatively basic models. The fact that image reconstruction for shifted images or new images is evidence for a sophisticated latent representations is, to my mind, not at all shown here. What would be a good measure for an \"effective latent representation\" that substantiates the claims made? The reconstruction of unseen images is claimed central but as far as I could see, Figures 2, 3, and 4 are not even referred to in the text, nor is there any objective measure discussed. Studying the relation between predictive coding and deep learning makes sense, but I do not come to the same (strong) conclusions as the author(s) by considering the experimental results - and I do not see evidence for a sophisticated latent representation learned by the network. I am not saying that there is none, but I do not see how the presented experimental results show evidence for this.\n\nFurthermore, the authors stress that a main distinguishing feature of their approach (top of page 3) is that in their network information flows from latent space to observed space (e.g. in contrast to CNNs). 
That is a true statement but also one which is true for basically all generative models, e.g., of standard directed graphical models such as wake-sleep approaches (Hinton et al., 1995), deep SBNs and more recent generative models used in GANs (Goodfellow et al, 2014). Any of these references would have made a lot of sense.\n\nWith my evaluation I do not want to be discouraging about the general approach. But I can not at all give a good evaluation given the current experimental results (unless substantial new evidence which make me evaluate these results differently is provided in a discussion).\n\n\nMinor:\n\n- no legend for Fig. 1\n\n-notes -> noted\n\nhave focused\n\n\n\n\n",
"The paper attempts to extend the predictive coding model to a multilayer network. The math is developed for a learning rule, and some demonstrations are shown for reconstructions of CIFAR-10 images.\n\nThe overall idea and approach being pursued here is a good one, but the model needs further development. It could also use better theoretical motivation - i.e., what sorts of representations do you expect to emerge in higher layers? Can you demonstrate this with a toy example and then extend to real data?\n\nThat the model can reconstruct images per se is not particularly interesting. What we would like to see is that it has somehow learned a more useful or meaningful representation of the data. For example, what do the learned weights look like? That would tell you something about what has been learned.\n"
] | [
4,
3,
3
] | [
4,
4,
5
] | [
"iclr_2018_Hy8hkYeRb",
"iclr_2018_Hy8hkYeRb",
"iclr_2018_Hy8hkYeRb"
] |
iclr_2018_ry4S90l0b | A Self-Training Method for Semi-Supervised GANs | Since the creation of Generative Adversarial Networks (GANs), much work has been done to improve their training stability, their generated image quality, their range of application but nearly none of them explored their self-training potential. Self-training has been used before the advent of deep learning in order to allow training on limited labelled training data and has shown impressive results in semi-supervised learning. In this work, we combine these two ideas and make GANs self-trainable for semi-supervised learning tasks by exploiting their infinite data generation potential. Results show that using even the simplest form of self-training yields an improvement. We also show results for a more complex self-training scheme that performs at least as well as the basic self-training scheme but with significantly less data augmentation. | rejected-papers | The paper presents self-training scheme for GANs. The proposed idea is simple but reasonable, and the experimental results show promise for MNIST and CIFAR10. However, the novelty of the proposed method seems relatively small and experimental results lack comparison against other stronger baselines (e.g., state-of-the-art semi-supervised methods). Presentation needs to be improved. More comprehensive experiments on other datasets would also strengthen the future version of the paper. | test | [
"SknKUAteG",
"S1e2kO9gG",
"r1UheZ6gG",
"B11QNrOfM",
"Sk7Azr_ff",
"rkoVGHOMf",
"r1e4YlmWG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes to use self-training strategies for using unlabeled data in GAN. Experiments on only one data set, i.e., MNIST, are conducted \n\nPros:\n* Studying how to use unlabeled data to improve performance of GAN is of technical importance. The use of the self-training in GAN for exploiting unlabeled data is sound.\n \nCons:\n* The novelty and technical contribution is low. The unlabeled data are exploited by off-the-shelf self-training strategies, where the base learner is fixed to GAN. Using GAN does not make the self-training strategy special to the existing self-training approaches. Thus, the proposed approaches are actually a straight application of the existing techniques. In fact, It would be more interesting if the unlabeled data could be employed to the “G” and “A” in GAN.\n\n* In each self-training iteration, GAN needs to be retrained, whose computational cost is high..\n\n* Only one data set is used in the experiment. Some widely-used datasets, like SVHN or CIFAR-10, are not used in the experiment. \n\n* Important baseline methods are missing. The proposed methods should be evaluated with the state-of-the-art semi-supervised deep learning methods, such as those mentioned in related work section.\n",
"The paper presents to combine self-learning and GAN. The basic idea is to first use GAN to generate data, and then infer the pseudo label, and finally use the pseudo labeled data to enhance the learning process. Experiments are conducted on one image data set. The paper contains several deficiencies.\n\n1.\tThe experiment is weak. Firstly, only one data set is employed for evaluation, which is hard to justify the applicability of the proposed approach. Secondly, the compared methods are too few and do not include many state-of-the-art SSL methods like graph-based approaches. Thirdly, in these cases, the results in table 1 contain evident redundancy. Fourthly, the performance improvement over compared method is not significant and the result is based on 3 splits of data set, which is obviously not convincing and involves large variance. \n2.\tThe paper claims that ‘when paired with deep, semi-supervised learning has had a few success’. I do not agree with such a claim. There are many success SSL deep learning studies on embedding. They are not included in the discussions. \n3.\tThe layout of the paper could be improved. For example, there are too many empty spaces in the paper. \n4.\tOverall technically the proposed approach is a bit straightforward and does not bring too much novelty.\n5.\tThe format of references is not consistent. For example, some conference has short name, while some does not have. ",
"This paper presents a self-training scheme for GANs and tests it on image (NIST) data.\n\nSelf-training is a well-known and usually effective way to learn models in a semi-supervised setting. It makes a lot of sense to try this with GANs, which have also been shown to help train Deep Learning methods.\n\nThe novelty seems quite limited, as both components (GANs and self-training) are well-known and their combination, given the context, is a fairly obvious baseline. The small changes described in Section 4 are not especially motivated and seem rather minor. [btw you have a repeated sentence at the end of that section]\n\nExperiments are also quite limited. An obvious baseline would be to try self-training on a non-GAN model, in order to determine the influence of both components on the performance. Results seem quite inconclusive: the variances are so large that all method perform essentially equivalently. On the other hand, starting with 10 labelled examples seems to work marginally better than 20. This is a bit weird and would justify at least a mention, and idealy some investigation.\n\nIn summary, both novelty and impact seem limited. The idea makes a lot of sense though, so it would be great to expand on these preliminary results and explore the use of GANs in semi-supervised learning in a more thorough manner.\n\n[Response read -- thanks]",
"Regarding the Cons:\n\n* One important part of the paper is the use of generated data. Furthermore, while it might seem like a straightforward approach, there is no real guarantee that these methods would work as well unless we have empirical proofs. The second method, in particular, is a complex one that might go wrong in the settings of GAN. \n\n* There is indeed a computational cost but training many times is already something that is commonly done e.g. when searching for hyperparameters. Also, with the advance of computer hardware and parallel processing machine learning libraries like Chainer, this problem will become less important. However, decreasing the computational time while keeping the same performance is something that can be investigated in future work. \n\n* We agree, we have added results for CIFAR-10. ",
"1. Firstly, yes, we agree and we have added results for CIFAR-10, see above. Secondly, what we wanted to show was the success of self-training on the Improved GAN which already does some semi-supervised learning. Thirdly and fourthly, the results might seem similar for both self-training methods but they still show an improvement over a non-self-trained GAN which is one of the goals of our paper. The difference is more important with the CIFAR-10 results. \n2. We did not say \"has had few success\", we said \"has had a few success\". The former means that there was little success while the latter means that there were successes, which is what we claim. If the question is on the choice of the word \"few\" vs \"many\", then okay we can change \"few\" to \"many\". \n3. Okay, we will rearrange the layout. \n4. The Basic Self-Training scheme might seem obvious and straightforward but the second self-training method should not be considered obvious: label inversion, disagreement calculation and multiple subset candidates is not necessarily something that anyone can think about on top of their head. Furthermore, theoretical justifications exist in the original published paper. \n5. Okay, we will fix the references. \n",
"Although it might make a lot of sense, it seems that no paper has been published about the combination of both. Many things seem obvious in hindsight but a priori we cannot know for sure if these things will work out or not. \n\nThe Basic Self-Training method may seem obvious but the Improved Self-Training method is not obvious. Thinking about selecting multiple subsets of unlabelled data and inverting their labels to check their effect on the decision boundary is not something that people would immediately think about. In fact, it is not even obvious why this might work. The original paper testing this method presented theoretical justification as to why it is a good idea to try on simple hypotheses but not on a complex one such as GANs. \n\nOne of the goals of the paper was to test the self-training method specifically on GANs to see how it can be applied to them. Using a baseline that is not a GAN does not seem to provide information towards that goal. The MNIST results appear equivalent in both self-training methods but they still show an improvement over a non-self-trained GAN which is one of the goals we wanted to achieve with this paper.\n\nAlthough one method seems obvious to use, no other publications seem to have done a similar proof of concept. Moreover, the second self-training method used should not be thought of as obvious; label inversion, disagreement calculation and multiple subset candidates is not necessarily something that anyone can think about on top of their head. ",
"Here are some results on the CIFAR-10 dataset. \n\n== Error Rates == \nVanilla Improved GAN: 0.2513 ± 0.0037\nBasic Self-Training: 0.2471 ± 0.0002\nImproved Self-Training: 0.2231 ± 0.0029\n\n== Improvements Over Vanilla Improved GAN == \nVanilla Improved GAN: 0\nBasic Self-Training: 0.0042 ± 0.0039\nImproved Self-Training: 0.0282 ± 0.0008\n\nThe results are averaged over 2 different seeds and the error margins represent one standard deviation. \n\nHere, we notice significant improvement of our Improved Self-Training method over the original vanilla Improved GAN."
] | [
3,
4,
3,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ry4S90l0b",
"iclr_2018_ry4S90l0b",
"iclr_2018_ry4S90l0b",
"SknKUAteG",
"S1e2kO9gG",
"r1UheZ6gG",
"iclr_2018_ry4S90l0b"
] |
iclr_2018_rJg4YGWRb | Attention-based Graph Neural Network for Semi-supervised Learning | Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model, that removes all the intermediate fully-connected layers, is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where number of labeled examples are small. This in turn allows a room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation networks datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other. | rejected-papers | A version of GCNs of Kipf and Welling is introduced with (1) no non-linearity; (2) a basic form of (softmax) attention over neighbors where the attention scores are computed as the cosine of endpoints' representations (scaled with a single learned scalar). There is a moderate improvement on Citeseer, Cora, Pubmed.
Since the use of gates with GCNs / Graph neural networks is becoming increasingly common (starting perhaps with GGSNNs of Li et al, ICLR 2016) and using attention in graph neural networks is also not new (see reviews and comments for references), the novelty is very limited. In order to make the submission more convincing, the authors could: (1) present results on harder datasets; (2) carefully evaluate against other forms of attention (i.e. previous work).
As it stands, though it is interesting to see that such a simple model performs well on the three datasets, I do not see it as an ICLR paper.
Pros:
-- a simple model that achieves results close to / on par with the state of the art
Cons:
-- limited originality
-- either results on harder datasets and/or evaluation against other forms of attention (i.e. previous work) are needed
| train | [
"HJb-xDvHG",
"S1Z9bmyZf",
"rJmKbdIgM",
"HJvS2zhgz",
"S1rCoFiQz",
"Hk129YimG",
"H1w_FFiQz",
"HyJAutiQM",
"rk5cCabxf",
"H1buZ4xlz",
"rkWt6QPyG",
"rk8rhql1G"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"To add to the discussion, we would like to draw the attention to our recent paper accepted by AAAI-2018 as oral presentation (https://arxiv.org/abs/1801.07606). We have shown in our paper that the convolution layer of GCNs acts as \"Laplacian smoothing\" on the vertex features, which is the key reason why GCNs work. This may also help explain why the GLN model works just as well as the GCN model. ",
"SUMMARY.\n\nThe paper presents an extension of graph convolutional networks.\nGraph convolutional networks are able to model nodes in a graph taking into consideration the structure of the graph.\nThe authors propose two extensions of GCNs, they first remove intermediate non-linearities from the GCN computation, and then they add an attention mechanism in the aggregation layer, in order to weight the contribution of neighboring nodes in the creation of the new node representation.\nInterestingly, the proposed linear model obtains results that are on-par with the state-of-the-art model, and the linear model with attention outperforms the state-of-the-art models on several standard benchmarks.\n\n\n----------\n\nOVERALL JUDGMENT\nThe paper is, for the most part, clear, although some improvement on the presentation would be good (see below).\nAn important issue the authors should address is the notation consistency, the indexes i and j are used for defining nodes and labels, please use another index for labels.\nIt is very interesting that stripping standard GCN out of nonlinearities gives pretty much the same results, I would appreciate if the authors could give some insights of why this is the case.\nIt seems to me that an important experiment is missing here, have the authors tried to apply the attention model with the standard GCN?\nI like the idea of using a very minimal attention mechanism. The similarity function used for the attention (cosine) is symmetric, this means that if two nodes are connected in both directions, they will be equally important for each other. But intuitively this is not true in general. It would be interesting if the authors could elaborate a bit more on the choice of the similarity function.\n\n\n----------\n\nDETAILED COMMENTS\nPage 2. I do not understand the point of so many details on Graph Laplacian Regularization.\nPage 2. The use of the term 'skip-grams' is somewhat odd, it is not clear what the authors mean with that.\nPage 3. 'the natural random walk' ???\nBottom of page 4. When the authors introduce the attention based network also introduce the input/embedding layer, I believe there is a better place to do so instead of that together with the most important contribution of the paper.\n",
"The paper proposes graph-based neural network in which weights from neighboring nodes are adaptively determined. The paper shows importance of propagation layer while showing the non-linear layer does not have significant effect. Further the proposed method also provides class relation based on the edge-wise relevance.\n\nThe paper is easy to follow and the idea would be reasonable. \n\nImportance of the propagation layer than the non-linear layer is interesting, and I think it is worth showing.\n\nVariance of results of AGNN is comparable or even smaller than GLN. This is a bit surprising because AGNN would be more complicated computation than GLN. Is there any good explanation of this low variance of AGNN?\n\nInterpretation of Figure 2 is not clear. All colored nodes except for the thick circle are labeled node? I couldn't judge those predictions are appropriate or not.",
"The paper proposes a semi supervised learning algorithm for graph node classification. The Algorithm is inspired from Graph Neural Networks and more precisely graph convolutional NNs recently proposed by ref (Kipf et al 2016)) in the paper. These NNs alternate 2 types of layers: non linear projection and diffusion, the latter incorporates the graph relational information by constraining neighbor nodes to have close representations according to some “graph metrics”. The authors propose a model with simplified projection layers and more sophisticated diffusion ones, incorporating a simple attention mechanism. Experiments are performed on citation textual datasets. Comparisons with published results on the same datasets are presented.\n\nThe paper is clear and develops interesting ideas relevant to semi-supervised graph node classification. One finding is that simple models perform as well as more complex ones in this setting where labeled data is scarce. Another one is the importance of integrating relational information for classifying nodes when it is available. The attention mechanism itself is extremely simple, and learns one parameter per diffusion layers. One parameter weights correlations between node embeddings in a diffusion layer. I understand that you tried more complex attention mechanisms, but the one finally selected is barely an attention mechanism and rather a simple “importance” weight. This is not a criticism, but this makes the title somewhat misleading. The experiments show that the proposed model is state of the art for graph node classification. The performance is on par with some other recent models according to table 2. The other tests are also interesting, but the comparison could have been extended to other models e.g. GCN.\nYou advocate the role of the diffusion layers, and in the experiments you stack 3 to 4 such layers. It would be interesting to have indications on the compromise performance/ number of diffusion layers and on the evolution of these performances when adding such layers.\nThe bibliography on semi-supervised learning in graphs for classification is light and should be enhanced.\nOverall this is an interesting paper with nice findings. The originality is however relatively limited in a field where many recent papers have been proposed, and the experiments need to be completed.\n",
"We thank the reviewers and the other commenters for helping us improve our work and its presentation. Taking the reviews and comments to heart we have made several changes which, we believe, greatly improve our paper. We added comparison of performance of AGNN with different number of propagation layers in Appendix C. In the Appendix D, we added experimental results of GCN on both random splits and cross-validation settings. Further, we have expanded the bibliography in the Sections 2 and 4.1. As per the reviews we have also made changes to notations and presentation style in Sections 3 and 4. In Section 5.2 and Appendix A, we corrected the order of class names. We improved the caption and marked training set nodes in Figures 2, 4, 5 and 6. Finally, we made some minor changes in the text.",
"We are thankful for your review and insightful comments.\n\n1. Confusing notation is corrected: In the revised version $c$ indexes a label.\n\n2. Why GLN works: For semi-supervised learning, we believe that the primary gain of using graph neural network comes from the “Averaging” effect. Similar to denoising pixels in images, by averaging neighbors features, we get a denoised version of current nodes’ features. This gives significant gain over those estimations without denoising (such as Mulit-Layer Perceptron in Table 2). This, we believe, is why GLN is already achieving the state-of-the-art performance. The focus of this paper is how to get the next remaining gain, which we achieve by proposing asymmetric averaging using “attention”. So far, we did not see any noticeable gain in non-linear activation for semi-supervised learning. However, we believe such non-linearity can be important for other applications, such as graph classification tasks on molecular networks. \n\n3. Attention in GCN: GCN with attention did not give gain over our AGNN architecture, which is somewhat expected as GCN and GLN have comparable performances, within the error margin of each other. Note that from the architecture complexity perspective AGNN is simpler than GCN with attention, meaning that AGNN might have a better chance explaining the data.\n\n4. Symmetric attention: Even though the scaled cosine similarity would be symmetric between two connected nodes $i$ and $j$, the attention value itself can be different due to the fact that softmax computations are calculated on different neighborhoods: $N(i)$ and $N(j)$ respectively.\nBut we agree that attention mechanism has an element of symmetry and this might be alleviated by using more complex attention mechanism. As the reviewer pointed out, we chose the simple attention mechanism here; we tried various attention mechanisms with varying degrees of complexity, and found the simple attention mechanism to give the best performance. Training complex attention is challenging, and we would like to explore more complex ones in our future work.\n\nResponse to detailed comments:\n\n1. Details on Graph Laplacian Regularization: We added details about Laplacian regularization for completeness of discussion of previous work and because Laplacian regularizations closely related to the propagations layers used in almost all Graph Neural Network papers.\n2. ‘Skip-grams’: We added some clarification on the use of ‘skip-grams’ in the revised version.\n3. ‘Natural random walk’ on a graph is random walk where one move from a node to one of its neighbors selected with uniform probability. We have clarified this in the revised version.\n4. Presentation of the Attention-based Graph Neural Network: Thanks for pointing this out. We have made some changes to the presentation style.\n",
"Thank you for reviewing our paper and pointing out missed experiments and inconsistencies.\n\n1. Attention mechanism: It is true as the reviewer pointed out that our attention mechanism is very simple. We settled on this choice after training/testing several attention mechanisms, most of which are more complex than the one we propose. The proposed simple attention mechanism gave the best performance, among those we tried. We believe this is due to the fact that complex attention mechanisms are harder to train as there are more parameters to learn.\n\n2. GCN on other training sets: The reason we do not report GCN performance in tables 2 and 3 is that we made it our rule not to run other researcher’s algorithms ourselves, at the fear of not doing justice in the hyperparameters we need to choose. However, given the interest in the numerical comparisons, as the reviewer pointed out, in the revised version, we run these experiments and reported the performance of GCN in the appendix D (as it might give the wrong impression that those results are performed by the authors of GCN, if we put it in the table in the main text).\n\n3. Choice of number of diffusion layers: Thanks for pointing this out. We have added a table in the appendix C which contains testing accuracies of AGNN model with different number of diffusion layers.\n\n4. Regarding bibliography: We have expanded the bibliography on semi-supervised learning using graphs. Please see the section 2 in the revised manuscript.\n",
"Thank you for your time, review and valuable comments.\n\n1. Regarding the similar variance of results of AGNN and GLN: In Table 2 of the original version we don’t report the variance or standard-deviation of accuracies of the trials, but we report (as mentioned in paragraph 1 on page 3 of original version) standard-error which defined as standard-deviation/square-root(number of trials) (https://en.wikipedia.org/wiki/Standard_error). That being said, when the training data is fixed (as is the case for Table 2), the variance of GLN is smaller than that of AGNN as predicted by the reviewer, as the only source of randomness is the initialization of the neural network weights. On the other hand, when the training data is chosen randomly (As is the case for Tables 3 and 4), there are two sources of randomness and the variance of GLN and AGNN are harder to predict and compare. We could not predict how different choices of the training data affects the accuracy, and it can happen that GLN has larger variance than AGNN.\n\n2. Regarding Figure 2.: We apologize for the lack of clarity in its caption. The thick nodes are from the test set whose labels are not known to the model at training time. For clarification, we have now added `*’ (asterisk) to mark nodes from the training set whose labels were revealed to the model during training (e.g. Figure 4). Coincidentally none of the neighborhood in Figure 2 have any nodes from the training set.\n",
"We agree that graph classification is another exciting application where graph neural networks are making breakthroughs. There are several key differences in the dataset (from citation networks) and we have not tried the idea of linear architecture for the molecular dataset yet. For example, the edges have attributes. There are straight forward ways to incorporate such information into GNNs, but we have not pursued this direction yet. I do agree the experiments you suggested will both (a) clarify what the gain is in non-linear activation; and (b) give insights on how different datasets (and applications) might require different architectures. \n\nFor the linear model, we did not tune the hyper parameters and the same hyper parameters are used as your (Kipf and Welling) original GCN. We made a small change in the stopping criteria to take the best model in validation error out of all epochs. We did not see any significant change when we use the same stopping criteria as GCN. We will make this explicit during the revision process. Overall, there was no hyperparameter tuning for the linear model, and all the numbers should provide fair comparisons.\n\nThank you for the references, we will surely include and discuss all the great prior work you pointed out. \n\n",
"Very interesting work!\n\nYour insight about using a linear activation function on the hidden layers of a graph neural net looks interesting and indeed simplifies this class of models significantly. Have you been able to verify this architecture on some more challenging tasks, e.g. for molecule classification or the tasks presented in https://arxiv.org/abs/1511.05493, where graph neural networks typically show very strong performance as well? \n\nI am also wondering about your choice of hyper parameters when you compare your linear model to the one in https://arxiv.org/abs/1609.02907 : do you similarly use the same set of hyper parameters for all three datasets (Cora, Citeseer and Pubmed), or do you tune them individually? To make your result stronger, it would be good to tune the baseline (GCN) with the same procedure and include GCN baseline results on all of your experiments - it should be very simple by just running: https://github.com/tkipf/gcn/\n\nAs noted by Yedid Hoshen, it would be good if you could refer to some earlier work on graph neural networks with attention mechanisms:\n\nhttps://arxiv.org/abs/1703.07326 - Introduces \"Neighborhood attention\"\nhttps://arxiv.org/abs/1706.06383 - Improved version of \"Neighborhood attention\"\nhttps://arxiv.org/abs/1706.06122 - Attention mechanism in a graph neural net model for multi-agent reinforcement learning (as noted by Yedid Hoshen)",
"Thank you for your interest in our paper and bringing the VAIN model to our attention.\n\nWe see that VAIN uses attention between multiple-agents in a system. We were not aware of this line of literature when we submitted the paper. We will cite this line of work in our final version. Below is a comparison between VAIN and Attention-based-Graph Neural Network.\n\nMain similarities and differences between VAIN and Attention-based Graph Neural Network (AGNN) are as follows:\n1. Experimental results: AGNN is tested on semi-supervised classification of nodes on a graph where as VAIN is tested on prediction of future state in multi-agent systems.\n\n2. Side information: AGNN is graph processing neural network, but VAIN model does not take graph as an input as initially proposed (although it could). In VAIN, it is assumed that every agent can possibly interact with every other agent in the system. Where as in AGNN we have a known graph in which real world first order interaction between two nodes are represented as edges and attention is computed only for these first order interactions. VAIN clubs all the higher order (long range) interactions into a single attention mechanism, where as AGNN computes higher order interactions through multiple hops of first order attention mechanism.\n",
"Nice work! \n\nIt would be interesting to compare this work and VAIN (NIPS'17) https://arxiv.org/abs/1706.06122"
] | [
-1,
6,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"S1Z9bmyZf",
"iclr_2018_rJg4YGWRb",
"iclr_2018_rJg4YGWRb",
"iclr_2018_rJg4YGWRb",
"iclr_2018_rJg4YGWRb",
"S1Z9bmyZf",
"HJvS2zhgz",
"rJmKbdIgM",
"H1buZ4xlz",
"iclr_2018_rJg4YGWRb",
"rk8rhql1G",
"iclr_2018_rJg4YGWRb"
] |
iclr_2018_HJRV1ZZAW | FAST READING COMPREHENSION WITH CONVNETS | State-of-the-art deep reading comprehension models are dominated by recurrent neural nets. Their sequential nature is a natural fit for language, but it also precludes parallelization within an instance and often becomes the bottleneck for deploying such models to latency critical scenarios. This is particularly problematic for longer texts. Here we present a convolutional architecture as an alternative to these recurrent architectures. Using simple dilated convolutional units in place of recurrent ones, we achieve results comparable to the state of the art on two question answering tasks, while at the same time achieving up to two orders of magnitude speedups for question answering. | rejected-papers | The key motivation for the work is producing both an efficient (parallelizable / fast) and accurate reading comprehension model. At least two reviewers are not convinced that this goal is really achieved (e.g., no comparison to hierarchical modeling, performance is not as strong). I also share concerns of R1 that, without proper ablation search and more careful architecture choice, the modeling decisions seem somewhat arbitrary.
+ the goal (of achieving effective reading comprehension models) is important
- alternative parallelization techniques (e.g., hierarchical modeling) are not considered
- ablation studies / more systematic architecture search are missing
- it is not clear that the drop in accuracy can be justified by the potential efficiency gains (also see details in R3 -> no author response to them)
| test | [
"SyyBD2Vxf",
"ry0IS1Kxz",
"H1xKadcgf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper borrows the idea from dilated CNN and proposes a dilated convolution based module for fast reading comprehension, in order to deal with the processing of very long documents in many reading comprehension tasks. The method part is clear and well-written. The results are fine when the idea is applied to the BiDAF model, but are not very well on the DrQA model.\n\n(1) My biggest concern is about the motivation of the paper: \n\nFirstly, another popular approach to speed up reading comprehension models is hierarchical (coarse-to-fine) processing of passages, where the first step processes sentences independently (which could be parallelized), then the second step makes predictions over the whole passage by taking the sentence processing results. Examples include , \"Attention-Based Convolutional Neural Network for Machine Comprehension\", \"A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data\", and \"Coarse-to-fine question answering for long documents\"\n\nThis paper does not compare to the above style of approach empirically, but the hierarchical approach seems to have more advantages and seems a more straightforward solution. \n\nSecondly, many existing works on multiple passage reading comprehension (or open-domain QA as often named in the papers) found that dealing with sentence-level passages could result in better (or on par) results compared with working on the whole documents. Examples include \"QUASAR: Datasets for question answering by search and reading\", \"SearchQA: A new q&a dataset augmented with context from a search engine\", and \"Reinforced Ranker-Reader for Open-Domain Question Answering\". If in many applications the sentence-level processing is already good enough, the motivation of doing speedup over LSTMs seems even waker.\n\nEven on the SQuAD data, the sentence-level processing seems sufficient: as discussed in this paper about Table 5, the author mentioned (at the end of Page 7) that \"the Conv DrQA model only encode every 33 tokens in the passage, which shows that such a small context is ENOUGH for most of the questions\".\n\nMoreover, the proposed method failed to give any performance boost, but resulted in a big performance drop on the better-performed DrQA system. Together with the above concerns, it makes me doubt the motivation of this work on reading comprehension.\n\nI would agree that the idea of using dilated CNN (w/ residual connections) instead of BiLSTM could be a good solution to many online NLP services like document-level classification tasks. Therefore, the motivation of the paper may make more sense if the proposed method is applied to a different NLP task.\n\n(2) A similar concern about the baselines: the paper did not compare with ANY previous work on speeding up RNNs, e.g. \"Training RNNs as Fast as CNNs\". The example work and its previous work also accelerated LSTM by several times without significant performance drop on some RC models (including DrQA).\n\n(3) About the speedup: it could be imaged that the speedup from the usage of dilated CNN largely depends on the model architecture. Considering that the DrQA is a better system on both SQuAD and TriviaQA, the speedup on DrQA is thus more important. However, the DrQA has less usage of LSTMs, and in order to cover a large reception field, the dilated CNN version of DrQA has a 2-4 times speedup, but still works much worse. This makes the speedup less impressive.\n\n(4) It seems that this paper was finished in a rush. 
The experimental results are not well explained and there is not enough analysis of the results.\n\n(5) I do not quite understand the reason for the big performance drop on DrQA. Could you please provide more explanations and intuitions?",
"The paper proposes a simple dilated convolutional network as drop-in replacements for recurrent networks in reading comprehension tasks. The first advantage of the proposed model is short response time due to parallelism of non-sequential output generation, proved by experiments on the SQuAD dataset. The second advantage is its potentially better representation, proved by better results compared to models using recurrent networks on the TriviaQA dataset.\n\nThe idea of using dilated convolutional networks as drop-in replacements for recurrent networks should have more value than just reading comprehension tasks. The paper should stress on this a bit more. The paper also lacks discussion with other models that use dilated convolution in different ways, such as WaveNet[1].\n\nIn general, the proposed model has novelty. The experimental results also sufficiently demonstrate the proposed advantages of the model. Therefore I recommend acceptance for it.\n\n[1] Oord, Aaron van den, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. \"Wavenet: A generative model for raw audio.\" arXiv preprint arXiv:1609.03499 (2016).",
"This paper proposes a convnet-based neural network architecture for reading comprehension and demonstrates reasonably good performance on SQuAD and TriviaQA with a great speed-up.\n\nThe proposed architecture combines a few recent DL techniques: residual networks, dilated convolutions and gated linear units.\n\nI understand the motivation that ConvNet has a great advantage of easing parallelization and thus is worth exploring. However, I think the proposed architecture in this paper is less motivated. Why is GLU chosen? Why is dilation used? According to Table 4, dilation is really not worth that much and GLU seems to be significantly better than ReLU, but why?\n\nThe architecture search (Table 3 and Figure 4) seems to quite arbitrary. I would like to see more careful architecture search and ablation studies. Also, why is Conv DrQA significantly worse than DrQA while Conv BiDAF can be comparable to BiDAF?\n\nI would like to see more explanations of Figure 4. How important is # of layers and residual connections?\n\nMinor:\n- It’d be helpful to add the formulation of gated linear units and residual layers. \n- It is necessary to put Table 5 in the main paper instead of Appendix. These are still the main results of the paper."
] | [
4,
7,
5
] | [
4,
3,
4
] | [
"iclr_2018_HJRV1ZZAW",
"iclr_2018_HJRV1ZZAW",
"iclr_2018_HJRV1ZZAW"
] |
iclr_2018_BJMuY-gRW | Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs | We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task. Finally, we show how performance can be improved with an attention mechanism which fully exploits the parse chart, by attending over all possible subspans of the sentence. | rejected-papers | Though the general direction is interesting and relevant to ICLR, the novelty is limited. As reviewers point out it is very similar to Le & Zuidema (2015), with few modifications (using LSTM word representations, a different type of pooling). However, it is not clear if they are necessary as there is no direct comparison (e.g., using a different type of pooling). Overall, though the submission is generally solid, it does not seem appropriate for ICLR.
+ solid
+ well written
- novelty limited
- relation to Le & Zuidema is underplayed | train | [
"HkMU7q5lf",
"HJwHbiqlG",
"SyTH-BCxz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to jointly learning a semantic objective and inducing a binary tree structure for word composition, which is similar to (Yogatama et al, 2017). Differently from (Yogatama et al, 2017), this paper doesn’t use reinforcement learning to induce a hard structure, but adopts a chart parser manner and basically learns all the possible binary parse trees in a soft way. \n\nOverall, I think it is really an interesting direction and the proposed method sounds reasonable. However, I am concerned about the following points: \n\n- The improvements are really limited on both the SNLI and the Reverse Dictionary tasks. (Yogatama et al, 2017) demonstrate results on 5 tasks and I think it’d be helpful to present results on a diverse set of tasks and see if conclusions can generally hold. Also, it would be much better to have a direct comparison to (Yogatama et al, 2017), including the performance and also the induced tree structures.\n\n- The computational complexity of this model shouldn’t be neglected. If I understand it correctly, the model needs to compute O(N^3) LSTM compositions. This should be at least discussed in the paper. And I am not also sure how hard this model is being converged in all experiments (compared to LSTM or supervised tree-LSTM).\n\n- I am wondering about the effects of the temperature parameter t. Is that important for training?\n\nMinor:\n- What is the difference between LSTM and left-branching LSTM?\n- I am not sure if the attention overt chart is a highlight of the paper or not. If so, better move that part to the models section instead of mention it briefly in the experiments section. Also, if any visualization (over the chart) can be provided, that’d be helpful to understand what is going on. \n",
"Summary: The paper proposes to use the CYK chart-based mechanism to compute vector representations for sentences in a bottom-up manner as in recursive NNs. The key idea is to maintain a chart to take into account all possible spans. The paper also introduces an attention method over chart cells. The experimental results show that the propped model outperforms tree-lstm using external parsers.\n\nComment: I kinda like the idea of using chart, and the attention over chart cells. The paper is very well written.\n- My only concern about the novelty of the paper is that the idea of using CYK chart-based mechanism is already explored in Le and Zuidema (2015).\n- Le and Zudema use pooling and this paper uses weighted sum. Any differences in terms of theory and experiment?\n- I like the new attention over chart cells. But I was surprised that the authors didn’t use it in the second experiment (reverse dictionary).\n- In table 2, it is difficult for me to see if the difference between unsupervised tree-lstm and right-branching tree-lstm (0.3%) is “good enough”. In which cases the former did correctly but the latter didn’t?\n- In table 3, what if we use the right-branching tree-lstm with attention?\n- In table 4, why do Hill et al lstm and bow perform much better than the others?\n",
"The paper presents a model titled the \"unsupervised tree-LSTM,\" in which the authors mash up a dynamic-programming chart and a recurrent neural network. As far as I can glean, the topology of the neural network is constructed using the chart of a CKY parser. When combining different constituents, an energy function is computed (equation 6) and the resulting energies are passed through a softmax. The architecture achieves impressive results on two tasks: SNLI and the reverse dictionary of Hill et al. (2016).\n\nOverall, I found the paper deeply uninspired. The authors downplay the similarity of their paper to that of Le and Zuidema (2015), which I did not appreciate. It's true that Le and Zuidema take a parse forest from an existing parser, but it still contains an exponential number of trees, as does the work in here. Note that exposition in Le and Zuidema (2015) discusses the pruned case as well, i.e., a compete parse forest. The authors of this paper simply write \"Le and Zuidema (2015) propose a model that takes as input a parse forest from an external parser, in order to deal with uncertainty.\" I would encourage the authors to revisit Le and Zuidema (2015), especially section 3.2, and consider the technical innovations over the existing work. I believe the primary difference (other using an LSTM instead of a convnet) is to replace max-pooling with softmax-pooling. Do these two architectural changes matter? The experiments offer no empirical comparison. In short, the insight of having an end-to-end differentiable function based on a dynamic-programming chart is pretty common -- the idea is in the air. The authors provide yet another instantiation of such an approach, but this time with an LSTM. \n\nThe technical exposition is also relatively poor. The authors could have expressed their network using a clean recursion, following the parse chart, but opted not to, and, instead, provided a round-about explanation in English. Thus, despite the strong results, I would not like to see this work in the proceedings, due to the lack of originality and poor technical discussion. If the paper were substantially cleaned-up, I would be willing to increase my rating. "
] | [
5,
6,
4
] | [
4,
4,
4
] | [
"iclr_2018_BJMuY-gRW",
"iclr_2018_BJMuY-gRW",
"iclr_2018_BJMuY-gRW"
] |
iclr_2018_B1kIr-WRb | LEARNING SEMANTIC WORD RESPRESENTATIONS VIA TENSOR FACTORIZATION | Many state-of-the-art word embedding techniques involve factorization of a cooccurrence based matrix. We aim to extend this approach by studying word embedding techniques that involve factorization of co-occurrence based tensors (N-way arrays). We present two new word embedding techniques based on tensor factorization and show that they outperform common methods on several semantic NLP tasks when given the same data. To train one of the embeddings, we present a new joint tensor factorization problem and an approach for solving it. Furthermore, we modify the performance metrics for the Outlier Detection Camacho-Collados & Navigli (2016) task to measure the quality of higher-order relationships that a word embedding captures. Our tensor-based methods significantly outperform existing methods at this task when using our new metric. Finally, we demonstrate that vectors in our embeddings can be composed multiplicatively to create different vector representations for each meaning of a polysemous word. We show that this property stems from the higher order information that the vectors contain, and thus is unique to our tensor based embeddings. | rejected-papers | The reviewers are concerned that the evaluation quality is not sufficient to convince readers that the proposed embedding method is indeed superior to alternatives. The authors attempted to address these comments in a subsequent revision, but still, e.g., the evaluation is only intrinsic or on contrived problems. Given the limited novelty of the approach (it is a fairly straightforward generalization of Levy and Goldberg's factorization of PPMI matrix; the factorization is not new per se as well), the quality of experiments and analysis should be improved.
+ the paper is well written
- novelty is moderate
- better evaluation and analysis are necessary
| train | [
"HJR9ENn4G",
"r1aR0zMgf",
"Hy4sVp_lf",
"rkyEiZKef",
"HywV8xJNz",
"H1UafWrXG",
"SyX1tqVMG",
"r1wGD94GM",
"B1Qh854MM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Hi Reviewer 3,\n\nWe see your point about needing to compare against another tensor method. While we are unaware of another embedding method that directly compares to ours (semantically-focused tensor factorization-based embedding), we see the utility in comparing against another embedding method that utilizes tensor factorization. \nSo, we implemented HOSG from the fourth paper you mentioned as it was the closest to our approach -- still an unsupervised word embedding. We used the best method found in the paper -- context window of 5 and using third order factorization of the positional third order tensor as it performed better than SG more consistently in their experiments, using the recommended hyperparameters and the same dataset we used in the original paper for a fair comparison. The results on all tasks are linked below in an anonymous data dump, including the results of our embeddings and baselines for easy comparison.\n\nhttps://pastebin.com/Skyfa44v\n\nThe results are as we expected -- it outperforms both our embeddings and the baselines at the synactic-based PoS task when the greatest amount of training data is presented to the supervised model (HOSG also performs best at the MTurk WS dataset). This makes sense since it was motivated in the original paper that this embedding would be more focused on encoding syntactic information. Still, we see our embeddings continue to outperform the baseline even at the synactic task when supervised data is poor (indicating information accessibility) and at semantically-focused tasks like OD(n) and Sentiment Analysis.\n\nIt is also worth noting that it makes sense that HOSG does not outperform SG on OD3 since it is not actually being trained using information about groups of 3 words (which is what OD3 is testing for), but rather using the third tensor dimension to augment the syntactic positions in the context matrix.\n\nWe can of course include these results in the final version of the paper if need be. Let us know if these results change your stance on the paper at all. Also, if other reviewers were concerned about other tensor-based embeddings, please consider the discussion in this comment.\n\nThank you.",
"In this paper, the authors consider symmetric (3rd order) CP decomposition of a PPMI tensor M (from neighboring triplets), which they call CP-S. Additionally, they propose an extension JCP-S, for n-order tensor decompositions. This is then compared with random, word2vec, and NNSE, the latter of two which are matrix factorization based (or interpretable) methods. The method is shown to be superior in tasks of 3-way outlier detection, supervised analogy recovery, and sentiment analysis. Additionally, it is evaluated over the MEN and Mturk datasets.\n\n\nFor the JCP-S model, the loss function is unclear to me. L is defined for 3rd order tensors only; how is the extended to n > 3? Intuitively it seems that L is redefined, and for, say, n = 4, the model is M(i,j,k,n) = \\sum_1^R u_ir u_jr u_kr u_nr. However, the statement \"since we are using at most third order tensors in this work\" I am further confused. Is it just that JCP-S also incorporates 2nd order embeddings? I believe this requires clarification in the manuscript itself.\n\nFor the evaluations, there are no other tensor-based methods evaluated, although there exist several well-known tensor-based word embedding models existing:\n\nPengfei Liu, Xipeng Qiu∗ and Xuanjing Huang, Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model, IJCAI 2015\n\nJingwei Zhang and Jeremy Salwen, Michael Glass and Alfio Gliozzo. Word Semantic Representations using Bayesian Probabilistic Tensor Factorization, EMNLP 2014\n\nMo Yu, Mark Dredze, Raman Arora, Matthew R. Gormley, Embedding Lexical Features via Low-Rank Tensors\n\nto name a few via quick googling.\n\nAdditionally, since it seems the main benefit of using a tensor-based method is that you can use 3rd order cooccurance information, multisense embedding methods should also be evaluated. There are many such methods, see for example \n\nJiwei Li, Dan Jurafsky, Do Multi-Sense Embeddings Improve Natural Language Understanding?\n\nand citations within, plus quick googling for more recent works.\n\nI am not saying that these works are equivalent to what the authors are doing, or that there is no novelty, but the evaluations seem extremely unfair to only compare against matrix factorization techniques, when in fact many higher order extensions have been proposed and evaluated, and especially so on the tasks proposed (in particular the 3-way outlier detection). \n\nObserve also that in table 2, NNSE gets the highest performance in both MEN and MTurk. Frankly this is not very surprising; matrix factorization is very powerful, and these simple word similarity tasks are well-suited for matrix factorization. So, statements like \"as we can see, our embeddings very clearly outperform the random embedding at this task\" is an unnecessary inflation of a result that 1) is not good and 2) is reasonable to not be good. \n\nOverall, I think for a more sincere evaluation, the authors need to better pick tasks that clearly exploit 3-way information and compare against other methods proposed to do the same.\n\nThe multiplicative relation analysis is interesting, but at this point it is not clear to me why multiplicative is better than additive in either performance or in giving meaningful interpretations of the model. \n\nIn conclusion, because the novelty is also not that big (CP decomposition for word embeddings is a very natural idea) I believe the evaluation and analysis must be significantly strengthened for acceptance. ",
"The paper proposes to extend the usual PPMI matrix factorization (Levy and Goldberg, 2014) to a (3rd-order) PPMI tensor factorization. The paper chooses symmetric CP decomposition so that word representations are tied across all three views. The MSE objective (optionally interpolated with a 2nd-order tensor) is optimized incrementally by SGD. \n\nThe paper's most clear contribution is the observation that the objective results in multiplicative compositionality of vectors, which indeed does not seem to hold in CBOW. \n\nWhile the paper reports superior performance, the empirical claims are not well substantiated. It is *not* true that given CBOW, it's not important to compare with SGNS and GloVe. In fact, in certain cases such as unsupervised word analogy, SGNS is clearly and vastly superior to other techniques (Stratos et al., 2015). The word similarity scores are also generally low: it's easy to achieve >0.76 on MEN using the plain PPMI matrix factorization on Wikipedia. So it's hard to tell if it's real improvement. \n\nQuality: Borderline. The proposed approach is simple and has an appealing compositional feature, but the work is not adequately validated and the novelty is somewhat limited. \n\nClarity: Clear.\n\nOriginality: Low-rank tensors have been used to derive features in many prior works in NLP (e.g., Lei et al., 2014). The paper's particular application to learning word embeddings (PPMI factorization), however, is new although perhaps not particularly original. The observation on multiplicative compositionality is the main strength of the paper.\n\nSignificance: Moderate. For those interested in word embeddings, this work suggests an alternative training technique, but it has some issues (described above). ",
"The paper presents the word embedding technique which consists of: (a) construction of a positive (i.e. with truncated negative values) pointwise mutual information order-3 tensor for triples of words in a sentence and (b) symmetric tensor CP factorization of this tensor. The authors propose the CP-S (stands for symmetric CP decomposition) approach which tackles such factorization in a \"batch\" manner by considering small random subsets of the original tensor. They also consider the JCP-S approach, where the ALS (alternating least squares) objective is represented as the joint objective of the matrix and order-3 tensor ALS objectives. The approach is evaluated experimentally on several tasks such as outlier detection, supervised analogy recovery, and sentiment analysis tasks.\n\nCLARITY: The paper is very well written and is easy to follow. However, some implementation details are missing, which makes it difficult to assess the quality of the experimental results.\n\nQUALITY: I understand that the main emphasis of this work is on developing faster computational algorithms, which would handle large scale problems, for factorizing this tensor. However, I have several concerns about the algorithms proposed in this paper:\n\n - First of all, I do not see why using small random subsets of the original tensor would give a desirable factorization. Indeed, a CP decomposition of a tensor can not be reconstructed from CP decompositions of its subtensors. Note that there is a difference between batch methods in stochastic optimization where batches are composed of a subset of observations (which then leads to an approximation of desirable quantities, e.g. the gradient, in expectation) and the current approach where subtensors are considered as batches. I would expect some further elaboration of this question in the paper. Although similar methods appeared in the tensor literature before, I don't see any theoretical ground for their correctness.\n\n - Second, there is a significant difference between the symmetric CP tensor decomposition and the non-negative symmetric CP tensor decomposition. In particular, the latter problem is well posed and has good properties (see, e.g., Lim, Comon. Nonengative approximations of nonnegative tensors (2009)). However, this is not the case for the former (see, e.g., Comon et al., 2008 as cited in this paper). Therefore, (a) computing the symmetric and not non-negative symmetric decomposition does not give any good theoretical guarantees (while achieving such guarantees seems to be one of the motivations of this paper) and (b) although the tensor is non-negative, its symmetric factorization is not guaranteed to be non-negative and further elaboration of this issue seem to be important to me.\n\n - Third, the authors claim that one of their goals is an experimental exploration of tensor factorization approaches with provable guarantees applied to the word embedding problem. This is an important question that has not been addressed in the literature and is clearly a pro of the paper. However, it seems to me that this goal is not fully implemented. Indeed, (a) I mentioned in the previous paragraph the issues with the symmetric CP decomposition and (b) although the paper is motivated by the recent algorithm proposed by Sharan&Valiant (2017), the algorithms proposed in this paper are not based on this or other known algorithms with theoretical guarantees. 
This is therefore confusing and I would be interested in the author's point of view on this issue.\n\n - Further, the proposed joint approach, where the second and third order information are combined, requires further analysis. Indeed, in the current formulation the objective is completely dominated by the order-3 tensor factor, because it contributes O(d^3) terms to the objective vs O(d^2) terms contributed by the matrix part. It would be interesting to see further elaboration of the pros and cons of such a problem formulation.\n\n - Minor comment. In the shifted PMI section, the authors mention the parameter alpha and set specific values of this parameter based on experiments. However, I don't think that enough information is provided, because, given the author's approach, the value of this parameter most probably depends on other parameters, such as the batch size.\n\n - Finally, although the empirical evaluation is quite extensive and outperforms the state-of-the-art, I think it would be important to compare the proposed algorithm to other tensor factorization approaches mentioned above. \n\nORIGINALITY: The idea of using a pointwise mutual information tensor for word embeddings is not new, but the authors fairly cite all the relevant literature. My understanding is that the main novelty is the proposed tensor factorization algorithm and extensive experimental evaluation. However, such batch approaches for tensor factorization are not new and I am quite skeptical about their correctness (see above). The experimental evaluation presents indeed interesting results. However, I think it would also be important to compare to other tensor factorization approaches. I would also be quite interested to see the performance of the proposed algorithm for different values of parameters (such as the batch size).\n\nSIGNIFICANCE: I think the paper addresses a very interesting problem and a significant amount of work is done towards the evaluation, but there are some further important questions that should be answered before the paper can be published. To summarize, the following are the pros of the paper:\n\n - clarity and good presentation;\n - good overview of the related literature;\n - extensive experimental comparison and good experimental results.\n\nWhile the following are the cons:\n\n - the mentioned issues with the proposed algorithm, which in particular does not have any theoretical guarantees;\n - lack of details on how experimental results were obtained, in particular, lack of the details on the values of the free parameters in the proposed algorithm;\n - lack of comparison to other tensor approaches to the word embedding problem (i.e. other algorithms for the tensor decomposition subproblem);\n - the novelty of the approach is somewhat limited, although the idea of the extensive experimental comparison is good.\n\n\n",
"After reading the revision I cannot change my rating. My biggest issue is still the lack of comparison against a tensor-based, or at least a multisense embedding. The papers I listed were after some quick googling and I am still unclear as to why they are unacceptable comparisons (except perhaps the second one). If you can show your work outperforms these other works, agreed that it’s not apples to apples but it’s still much closer than comparing a tensor based approach against a slew of matrix based approaches, of which one can never expect promising behavior on the contextual tasks you listed. ",
"Hi all, \nWe have uploaded the edited version of our paper. Based on your feedback, we included more detail in hope of clarify the utility of our work. \nWe have now adequately revised related tensor factorization literature and expanded the related work section. On the evaluation front, the major changes to the paper include:\n\n1) We replaced the neural network task with supervised part of speech classification based on the word's embedding. The reason we did this was because we noticed a troubling trend when fine-tuning the parameters of the network: the random embedding was achieving over 90% accuracy *on the test set* after sufficient training. This is higher than some of the best results we were getting for SGNS, GloVe, etc.\nWe believe this to be because of problems with the construction of the Google analogy dataset. There are 29 instances of \"a : b :: Europe : Euro\", so the neural network can simply learn the mapping Europe -> Euro, without even taking the words \"a\" and \"b\" into account. (Further, there are no instances of any \"a : b :: Europe : d\" for any d != Euro). We believe this is what the NN was doing for the random embedding, simply memorizing the word->word mapping without taking the rest of the analogy query into account. \nBecause the random embedding (which clearly encodes no information) was able to outperform well-established baselines, we no longer believe it should be used as a measure of the quality of information encoded in an embedding. Thus, rather than have just one fewer evaluation task, we decided to replace it with the different, simpler one of PoS classification.\n\n2) Added SGNS and GloVe as baselines on the same dataset for a more robust comparison against the state-of-the-art techniques.\n\n3) Re-trained *all* of our embeddings on a more recent dump of Wikipedia (mid-2017). Our previous results were trained on a 2008 dump of Wikipedia which we initially chose because it was readily available and already parsed. The more recent data should provide fresher/more recent word representations. We also decreased the minimum wordcount requirement from 2,000 times to 1,000 times, increasing the vocabulary count while still removing noisy words.\n\n4) Included two more wordsim datasets (RW, SimLex999) to add more depth to the evaluation. \n\n5) Included explicit hyperparameter settings we used and discussed how we found them.\n\nDespite these rather large changes, the story of the evaluation remains fairly unchanged -- our embeddings continue to perform better at semantic tasks and in more data-sparse domains, indicating that they tend to encode more semantic information more readily, even when compared against state-of-the-art baselines including SGNS and GloVe.\nWe hope that these updated results are much more convincing than those in our original paper. We look forward to your feedback based on these updates! Thank you for your time and consideration.\n",
"The reviewer's comments are marked with a single * and our responses are marked with double **\n\n* \"Is it just that JCP-S also incorporates 2nd order embeddings?\"\n** Exactly correct. We apologize for the confusing wording, and will update it in revised versions to be more clear. JCP-S is simultaneously decomposing a second and third order tensor using a single factor matrix. \n\n* No tensor-based baselines\n** Let's go over the mentioned tensor-based approaches (which we also considered when formulating the idea for this paper):\n\"Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model\" - Their best word embedding by far results from making the tensor's third axis... one dimensional? So they are factoring a |V| x |V| x 1 \"tensor\"? This is really just matrix factorization.\n\"Word Semantic Representations using Bayesian Probabilistic Tensor Factorization\" doesn't apply because it uses supervised data, and we are considering unsupervised pre-trained embeddings. While they use the CP decomposition, they do not consider a symmetric decomposition based on an unsupervised corpus, which is the problem we are considering.\n\"Embedding Lexical Features via Low-Rank Tensors\" - While they do use word embeddings and CP decomposition, they are not pre-training generic word embeddings in this paper and thus we cannot use their methods to compare against ours.\n\"Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis\" is focused on creating an embedding to capture syntactic information, whereas ours are more semantically-focused and as such we evaluate our embeddings on a bed of semantic tasks, a testbed on which it would be unfair to compare a syntactic-focused embedding.\n** Thus, the pointed tensor-based approaches to creating word embeddings are relatively orthogonal to our ideas.\n\n* Word similarity result wording\n** We agree that matrix factorization is well-suited to modeling word similarity, and likely optimal in some sense for the task. However, with our comment, we are trying to emphasize the fact that we are including this result for completeness and not taking it heavily. We apologize if the wording sounds arrogant and will change the wording in future iterations of the paper. \n\nAgain, please keep an eye out for a revised version of the paper, which will include further baselines, in particular, other state-of-the-art embedding techniques, which all reviewers seemed to agree that our paper needed. We hope that such updated results will alleviate some of your concerns about the evaluation of our techniques, and further convince you of the utility of our embeddings.\n",
"The reviewer's comments are marked with a single * and our responses are marked with double **\n\n* \"It is *not* true that given CBOW, it's not important to compare with SGNS and GloVe. In fact, in certain cases such as unsupervised word analogy, SGNS is clearly and vastly superior to other techniques (Stratos et al., 2015).\"\n** Thank you for pointing this out. We are training these embeddings on the same data the other embeddings were trained on using the recommended hyperparameters, and will include both SGNS and GloVe as baselines in the updated version of the paper.\n\n* \"The word similarity scores are also generally low: it's easy to achieve >0.76 on MEN using the plain PPMI matrix factorization on Wikipedia. So it's hard to tell if it's real improvement\"\n** We repeat the statement from our paper that word similarity is a poor metric for evaluating the quality of information held in a word embedding, and performance at word embedding is poorly correlated with performance at downstream tasks. As such, we are less worried about performing well at the word similarity task compared to the other tasks shown to be more relevant to the practical use of word embeddings. \n** Also, remember that we are providing a preliminary exploration of this approach, and thus only use a much smaller dataset (but still non-trivial - ~150M tokens) than production-ready approaches such as pre-trained GloVe or word2vec, and thus do not expect our numbers to be directly comparable to other published approaches.\nTo reviewer 2: Please keep an eye out for revised versions of our paper after we have time to consider more experiments. Hopefully, our next form of evaluation will be more compelling after we update with even more of the common baselines. ",
"Comments by reviewer start with a single * and our responses start with double star **\n* \"The main emphasis of this work is .....\"\n** Actually, the main emphasis of our work is exploring the utility of considering higher order approaches than just pairs of words. We make few arguments based on the computational superiority of our work.\n\n** JCP-S has nothing to do with ALS -- this is a misconception by the reviewer. \n* \"I do not see why using small random subsets ....\"\n** See \"Expected Tensor Factorization with Stochastic Gradient Descent\" by T Maehara (2016). We consider the same problem.We believe the reviewer misunderstood the objective we are actually minimizing. If we consider the entire PPMI tensor to be the full \"set of observations\", then we are exactly taking subsets of the full set of observations. Explicitly, if we represent the entire tensor as a list of 4-tuples of the indices and values of the nonzero entries L := [(x1, y1, z1, val1), (x2, y2, z2, val2), ..., (xnnz, ynnz, znnz, valnnz)], then each batch i we consider is a strict subset Li ⊊ L where ∪i Li = L. Because of this, we actually are considering the batch stochastic optimization setting that the reviewer was talking about.\n** We also do not attempt to prove any theoretical guarantees for why this approach works. We simply demonstrate empirically that it improves the quality of certain types of information encoded for specific classes of NLP tasks, and are motivated by the many applications of tensor decompositions to existing ML problems.\n\n* Symmetric and/or non-negative CP Decomposition and related tensor literature.\n** The reviewer’s points are well-taken. We realize that our coverage of the relevant papers on theoretical properties and results on symmetric tensor factorization was rather incomplete and could have been misleading to the reader. We will revise it adequately. In Comon et. al’s 2008 paper and see also https://web.stanford.edu/group/mmds/slides/lim-mmds.pdf - slide 26, the main result states symmetric CP decomp of a symmetric always exists over complex numbers, doesn’t matter if the tensor is non-negative or not. Further (see proposition 5.3 in Comon et al.’s 2008 paper) it also follows that when order < dim, then symmetric CP rank = CP rank generically. While our tensor is real and we are optimizing over the real space, in this case the symmetric CP rank can exceed the dimension (in particular, can be larger than \\binom{k + n -1}{n-1}, k=3, n=size of vocab), nevertheless, as is done in many application papers that factor the third order cumulant, we are merely seeking *a* real symmetric rank-R approximation (may not be the best) to the symmetric tensor. \n** We actually did spend a good amount of time considering the non-negative symmetric CP decomposition, and called it CP-SN. We also tried a non-negative joint symmetric CP decomposition we called JCP-SN. However, in all of the experiments we tried, the performance of these non-negative embeddings never surpassed that of the unconstrained varieties we presented in this paper, so we decided to omit it from the paper in favor of clarity and conciseness. However, we concede that it perhaps would have been smart to acknowledge such experiments in light of the nice theoretical properties of non-negative CP decompositions that you mentioned, although it can still be computationally hard to obtain).\n\n* JCP-S being dominated by the third order tensor\n** Again, we provide no theoretical properties or guarantees of our methods. 
We agree that such research would likely be illuminating and would probably lead to gains and extensions to our approaches, but at this point we are merely providing preliminary empirical results for a new way to encode word information in vectors. \n\n* \"I think it would be important to compare .....\"\n** Regarding Sharan and Valiant [2017], we are merely pointing out that in contrast to the results in that paper, we show that tensor factorization of 3-way co-occurrence can outperform the matrix based approaches on a number of different tasks -- our research was conducted at the same time as theirs, and was in fact not inspired by their work at all.\n** In light of this discussion, it is worth noting that our paper primarily aims to demonstrate the utility of symmetric tensor factorization and encoding higher order information for word embeddings, rather than claim to be the current \"state-of-the-art\" in word embeddings (if such a thing were to exist). If we were to claim the latter, we would have to train on a much larger dataset (production-scale), but we instead aim to show that our methods can be used to encode certain types of information better than those that do not take higher-order co-occurrence information into account.\nWe are considering them heavily for the revised version of our paper (to be uploaded shortly)."
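Editor's note: to make the batching scheme defended above concrete, here is a minimal sketch of entrywise SGD over batches of nonzero PPMI-tensor entries for a rank-R symmetric CP fit. The function name, hyperparameters, and plain squared loss are my assumptions for illustration, not the authors' released code.

    import numpy as np

    def cp_s_sgd(nonzeros, vocab_size, rank=100, lr=0.01, batch_size=1024, epochs=1):
        """Entrywise SGD for a rank-R symmetric CP fit of a sparse PPMI tensor.

        nonzeros: list of (i, j, k, value) tuples, i.e. the list L in the response above.
        Repeated-index corrections (e.g. i == j) are ignored for brevity.
        """
        U = 0.1 * np.random.randn(vocab_size, rank)  # single shared (symmetric) factor matrix
        data = list(nonzeros)
        for _ in range(epochs):
            np.random.shuffle(data)
            for start in range(0, len(data), batch_size):
                for i, j, k, val in data[start:start + batch_size]:
                    pred = np.sum(U[i] * U[j] * U[k])   # sum_r u_ir * u_jr * u_kr
                    err = pred - val                    # residual of the squared loss
                    gi, gj, gk = err * U[j] * U[k], err * U[i] * U[k], err * U[i] * U[j]
                    U[i] -= lr * gi
                    U[j] -= lr * gj
                    U[k] -= lr * gk
        return U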
] | [
-1,
5,
5,
5,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
5,
5,
-1,
-1,
-1,
-1,
-1
] | [
"HywV8xJNz",
"iclr_2018_B1kIr-WRb",
"iclr_2018_B1kIr-WRb",
"iclr_2018_B1kIr-WRb",
"SyX1tqVMG",
"iclr_2018_B1kIr-WRb",
"r1aR0zMgf",
"Hy4sVp_lf",
"rkyEiZKef"
] |
iclr_2018_H113pWZRb | Topology Adaptive Graph Convolutional Networks | Convolution acts as a local feature extractor in convolutional neural networks (CNNs). However, the convolution operation is not applicable when the input data is supported on an irregular graph such as social networks, citation networks, or knowledge graphs. This paper proposes the topology adaptive graph convolutional network (TAGCN), a novel graph convolutional network that generalizes CNN architectures to graph-structured data and provides a systematic way to design a set of fixed-size learnable filters to perform convolutions on graphs. The topologies of these filters are adaptive to the topology of the graph when they scan the graph to perform convolution, replacing the square filter for the grid-structured data in traditional CNNs. The outputs are the weighted sum of these filters’ outputs, extracting both vertex features and the strength of correlation between vertices. It
can be used with both directed and undirected graphs. The proposed TAGCN not only inherits the properties of convolutions in CNN for grid-structured data, but it is also consistent with convolution as defined in graph signal processing. Further, as no approximation to the convolution is needed, TAGCN exhibits better performance than existing graph-convolution-approximation methods on a number
of data sets. As only polynomials of degree two of the adjacency matrix are used, TAGCN is also computationally simpler than other recent methods. | rejected-papers | The authors provide an extension to the GCNs of Kipf and Welling in order to incorporate information about higher order neighborhoods. The extension is well motivated (and though I agree that it is not a trivial modification of the K&W approach to the second order, thanks to the authors for the clarification). The improvements are relatively moderate.
Pros:
-- The approach is well motivated
-- The paper is clearly written
Cons:
-- The originality and impact (as well as motivation) are questioned by the reviewers
| train | [
"Hkqc-xL4z",
"ry6GiiKlz",
"H1kIb-Kef",
"r1XXuJcgM",
"rkSkNfj7G",
"ryXsYOBXz",
"Bk6GVtrmf",
"B1z3_FBXf",
"H1tKKKBXG",
"rJxc2drmG",
"B1kA7KrQz",
"S1-ItYrQM",
"SJvZYKHXG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Thank you for pointing out that the Arxiv paper was updated recently. Please note that I did not mean to require that the two mentioned articles should have been referenced and discussed in your initial submission.\n\nI understand that there are technical differences between TAGCN and higher-order extensions of GCN, but still both approaches pursue the goal to incorporate information from nodes at a farther distance from the reference node and are very similar in that sense. Thank you for adding additional experimental results on this. I have changed my rating back to its original value.",
"The authors propose a new CNN approach to graph classification that generalizes previous work. Instead of considering the direct neighborhood of a vertex in the convolution step, a filter based on outgoing walks of increasing length is proposed. This incorporates information from more distant vertices in one propagation step.\n\nThe proposed idea is not exceptional original, but the paper has several strong points:\n\n* The relation to previous work is made explicit and it is show that several previous approaches are generalized by the proposed one.\n* The paper is clearly written and well illustrated by figures and examples. The paper is easy to follow although it is on an adequate technical level.\n* The relation between the vertex and spectrum domain is well elaborated and nice (although neither important for understanding nor implementing the approach).\n* The experimental evaluation appears to be sound. A moderate improvement compared to other approaches is observed for all data sets.\n\nIn summary, I think the paper can be accepted for ICLR.\n----------- EDIT -----------\nAfter reading the publications mentioned by the other reviewers as well as the following related contributions\n\n* Network of Graph Convolutional Networks Trained on Random Walks (under review for ICLR 2018)\n* Graph Convolution: A High-Order and Adaptive Approach, Zhenpeng Zhou, Xiaocheng Li (arXiv:1706.09916)\n\nI agree that the relation to previous work is not adequately outlined. Therefore I have modified my rating accordingly.",
"In this paper a new neural network architecture for semi-supervised graph classification is proposed. The new construction builds upon graph polynomial filters and utilizes them on each successive layer of the neural network with ReLU activation functions.\n\nIn my opinion writing of this paper requires major revision. The first 8 pages mostly constitute a literature review and experimental section provides no insights about the performance of the TAGCN besides the slight improvement of the Cora, Pubmed and Citeseer benchmarks.\n\nThe one layer analysis in sections 2.1, 2.2 and 2.3 is simply an explanation of graph polynomial filters, which were previously proposed and analyzed in cited work of Sandryhaila and Moura (2013). Together with the summary of other methods and introduction, it composes the first 8 pages of the paper. I think that the graph polynomial filters can be summarized in much more succinct way and details deferred to the appendix for interested reader. I also recommend stating which ideas came from the Sandryhaila and Moura (2013) work in a more pronounced manner.\n\nNext, I disagree with the statement that \"it is not clear how to keep the vertex local property when filtering in the spectrum domain\". Graph Laplacian preserves the information about connectivity of the vertices and filtering in the vertex domain can be done via polynomial filters in the Fourier domain. See Eq. 18 and 19 in [1].\n\nFinally, I should say that TAGCN idea is interesting. I think it can be viewed as an extension of the GCN (Kipf and Welling, 2017), where instead of an adjacency matrix with self connections (i.e. first degree polynomial), a higher degree graph polynomial filter is used on every layer (please correct me if this comparison is not accurate). With more experiments and interpretation of the model, including some sort of multilayer analysis, this can be a good acceptance candidate.\n\n\n[1] David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst.\nThe emerging field of signal processing on graphs: Extending high-dimensional data analysis to\nnetworks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.",
"The paper introduces Topology Adaptive GCN (TAGCN) to generalize convolutional\nnetworks to graph-structured data.\nI find the paper interesting but not very clearly written in some sections,\nfor instance I would better explain what is the main contribution and devote\nsome more text to the motivation. Why is the proposed approach better than the\npreviously published ones, and when is that there is an advantage in using it?\n\nThe main contribution seems to be the use of the \"graph shift\" operator from\nSandryhaila and Moura (2013), which closely resembles the one from\nShuman et al. (2013). It is actually not very well explained what is the main\ndifference.\n\nEquation (2) shows that the learnable filters g are operating on the k-th power\nof the normalized adjacency matrix A, so when K=1 this equals classical GCN\nfrom T. Kipf et al.\nBy using K > 1 the method is able to leverage information at a farther distance\nfrom the reference node.\n\nSection 2.2 requires some polishing as I found hard to follow the main story\nthe authors wanted to tell. The definition of the weight of a path seems\ndisconnected from the main text, ins't A^k kind of a a diffusion operator or\nrandom walk?\nThis makes me wonder what would be the performance of GCN when the k-th power\nof the adjacency is used.\n\nI liked Section 3, however while it is true that all methods differ in the way they\ndo the filtering, they also differ in the way the input graph is represented\n(use of the adjacency or not).\n\nExperiments are performed on the usual reference benchmarks for the task and show\nsensible improvements with respect to the state-of-the-art. TAGCN with K=2 has\ntwice the number of parameters of GCN, which makes the comparison not entirely\nfair. Did the author experiment with a comparable architecture?\nAlso, how about using A^2 in GCN or making two GCN and concatenate them in\nfeature space to make the representational power comparable?\n\nIt is also known that these benchmarks, while being widely used, are small and\nresult in high variance results. The authors should report statistics over\nmultiple runs.\nGiven the systematic parameter search, with reference to the actual validation\n(or test?) set I am afraid there could be some overfitting. It is quite easy\nto probe the test set to get best performance on these benchmarks.\n\nAs a minor remark, please make figures readable also in BW.\n\nOverall I found the paper interesting but also not very clear at pointing out\nthe major contribution and the motivation behind it. At risk of being too reductionist:\nit looks as learning a set of filters on different coordinate systems given\nby the various powers of A. GCN looks at the nearest neighbors and the paper\nshows that using also the 2-ring improves performance.\n",
"We’ve uploaded a new version with the revised part in blue font. In the revised version, following the request of the reviewers, we make it clearer about the motivations and main contributions and move one subsection to the appendix. Besides, we thank the reviewer for pointing out one related paper and refer it in this new version.",
"1) \"I find the paper interesting but not very clearly written in some sections, for instance I would better explain what is the main contribution and devote some more text to the motivation. Why is the proposed approach better than the previously published ones, and when is that there is an advantage in using it?\"\n\nReply: Thank you for your suggestion. We now provide additional explanation of the main contributions and strengths of our approach in the revised version. This paper proposes a modification to the graph convolution step in CNNs that is particularly relevant for graph structured data. Our proposed convolution is graph-based convolution and draws on techniques from graph signal processing. We define rigorously the graph convolution operation on the vertex domain as multiplication by polynomials of the graph adjacency matrix, which is consistent with the notion of convolution in graph signal processing. In graph signal processing, polynomials of the adjacency matrix are graph filters, extending to graph based data the usual concept of filters in traditional time or image based signal processing. Thus, comparing ours with existing work of graph CNNs, our paper provides a solid theoretical foundation for our proposed convolution step instead of an ad-hoc approach to convolution in CNNs for graph structured data. \n\nFurther, our method avoids computing the spectrum of the graph Laplacian as in (Bruna et al. 2014), or approximating the spectrum using high degree Chebyshev polynomials of the graph Laplacian matrix (in Defferrard et al. 2016, it is suggested that one needs a 25th degree Chebyshev polynomial to provide a good approximation to the graph Laplacian spectrum) or using high degree Cayley polynomials of the graph Laplacian matrix (in Levie et al. 2017, 12th degree Cayley polynomials are needed). We also clarify that the GCN method in Kipf & Welling 2017 is a first order approximation of the Chebyshev polynomials approximation in Defferrard et al. 2016, which is very different from our method. Our method has a much lower computational complexity than the complexity of the methods proposed in Bruna et al. 2014, Defferrard et al. 2016, Levie et al. 2017, since our method only uses polynomials of the adjacency matrix with maximum degree 2 as shown in our experiments. Finally, the method that we propose exhibits better performance than existing methods. \n\nIn the revised version, we have followed your suggestion and elaborated on the above two points as our main contributions and devoted more text to motivating our method.\n\nBruna, J., Zaremba, W., Szlam, A., & LeCun, Y. Spectral networks and locally connected networks on graphs, ICLR2013\nDefferrard, M., Bresson, X., & Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS2016\nGraph Convolutional Neural Networks with Complex Rational Spectral Filters, submitted to ICLR18\nKipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. ICLR2017\n\n",
"Our method is able to leverage information at a farther distance on the graph than the GCN of Kipf & Welling 2017. However, ours is not a simple generalization of GCN. Below, we clarify the fundamental difference between our method and the GCN methodology in Kipf & Welling 2017 if we extended the latter to a higher order:\n\nOur first comment is that the graph convolution in GCN is defined as a first order Chebyshev polynomial of the graph Laplacian matrix, which is an approximation to the graph convolution defined in the spectrum domain in Bruna et al. 2014 (see eqn(4)-eqn(8) in Kipf & Welling 2017 for the derivation). In contrast, our graph convolution is rigorously defined as multiplication by polynomials of the graph adjacency matrix; this is not an approximation, rather, it simply is filtering with graph filters as defined and as being consistent with graph signal processing.\n\nThe approximate convolution by Chebyshev polynomials of the Laplacian matrix is defined as \\sum_{k=0}^{K} \\theta_k T_k(L) (eqn(5) in Kipf & Welling 2017), where T_k(L) is the matrix Chebyshev polynomials of degree k, and L is the graph Laplacian matrix. The matrix polynomial T_k(L) is recursively defined as T_k(L) = 2LT_{k-1}(L) – T_{k-2}(L) with T_0(L)=I and T_1(L) = L. This expression is K-localized, i.e., it depends only on nodes that are at maximum K steps away from the central node Kipf & Welling 2017. In Defferrard et al. 2016, 25th-order matrix Chebyshev polynomials are needed (K=25) for the semisupervised classification problem studied therein. Kipf & Welling 2017 adopted a first order matrix Chebyshev polynomial (K=1). By some further approximation, the convolution operator \\sum_{k=0}^{K} \\theta_k T_k(L) is approximated by \\hat{A}, where \\hat{A} is the normalized adjacency matrix of an undirected graph (see eqn(4)-eqn(8) in Kipf & Welling 2017). \n\nNext, we show the difference between our work and the GCN method in Kipf & Welling 2017 when using 2nd order (K=2, 2 steps away from the central node) Chebyshev polynomials of Laplacian matrix following the above method. It has been shown that \\sum_{k=0}^{1} \\theta T_k(L) ≈\\hat{A} in Kipf & Welling 2017, and T_2(L) =2L^2 -I by definition of Chebyshev polynomials. Then, the extension of GCN to the second order Chebyshev polynomials (two steps away from a central node) can be obtained from the original definition in Kipf & Welling 2017 (eqn (5)) as \\sum_{k=0}^{2} \\theta_k T_k(L)= \\hat{A} + 2L^2 -I, which is obviously different from ours. Thus, it is evident that our method is not a simple extension of the GCN method in Kipf & Welling 2017. We apply graph convolution as proposed from basic principles in the graph signal processing, with no approximations involved, while both T. Kipf’s GCN and Bruna et al. 2014, Defferrard et al. 2016, Levie et al. 2017 are based on approximations of convolution defined in the spectrum domain. In our approach, the degree of freedom is the design of the graph filter – its degree and its coefficients. Ours is a principled approach and provides a generic methodology. The performance gains we obtain are the result of capturing the underlying graph structure with no approximation in the convolution operation. \n\nBruna, J., Zaremba, W., Szlam, A., & LeCun, Y. Spectral networks and locally connected networks on graphs, ICLR2013\nDefferrard, M., Bresson, X., & Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. 
In NIPS2016\nGraph Convolutional Neural Networks with Complex Rational Spectral Filters, submitted to ICLR18\nKipf, T. N., & Welling, M. Semi-supervised classification with graph convolutional networks. In ICLR2017\n",
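Editor's note: as a compact summary of the contrast drawn in the response above (my own paraphrase; the coefficient symbols g_k and θ_k are assumed), the two filter families can be written side by side:

    % Vertex-domain polynomial filter used by TAGCN (A is the normalized adjacency matrix):
    y \;=\; \sum_{k=0}^{K} g_k\, A^{k} x, \qquad K = 2 \text{ in the reported experiments.}

    % Chebyshev-based spectral approximation underlying ChebNet/GCN (L is the graph Laplacian):
    y \;\approx\; \sum_{k=0}^{K} \theta_k\, T_k(L)\, x, \qquad T_k(L) = 2L\,T_{k-1}(L) - T_{k-2}(L),\; T_0(L)=I,\; T_1(L)=L.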
"2) \"The main contribution seems to be the use of the \"graph shift\" operator from Sandryhaila and Moura (2013), which closely resembles the one from Shuman et al. (2013). It is actually not very well explained what is the main difference. \" \n\nReply: In Sandryhaila and Moura (2013), the graph shift operator is defined as the adjacency matrix, and the graph convolution operator is defined as multiplication by polynomials of the adjacency matrix, while in Shuman et al. (2013), the graph convolution operator is obtained by the eigendecomposition of the Laplacian matrix. There are in addition, significant differences between using the adjacency matrix or the graph Laplacian as “graph shifts.” The Laplacian matrix is a second order operator (like a second derivative) on the graph, and only applies to undirected graphs. In contrast, the shift operator is a first order operator (like a first-order derivative) on the graph and applies to arbitrary graphs (directed, undirected, or mixed). \n\nIn addition, as indicated above, the previous work defined convolution in the spectral domain, rather than in the node domain, which may require either crude approximations of the spectrum or requires finding the spectrum, a costly operation, with computational complexity O(N^3), where N is the number of graph nodes. Thus, to avoid computing the spectrum of the Laplacian matrix, the spectrum is approximated and the convolution further approximated via different matrix polynomials, such as matrix Chebyshev polynomials in Defferrard et al. 2016 and matrix Cayley polynomials in Levie et al. 2017. To have reasonable approximate accuracy, very high degree of such special polynomials should be needed, which increases both the computation burden and taxes the numerical stability of the procedure. For example, in Defferrard et al. 2016 a Chebyshev polynomial of the Laplacian matrix with degree 25 is needed, and in Levie et al. 2017 a Cayley polynomial of the Laplacian matrix with degree 12 is needed to provide reasonable accuracy to the spectrum. In contrast, in our paper, we do not need computing the spectrum of the adjacency matrix and so no need to resort to these high degree polynomials. The graph filters we use only have a degree 2 and outperform the existing spectral based convolution methods in terms of classification accuracy, while not requiring costly polynomials of much larger degree.\n\n3) \"Equation (2) shows that the learnable filters g are operating on the k-th power of the normalized adjacency matrix A, so when K=1 this equals classical GCN from T. Kipf et al. By using K > 1 the method is able to leverage information at a farther distance from the reference node. \"\n\nReply: We agree with the reviewer that our method is able to leverage information at a farther distance on the graph than the GCN in Kipf & Welling 2017. However, ours is not a simple extension of GCN. In fact, extending GCN to the second order would not lead to our results. In the separate comment (due to space limitation) with title “Differences between the proposed TAGCN and GCN in Kipf & Welling 2017”, we clarify the fundamental difference between our method and the GCN methodology if we extend the latter to a higher order. Thank you for your attention. We have added the corresponding discussion in Section 3 of the revised version.",
"4) \"Finally, I should say that TAGCN idea is interesting. I think it can be viewed as an extension of the GCN (Kipf and Welling, 2017), where instead of an adjacency matrix with self connections (i.e. first degree polynomial), a higher degree graph polynomial filter is used on every layer (please correct me if this comparison is not accurate). With more experiments and interpretation of the model, including some sort of multilayer analysis, this can be a good acceptance candidate. \"\n\nReply: Thank you for your encouraging comments. We have added more interpretation of the model in the revised version. We have also done further experiments for the filter of A^2. As we explained below in the next paragraph, graph convolution in our paper is not simply extending GCN to k-th order. Nevertheless, we implemented A^2 and compared its performance with ours. For the data sets Pubmed, Cora, and Citeseer, the classification accuracies are 79.1 (81.1), 81.7(82.5) and 70.8 (70.9), where the numbers in parentheses are the results obtained with our method. Our method still achieves a noticeable performance advantage over A^2 for the Pubmed and Cora data; in particular, we note the significant performance gain with the Pubmed database that has the largest number of nodes among these three data sets. For multi-layer analysis, we did experiment by further extending the hidden layers. However, there is no performance improvement, which is consistent with the multilayer analysis in Kipf & Welling 2017.\n\nWe agree with the reviewer that our method is able to leverage information at a farther distance on the graph than the GCN (Kipf & Welling 2017). However, ours is not a simple extension of GCN. In fact, extending GCN to second order would not lead to our results. We clarify the fundamental difference between our method and the GCN methodology if we extended the latter to a higher order in a separate comment (due to space limitation) with title “Differences between the proposed TAGCN and GCN in Kipf & Welling 2017”. Thank you for your attention. We have added the corresponding discussion in Section 3 of the revised version.",
"Thank you for the positive comments.\n\n1) We agree with the reviewer that our method is able to leverage information at a farther distance on the graph than the GCN of Kipf & Welling 2017. However, ours is not a simple generalization of GCN. In fact, extending GCN to the second order would not lead to our results. We clarify the fundamental difference between our method and the GCN methodology if we extended the latter to a higher order in the separate comment with title “Differences between the proposed TAGCN and GCN in Kipf & Welling 2017” due to space limitation. Thank you for your attention. We have added the corresponding discussion in Section 3 of the revised version.\n\n2) \"After reading the publications mentioned by the other reviewers as well as the following related contributions\n* Network of Graph Convolutional Networks Trained on Random Walks (under review for ICLR 2018)\n* Graph Convolution: A High-Order and Adaptive Approach, Zhenpeng Zhou, Xiaocheng Li (arXiv:1706.09916)\nI agree that the relation to previous work is not adequately outlined. Therefore I have modified my rating accordingly.\"\n\nWe thank the reviewer for pointing out these two recent works on graph CNN. We would like to point out that our method is substantially different from these two papers. \n\nThe graph convolutions in these two papers are defined based on ad-hoc methods, which do not have the physical meaning of convolution. The first paper concatenates A^k with k from 0 to 6, and the second paper defines {\\tilde A}^k = min{A^k + I,1}. In contrast, our definition of convolution is based on graph signal processing, it is consistent with the convolutional theorem, and, finally, it reduces to classical convolution for the direct circle topology. \n\nThe first paper reports their top 3 performers rather than reporting average performance, while we report an averaged performance over 100 trails. Our performance for Pubmed is still better than theirs even if the comparison is unfair. The second paper did not follow the usual data splitting method and so we cannot compare ours to their performance directly.\n\nWe respectfully disagree with the comment that these two papers are not adequately outlined in the original submission. The first paper is submitted to the same conference ICLR2018 as our paper, so, at the same time – how could we have access to it before hand? Thus, there is no way we could refer to this paper in our original submission. The second paper appeared in arXiv with the title “Graph Convolutional Networks for Molecules,” which was specific to molecules with content that was quite different from its second version. The second version was submitted to arXiv on Oct, 20, becoming only available to the public almost at the same time as we submitted our paper to ICLR2018. Further, there is a major revision between these two versions as we can see on arXiv, and the number of pages increased from less than 5 pages and a half to 8 pages. We thank the reviewer for finding this paper for us.\n\nWe also want to mention that, besides providing a solid foundation for our proposed the graph convolution operation, our method also exhibits better performance due to the fact that no approximation is needed for the convolution operation. Our method outperforms all recently proposed methods on all three datasets. 
In addition, for the Pubmed dataset, which is much larger than the Citeseer and Cora data sets, we have a 2.1% improvement over GCN (Kipf & Welling 2017) and 6.7% improvement over ChebNet (Defferrard et al. 2016). These performance results are averages obtained over 100 Monte Carlo runs. As far as we know and as far as we can determine, our method exhibits the best performance on the Pubmed data not only when compared with all previous available publications, as well as when compared with all papers submitted to ICLR18, see papers below. Also, please note that, as explained by the authors, the last paper listed below fails with the Pubmed data set because of its storage complexity.\n\nGraph Partition Neural Networks for Semi-Supervised Classification, submitted to ICLR18\nAttention-based Graph Neural Network for Semi-supervised Learning, submitted to ICLR18\nStochastic Training of Graph Convolutional Networks, submitted to ICLR18\nGraph Attention Networks, submitted to ICLR18\n",
"1)\t\"In my opinion writing of this paper requires major revision. The first 8 pages mostly constitute a literature review and experimental section provides no insights about the performance of the TAGCN besides the slight improvement of the Cora, Pubmed and Citeseer benchmarks. \"\n\nReply: We have reorganized the paper and added more insights for the proposed TAGCN algorithm. We explain our proposed method in Section 2 and compare it with previous work in Section 3 to emphasize the novelty and differences of our method. We want to emphasize that the adjacency matrix polynomial filter (graph convolution operation) defined on the vertex domain (our method) is totally different from all the existing graph CNN methods available and that define the convolution in the spectrum domain. Thus, our proposed convolution, its computational complexity, and understanding of the choice of the filter size, all need adequate explanations, and these are given in Section 3. Even some of the reviewers seem to misunderstand the GCN method in Kipf & Welling 2017 based on approximations by matrix Chebyshev polynomials in Defferrard et al. 2016 with our method. Thus explaining adequately our method from different perspectives is necessary. We have further described these relationships in Section 3 in the revised version and made the architecture of our method clearer.\n\nAs for performance, our method outperforms all recently proposed methods on all three datasets. In addition, for the Pubmed dataset, which is much larger than the Citeseer and Cora data sets, we have a 2.1% improvement over GCN (Kipf & Welling 2017) and 6.7% improvement over ChebNet (Defferrard et al. 2016). These performance results are averages obtained over 100 Monte Carlo runs. As far as we know and as far as we can determine, our method exhibits the best performance on the Pubmed data not only when compared with all previous available publications, as well as when compared with all papers submitted to ICLR18, see papers below. Also, please note that, as explained by the authors, the last paper listed below fails with the Pubmed data set because of its storage complexity.\n\nGraph Partition Neural Networks for Semi-Supervised Classification, submitted to ICLR18\nAttention-based Graph Neural Network for Semi-supervised Learning, submitted to ICLR18\nStochastic Training of Graph Convolutional Networks, submitted to ICLR18\nGraph Attention Networks, submitted to ICLR18\n\n2) \"The one layer analysis in sections 2.1, 2.2 and 2.3 is simply an explanation of graph polynomial filters, which were previously proposed and analyzed in cited work of Sandryhaila and Moura (2013). Together with the summary of other methods and introduction, it composes the first 8 pages of the paper. I think that the graph polynomial filters can be summarized in much more succinct way and details deferred to the appendix for interested reader. I also recommend stating which ideas came from the Sandryhaila and Moura (2013) work in a more pronounced manner. \"\n\nReply: Thank you for your suggestion. We have moved Section 2.3 to the Appendix following your suggestion. Sections 2.1 and 2.2 explain important concepts in our proposed method: the definition of graph CNN, graph filter size, as well as how to understand graph convolution as a local feature extractor, which are important for the understanding of our graph CNN and do not appear in Sandryhaila and Moura (2013). 
We better describe these subsections in the revised version and make them more succinct and clearer.\n\n3) \"Next, I disagree with the statement that \"it is not clear how to keep the vertex local property when filtering in the spectrum domain\". Graph Laplacian preserves the information about connectivity of the vertices and filtering in the vertex domain can be done via polynomial filters in the Fourier domain. See Eq. 18 and 19 in [1]. \"\n\n[1] David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.\n\nReply: Thank you for pointing out this. We have removed this sentence and referred to [1] in the revised version.",
"8) \"It is also known that these benchmarks, while being widely used, are small and result in high variance results. The authors should report statistics over multiple runs.\"\n\nReply: Thank you for pointing out this. In the original submission, all the results are averaged performance over 100 Monte Carlo runs. We have added the statistics in our revised version.\n\n9) \"Given the systematic parameter search, with reference to the actual validation (or test?) set I am afraid there could be some overfitting. It is quite easy to probe the test set to get best performance on these benchmarks.\"\n\nReply: We would like to clarify the reviewer on this point. As we explain in our experimental set up, we follow exactly the experimental settings in GCN. The data set is split into three parts: training, cross validation, and testing. We search the hyperparameters using cross validation on the validation set. And the performance results reported are evaluated on the test data set. \n\n10) \"As a minor remark, please make figures readable also in BW.”\n\nReply: Thank you for your advice. We believe the reviewer refers to figure 2. The different colors represent filters at different locations. One can easily tell apart the different plots as they are in different figures. We have revised the description in the text to better reflect this and to make it easy to tell the differences.\n\n11) Overall I found the paper interesting but also not very clear at pointing out the major contribution and the motivation behind it. At risk of being too reductionist: it looks as learning a set of filters on different coordinate systems given by the various powers of A. GCN looks at the nearest neighbors and the paper shows that using also the 2-ring improves performance.\n\nReply: As in our response to the previous comments, this work is based on using graph filters designed from basic principles drawn from the graph signal processing, with no approximation of the graph convolution. GCN is based on approximations by Chebyshev polynomials. Further, extending GCN to 2-rings does not result in A^2. As we do not utilize approximations to the convolution operation, we obtain better classification accuracy when compared with any existing methods, either previously published or proposed in the current crop of papers submitted to ICLR18.",
"4) \"Section 2.2 requires some polishing as I found hard to follow the main story the authors wanted to tell. The definition of the weight of a path seems disconnected from the main text, ins't A^k kind of a diffusion operator or random walk? This makes me wonder what would be the performance of GCN when the k-th power of the adjacency is used.\"\n\nReply: We have polished section 2.2 as suggested. As A is the normalized adjacency matrix, A^k is indeed a weighted diffusion or random walk. In Section 2.2, we would like to understand the proposed convolution as a feature extraction operator in traditional CNN rather than as propagating labeled data on the graph. Taking this point of view helps us to profit from the design knowledge/experience from traditional CNN and apply it to grid structured data. Our definition of weight of a path and the following filter size (Section 2.2) for graph convolution make it possible to design a Graph CNN architecture similar to GoogLeNet (Szegedy et al., 2015), in which a set of filters with different sizes are used in each convolutional layer. In fact, we found that a combination of size 1 and size 2 filters gives the best performance in all three data sets studied, which is a polynomial with maximum order 2.\n\nAs we explained above in the previous comment, graph convolution in our paper is not simply extending GCN to k-th order. To address the reviewer’s comment, nevertheless, we implement A^2 and compare its performance with ours. For the data sets Pubmed, Cora, and Citeseer, the classification accuracies are 79.1 (81.1), 81.7(82.5) and 70.8 (70.9), where the numbers in parentheses are the results obtained with our method. Our method still achieves a noticeable performance advantage over A^2 for the Pubmed and Cora data; in particular, we note the significant performance gain with the Pubmed database that has the largest number of nodes among these three data sets. \n\n5) \"I liked Section 3, however while it is true that all methods differ in the way they do the filtering, they also differ in the way the input graph is represented (use of the adjacency or not).\"\n\nReply: We agree with the reviewer and have incorporated this point of view in Section 3 in the revised version.\n\n6) \"Experiments are performed on the usual reference benchmarks for the task and show sensible improvements with respect to the state-of-the-art. \"\n\nReply: We would like to thank the reviewer for this comment. We also want to mention that, besides providing a solid foundation for our proposed graph convolution operation, our method also exhibits better performance due to the fact that no approximation is needed for the convolution operation. Our method outperforms all recently proposed methods on all three datasets. In addition, for the Pubmed dataset, which is much larger than the Citeseer and Cora data sets, we have a 2.1% improvement over GCN (Kipf & Welling 2017) and 6.7% improvement over ChebNet (Defferrard et al. 2016). These performance results are averages obtained over 100 Monte Carlo runs. As far as we know and as far as we can determine, our method exhibits the best performance on the Pubmed data not only when compared with all previously available publications, as well as when compared with all papers submitted to ICLR18, see papers below. 
Also, please note that, as explained by the authors, the last paper listed below fails with the Pubmed data set because of its storage complexity.\n\nGraph Partition Neural Networks for Semi-Supervised Classification, submitted to ICLR18\nAttention-based Graph Neural Network for Semi-supervised Learning, submitted to ICLR18\nStochastic Training of Graph Convolutional Networks, submitted to ICLR18\nGraph Attention Networks, submitted to ICLR18\n\n7) \"TAGCN with K=2 has twice the number of parameters of GCN, which makes the comparison not entirely fair. Did the author experiment with a comparable architecture? Also, how about using A^2 in GCN or making two GCN and concatenate them in feature space to make the representational power comparable? \"\n\nReply: We confirm that TAGCN with K=2 has twice the number of parameters in GCN. To compare implementations with the same number of parameters, we provide the performance of our method with half the number of filters in each convolution layer in the original submitted version (Table 4). These have the same number of parameters as GCN. Our method still has an obvious advantage in terms of classification accuracy. This proves that, even with a similar number of parameters or architecture, our method still exhibits superior performance than GCN. As explained in our response to previous comments, our method also still achieves a noticeable performance advantage over A^2 for the Pubmed and Cora data. \n\nIn Appendix B of the original GCN paper, the authors have already extended the number of layers from 2 to 4, but their performance degrades when the number of layers is 4 as compared with the 2-layer case. \n"
] | [
-1,
6,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rJxc2drmG",
"iclr_2018_H113pWZRb",
"iclr_2018_H113pWZRb",
"iclr_2018_H113pWZRb",
"iclr_2018_H113pWZRb",
"r1XXuJcgM",
"iclr_2018_H113pWZRb",
"ryXsYOBXz",
"B1kA7KrQz",
"ry6GiiKlz",
"H1kIb-Kef",
"SJvZYKHXG",
"B1z3_FBXf"
] |
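An illustrative sketch of the polynomial graph-filter idea debated in the author responses above (node features filtered by low-order powers of the normalized adjacency matrix, with the K = 1 term corresponding to a single GCN-style hop). The toy graph and all names are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalize an adjacency matrix after adding self-loops."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    return d_inv_sqrt @ adj @ d_inv_sqrt

def polynomial_graph_filter(adj, features, weights):
    """Apply y = sum_k A^k X W_k for k = 0..K, where K = len(weights) - 1.

    Keeping only the k = 1 term corresponds to a one-hop propagation;
    K = 2 additionally mixes in 2-hop neighbourhood information.
    """
    a_norm = normalize_adjacency(adj)
    propagated = features                      # A^0 X
    output = propagated @ weights[0]
    for w_k in weights[1:]:
        propagated = a_norm @ propagated       # A^k X
        output = output + propagated @ w_k
    return output

# Toy example: 4-node path graph, 3 input features, 2 output features, K = 2.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))
w = [rng.normal(size=(3, 2)) for _ in range(3)]    # W_0, W_1, W_2
print(polynomial_graph_filter(adj, x, w).shape)    # (4, 2)
```

Nonlinearities, multiple layers, and the exact normalization of any particular model are omitted; the point is only how a K = 2 filter combines 1-hop and 2-hop information.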
iclr_2018_rylejExC- | Stochastic Training of Graph Convolutional Networks | Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size. Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size. Empirical results show that our algorithms have a similar convergence speed per epoch with the exact algorithm even using only two neighbors per node. The time consumption of our algorithm on the Reddit dataset is only one fifth of previous neighbor sampling algorithms. | rejected-papers | The paper studies subsampling techniques necessary to handle large graphs with graph convolutional networks. The paper introduces two ideas: (1) preprocessing for GCNs (basically replacing dropout followed by linear transformation with linear transformation followed by drop out); (2) adding control variates based on historical activations. Both ideas seem useful (but (1) is more empirically useful than (2), Figure 4*). The paper contains a fair bit of math (analysis / justification of the method).
Overall, the ideas are interesting and can be useful in practice. However, not all reviewers are convinced that the methods constitute a significant contribution. There is also a question of whether the math has much value (strong assumptions; it may also be too specific to the formulation of Kipf & Welling, making it a bit narrow). Though I share these concerns and recommend rejection, I think that reviewers 2 and 3 were a bit too harsh, and the scores do not reflect the quality of the paper.
*Potential typo: Figure 4 -- should it be CV+PP rather than CV?
+ an important problem
+ can be useful in practical applications
+ generally solid and sufficiently well written
- significance not sufficient
- math seems not terribly useful
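A rough sketch of the preprocessing idea in point (1) above, which the author responses in this record describe as swapping the order of dropout and the propagation so that U0 = P H0 becomes deterministic and can be precomputed once. The toy setting and all variable names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, rng):
    """Inverted dropout: zero entries with probability `rate`, rescale the rest."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

n_nodes, n_feats = 100, 16
adj = rng.random((n_nodes, n_nodes)) < 0.05                      # random toy graph
p_norm = (adj + np.eye(n_nodes)) / (adj.sum(1, keepdims=True) + 1)  # row-normalized propagation with self-loops
h0 = rng.normal(size=(n_nodes, n_feats))                         # raw input features
w0 = rng.normal(size=(n_feats, 8))

# (a) dropout -> propagate: P @ Dropout(H0) @ W0 must be recomputed every
#     training step, because the dropout mask changes each time.
layer_a = p_norm @ dropout(h0, 0.5, rng) @ w0

# (b) propagate -> dropout: U0 = P @ H0 is deterministic, so it can be
#     precomputed once; only the cheap dropout + dense multiply is repeated.
u0 = p_norm @ h0                                                 # precomputed once before training
layer_b = dropout(u0, 0.5, rng) @ w0
```

Variant (a) repeats the graph propagation on every step; variant (b) pays for it only once, which is why preprocessing effectively removes one graph convolution layer from stochastic training.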
| train | [
"HkW14AGSz",
"B1jssYMHG",
"rJA4cxJlf",
"B1FrpdOeM",
"S1g5R5Ogz",
"rJ5i42a7M",
"BJz3D4TQf",
"rkC02767G",
"S1Cz67aQz",
"Hy-NnmTQz",
"SyLvq76Xf",
"HJu46VFxG"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Thanks for you interest in our paper!\n\nFor CV (without +PP), the running time is 95.85 seconds, while CV+PP takes 56 seconds in Table 4. Our implementation does not support CVD without PP. The improvement of PP is not as large as CV, but is reasonable given its simplicity. Furthermore, some theoretical results of CV and CVD depend on PP. For example, in Appendix G.1, we justify the independent Gaussian assumption in Sec. 4.3 by showing that node activations are independent in a two-layer GCN with PP. \n\nThe original GCN model by Kipf and Welling does not support pre-processing. To enable pre-processing, we modify the GCN model by changing the order of dropout and multiplying the propagation matrix. We justify the modification and also show that the modification of the model does not affect the predictive performance in Table 3. Pre-processing reduces the number of graph convolution layers by one. Since the receptive field of GCN grows exponentially w.r.t. the number of layers, the removal of each layer has important impact to the time complexity. Such study should not be summarized as \"implementation detail\".",
"I think this paper introduces interesting ideas to speed up the training of graph neural networks (and graph convolutional nets specifically) which could potentially have direct industrial impact.\n\nI haven't yet fully worked through the mathematical motivation of the paper, but I was wondering how much impact the preprocessing of the first layer (in the paper denoted by \"+PP\") had on the timing results in Table 4? How much of the speedup of CV+PP and CVD+PP is due to the preprocessing (+PP) and how much is due to the actual variance reduction technique? I think this would be a very important distinction to make, as I think that the proposed preprocessing is not the major contribution of this paper (this seems more like an implementation detail).",
"This paper proposes a new training method for graph convolutional networks. The experimental results look interesting. However, this paper has some issues.\n\nThis paper is hard to read. There are some undefined or multi-used notations. For instance, sigma is used for two different meanings: an activation function and variance. Some details that need to be explained are omitted. For example, what kind of dropout is used to obtain the table and figures in Section 5? Forward and backward propagation processes are not clearly explained\n\nIn section 4.2, it is not clear why we have to multiply sqrt{D}. Why should we make the variance from dropout sigma^2? \n\nProposition 1 is wrong. First, \\|A\\|_\\infty should be max_{ij} |A_ij| not A_{ij}. Second, there is no order between \\|AB\\|_\\infty and \\|A\\|_\\infty \\|B\\|_\\infty. When A=[1 1] and B is the transpose matrix of A, \\|AB\\|_\\infty =2 and \\|A\\|_\\infty \\|B\\|_\\infty = 1. When, A’=[1 -1] and B is the same matrix defined just before, \\|A’ B \\|_\\infty = 0 and \\|A’\\|_\\infty \\|B\\|_\\infty =1. So, both \\|AB\\|_\\infty \\le \\|A\\|_\\infty \\|B\\|_\\infty and \\|AB\\|_\\infty \\ge \\|A\\|_\\infty \\|B\\|_\\infty are not true. I cannot believe the proof of Theorem 2.\n",
"The paper proposes a method to speed up the training of graph convolutional networks, which are quite slow for large graphs. The key insight is to improve the estimates of the average neighbor activations (via neighbor sampling) so that we can either sample less neighbors or have higher accuracy for the same number of sampled neighbors. The idea is quite simple: estimate the current average neighbor activations as a delta over the minibatch running average. I was hoping the method would also include importance sampling, but it doesn’t. The assumption that activations in a graph convolution are independent Gaussians is quite odd (and unproven). \n\nQuality: Statistically, the paper seems sound. There are some odd assumptions (independent Gaussian activations in a graph convolution embedding?!?) but otherwise the proposed methodology is rather straightforward. \n\nClarity: It is well written and the reader is able to follow most of the details. I wish the authors had spent more time discussing the independent Gaussian assumption, rather than just arguing that a graph convolution (where units are not interacting through a simple grid like in a CNN) is equivalent to the setting of Wang and Manning (I don’t see the equivalence). Wang and Manning are looking at MLPs, not even CNNs, which clearly have more independent activations than a CNN or a graph convolution. \n\nSignificance: Not very significant. The problem of computing better averages for a specific problem (neighbor embedding average) seems a bit too narrow. The solution is straightforward, while some of the approximations make some odd simplifying assumptions (independent activations in a convolution, infinitesimal learning rates). \n\nTheorem 2 is not too useful, unfortunately: Showing that the estimated gradient is asymptotically unbiased with learning rates approaching zero over Lipchitz functions does not seem like an useful statement. Learning rates will never be close enough to zero (specially for large batch sizes). And if the running activation average converges to the true value, the training is probably over. The method should show it helps when the values are oscillating in the early stages of the training, not when the training is done near the local optimum.\n\n\n",
"Existing training algorithms for graph convolutional nets are slow. This paper develops new novel methods, with a nice mix of theory, practicalities, and experiments.\n\nLet me caution that I am not familiar with convolutional nets applied to graph data.\n\nClearly, the existing best algorithm - neighborhood sampling is slow as well as not theoretically sound. This paper proposes two key ideas - preprocessing and better sampling based on historical activations. The value of these ideas is demonstrated very well via theoretical and experimental analysis. I have skimmed through the theoretical analysis. They seem fine, but I haven't carefully gone through the details in the appendices.\n\nAll the nets considered in the experiments have two layers. The role of preprocessing to add efficiency is important here. It would be useful to know how much the training speed will suffer if we use three or more layers, say, via one more experiment on a couple of key datasets. This will help see the limitations of the ideas proposed in this paper.\n\nIn subsection 4.3 the authors prove reduced variance under certain assumptions. While I can see that this is done to make the analysis simple, how well does this analysis correlate with what is seen in practice? For example, how well does the analysis results given in Table 2 correlate with the standard deviation numbers of Figure 5 especially when comparing NS+PP and CV+PP?",
"We appreciate the valuable feedback from the reviewers. Based on your comments we made a few revisions. \n\n1. We add experiments for >2 layers in appendix F as suggested by Reviewer 1.\n2. We add some justifications of the independent Gaussian assumption in appendix G as suggested by Reviewer 2.\n3. We replace Theorem 2 with its non-asymptotic version as suggested by Reviewer 2.\n4. We fixed some typos as well as Proposition 1 as suggested by Reviewer 3.\n5. We add pseudo-code for the CV and CVD algorithm in appendix E as suggested by Reviewer3.\n\nPlease see our response for individual comments for your questions. We are happy to provide more clarifications if needed.",
"Thanks for the review! We addressed the comments below.\n\nQ1: How much the training speed will suffer if we use three or more layers, say, via one more experiment on a couple of key datasets. \n\nThanks for the suggestion. We added the results for three-layer networks in appendix F on the Reddit dataset. The exact algorithm takes tens of thousands per epoch on the original graph (max degree is 128). We subsampled the graph so that the max degree is 10, CVD+PP is about 6 times faster than Exact to converge to 0.94 testing accuracy, and the convergences speed are reported in Fig. 6. The observations are pretty much the same, that control variate based algorithms are much better than those without control variates. \n\nQ2: Subsection 4.3: the authors prove reduced variance under certain assumptions. How well does this analysis correlate with what is seen in practice? For example, how well does the analysis results given in Table 2 correlate with the standard deviation numbers of Fig. 5 especially when comparing NS+PP and CV+PP?\n\nFor models without dropout, the main theoretical result is Theorem 1, which states that CV+PP has zero bias & variance as the learning rate goes to zero, and the independent Gaussian assumption is not needed. Fig. 5 (top row) shows that the bias and variance of CV+PP are quite close to zero in practice, which matches the theoretical result. \n\nFor models with dropout, we found that the standard deviations (Fig. 5 bottom right) of CV+PP and CVD+PP were greatly reduced from NS+PP, mostly because of the reduction of VMCA. The bias was not always reduced, which calls better treatment of the term (h_v - \\mu_v) in Sec. 4.2. We do not use historical values for this term. Incorporating historical values for this term may further reduce the bias and generalize Theorem 2 (which does not rely on the independent Gaussian assumption) to the dropout case. This is one possible future direction. \n\nQ3: The paper may not appeal to a general audience since the ideas are very specific to graph convolutions, which itself is restricted only to data connected by a graph structure.\n\nGraph-structured data is prevalent, e.g., user-graphs, citation graphs, web pages, knowledge graphs, etc. Moreover, graphs are generalization of many data structures, e.g., an image can be represented by 2d lattices; and a document categorization task can be improved by utilizing the citations between them. We therefore think extending deep learning to graph-structured data is important.",
"Thanks for the valuable comments. We address the detailed questions below.\n\nQ1: I was hoping the method would also include importance sampling:\n\nImportance sampling is a useful technique. There is another submission about importance sampling for graph convolutional networks [1]. We have some remarks regarding to importance sampling [1]:\n1)\tOur result is already close to the best we can possibly do, so importance sampling may only have marginal improvement. Despite using the much cheaper control-variate based gradient, we almost lost no convergence speed comparing with exact gradients (without neighbor sampling), according to Fig. 2 and Fig. 3.\n2)\tImportance sampling and our control variate & preprocessing are orthogonal techniques for reducing the bias and variance of the gradient.\n3)\tOur control variate based gradient estimator is asymptotically *unbiased*. As the learning rate goes to zero, our estimator yields unbiased stochastic gradient, *regardless of the neighbor sampling size*. On the other hand, the importance sampling based estimator is only *consistent*. It is unbiased *only when the neighbor sampling size goes to infinity*. This result is also shown experimentally: our work uses a very small neighbor sampling size (e.g., 2 neighbors for each node), while the neighbor sampling size of [1] is still hundreds. It takes only 50 seconds for our algorithm training on the largest Reddit dataset, while [1] takes 638.6 seconds.\n[1] FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. https://openreview.net/forum?id=rytstxWAW\n\nQ2: The assumption that activations in a graph convolution are independent Gaussians is quite odd (and unproven). I wish the authors had spent more time discussing the independent Gaussian assumption, rather than just arguing that a graph convolution (where units are not interacting through a simple grid like in a CNN) is equivalent to the setting of Wang and Manning (I don’t see the equivalence). Wang and Manning are looking at MLPs, not even CNNs, which clearly have more independent activations than a CNN or a graph convolution. \n\nThe assumption makes some sense intuitively. \n1)\tIf all the nodes are isolated, it reduces to the MLP case that Wang and Manning considered. \n2)\tIn two-layer GCNs where the first layer is pre-processed, which is the most popular architecture, we can show that the neighbors’ activations are indeed independent with each other.\n3)\tIn deeper GCNs, the correlations between neighbor’s may still be weak in our algorithm, because the sampled subgraph is very sparse (each node only picks itself and another random neighbor). \n\nNow we show that in two-layer GCNs where the first layer is pre-processed, the neighbors’ activations are indeed independent with each other (we added the discussions in Appendix G). Assume that we want to compute the gradient w.r.t. node “a” on the second layer, the computational graph looks like:\n\nLayer 2: a\nLayer 1: a b (b is a random neighbor of a)\n\nBy Eq. (3), h_a^1 = \\sigma(Dropout(u^0_a) W^0) and h_b^1 = \\sigma(Dropout(u^0_b) W^0), where U^0=PH^0. The independent Gaussian assumption states that h_a^1 and h_b^1 are independent. To show this, we need the Lemma (function of independent r.v. s): If a and b are independent r.v. s, then f_1(a) and f_2(b) are independent r.v. s\nhttps://math.stackexchange.com/questions/8742/are-functions-of-independent-variables-also-independent\n\nLet .* be the element-wise product. 
We have h_a^1 = f_1(\\phi_a) := \\sigma(\\phi_a .* u^0_a) W^0 and h_b^1 := f_2(\\phi_b) = \\sigma(\\phi_b .* u^0_b) W^0. Because the dropout masks \\phi_a and \\phi_b are independent, we know that h_a^1 and h_b^1 are independent by the lemma. The rest assumptions about the Gaussian approximation and the independence between feature dimensions are discussed in Wang and Manning. \n\nWe admit that the independent Gaussian assumption is somewhat rough. However, we do not explicitly rely on the independent Gaussian assumption like Wang and Manning, where they directly compute the mean and variance for the activation, and manually derive update rules of the mean and variance after each layer. Our algorithm only requires the samples, and the algorithm itself can execute regardless of the distribution of activation.\n\nOverall, the assumption is more like a motivating case (in which the algorithm works perfectly) rather than a must-hold condition for the algorithm to work. In practice, our estimator does have smaller bias & variance than the estimator without control variates (Fig. 5), although the condition does not hold perfectly. Furthermore, our main theoretical result (Theorem 2) does not depend on the independent Gaussian assumption. \n",
"Q3: The problem of computing better averages for a specific problem (neighbor embedding average) seems a bit too narrow.\n\nGraph convolutional networks (GCNs) are important extensions of CNNs to graph structured data. There are an increasing number of works applying GCNs to different graph-based problems including node classification, node embedding, link prediction and knowledge base completion, with state-of-the-art performance on a large proportion of these tasks. We believe that GCNs are revolutionizing graph-related areas just like CNNs did to the image-related tasks. Our method is general for different GCN variants across tasks, and thus is not narrow.\n\nQ4: The solution is straightforward:\nOur solution is simple to implement and effective, which is an advantage to reproduce the results and build further extensions. But we believe the theory behind the simple updates is not straightforward. Unlike most variance reduction works, control variates bring stronger guarantees to the algorithm, besides just reducing the variance. Our algorithm is the first one that guarantees the testing accuracy and the convergence to local optimum, regardless of the neighbor sampling size. The simplicity, effectiveness and theoretical guarantee enable users easily adopt our technique to their models, and get good results.\n\nQ5: Theorem 2 is not too useful: Showing that the estimated gradient is asymptotically unbiased with learning rates approaching zero over Lipchitz functions does not seem like an useful statement. Learning rates will never be close enough to zero (specially for large batch sizes). And if the running activation average converges to the true value, the training is probably over. The method should show it helps when the values are oscillating in the early stages of the training, not when the training is done near the local optimum.\n\nWe do have non-asymptotic version of Theorem 2 but we choose to present the asymptotic version for ease to understand in the submission. We can show that square norm of the gradient is proportional with 1/sqrt{N} with respect to the number of iterations \\sqrt{N}, which is on the same order of the analysis by Ghadimi & Lan (2013) who used unbiased stochastic gradients (i.e., without sampling neighbors). Simple neighbor sampling or importance sampling does not have such a guarantee. The non-asymptotic result can be directly derived with the proof in appendix B. We replaced Theorem 2 with its non-asymptotic version in the revision.\n\nOur empirical results in Fig. 2 and Fig. 3 show that our method indeed helps in the early stages of the training. Despite using cheaper gradients by sampling neighbors, we almost have no loss of the convergence speed -- the number of epochs (or iterations) for our method (CV+PP & CVD+PP) to converge to a certain testing accuracy is almost the same for CV+PP/CVD+PP and Exact – which is the best we can possibly do.\n",
"Thanks for your review! The review mostly concerns about some unclarified details and typos in the paper. We addressed these concerns below. Meanwhile, the review does not mention any aspects about technical contribution itself. We think stochastic training for graph convolutional networks is very important for scaling up neural networks towards practical graphs and helping develop more expressive models. Our approach is significant both practically and theoretically. We can compute approximate gradients for GCNs at a cost similar with MLPs, while losing little convergence speed. We also provide new theoretical guarantees to reach the same training and testing performance of the exact algorithm. After we clarified all the mentioned details and typos, could you please also assess the work based on the technical contribution? We are happy to give more clarifications if needed.\n\nQ1: This paper is hard to read. There are some undefined or multi-used notations. For instance, sigma is used for two different meanings: an activation function and variance. Some details that need to be explained are omitted. For example, what kind of dropout is used to obtain the table and figures in Section 5? Forward and backward propagation processes are not clearly explained.\n\nWe change the notation for variance from \\sigma^2 to s^2. Throughout the paper we only have one kind of dropout (Srivastava et al., 2014), which randomly zeros out features. The dropout operation in our paper is already explained in Eq. (1). We add a pseudocode in appendix E to explain the forward and backward propagation processes. Basically, forward propagation is defined using Eq. (5) and Eq. (6) and backward propagation is simply computing the gradient of the objective with respect to the parameters automatically.\n\nQ2: In section 4.2, it is not clear why we have to multiply sqrt{D}. Why should we make the variance from dropout sigma^2? \n\nWe multiply sqrt{D} so that the approximated term and the original term have the same mean and variance, based on the case study under the independent Gaussian assumption in Sec. 4.3 (See Q3 of Reviewer 2 for the justification of the assumption). Under the assumption, the activations h_1, …, h_D are approximated by independent Gaussian random variables h_v~N(\\mu_v, s_v^2), and the randomness comes from randomly dropping out features from the feature vector x while computing h_v = activation(PWx). We define s_v^2 to be the variance of the Gaussian random variable p_1h_1+…+p_Dh_D. We separate p_1h_1+…+p_Dh_D as \np_1(h_1-\\mu_1)+…+p_D(h_D-\\mu_D) (which has zero mean)\nand\np_1\\mu_1+…+p_D\\mu_D (which is deterministic).\nWe approximate the first term as sqrt{D}(h_v’-\\mu_v’), where v’is selected uniformly from {1, …, D}. Because sqrt{D}(h_v’-\\mu_v’) and p_1(h_1-\\mu_1)+…+p_D(h_D-\\mu_D) have the same expected mean and variance, as shown in Appendix C.\n\nFor short, without loss of generality we assume that \\mu_1=…=\\mu_D=0. \nThen, Var[h_1+…+h_D]=Var[h_1]+…+Var[h_D]=s_1^2+…+s_D^2 (because of independence). And \nE_{v’}[Var[sqrt{D} h_v’]]=E_{v’}[Ds_v’^2]= s_1^2+…+s_D^2. \n\n\nQ3: Proposition 1 is wrong. First, \\|A\\|_\\infty should be max_{ij} |A_ij| not A_{ij}. Second, there is no order between \\|AB\\|_\\infty and \\|A\\|_\\infty \\|B\\|_\\infty. I cannot believe the proof of Theorem 2.\n\nThanks for pointing out. The correct version should be \\|AB\\|_\\infty <= col(A) \\|A\\|_\\infty \\|B\\|_\\infty, where col(A) is the number of columns of A (or number of rows of B). 
We updated Proposition 1 and its proof. Note that the constant col(A) is absorbed and does not affect the proof of Theorem 2. \n\nBesides the proof, Theorem 2 is also verified empirically in Fig. 2, where the algorithm using CV’s approximated gradients (CV+PP) has almost an overlapping convergence curve with the algorithm using exact stochastic gradients (Exact). ",
"Thanks for your interest to our paper! \n\nAfter LI iterations, part 1 of Theorem 1 shows that the activations Z^{(l)}_{CV} and H^{(l)}_{CV} are DETERMINISTIC, because the history is already complete\n\nZ^{(l)}_{CV}=Z^{(l)}\nH^{(l)}_{CV}=H^{(l)}\n\nIn other words, the forward propagation is deterministic and does not depends on \\bar P^{(l)}. The only random thing is the gradient, because the backward propagation is stochastic. \n\nTherefore, sigma'(Z^{(l)}_{CV})=sigma'(Z^{(l)}) is deterministic. It does not depends on random sampling (after LI iterations). ",
"The paper is addressing a very interesting problem with imminent importance and industrial impact. \n\nIn the proof of the unbiased estimator for gradient: that is the proof of theorem 1, two lines above equation (10), Z^(l) inside \\sigma'(Z^(l)) also depends on the random sampling, no? \n\nThe situation is similar to doubly stochastic gradient descent: \nDai et al. Scalable Kernel Methods via Doubly Stochastic Gradients, NIPS 2016\nhttps://arxiv.org/pdf/1407.5599.pdf\nLine 5 Algorithm 1. \nThe analysis of the paper is able to take this source of bias into account. \n\nThe proof of the current paper could also be fixed accordingly. "
] | [
-1,
-1,
3,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"B1jssYMHG",
"iclr_2018_rylejExC-",
"iclr_2018_rylejExC-",
"iclr_2018_rylejExC-",
"iclr_2018_rylejExC-",
"iclr_2018_rylejExC-",
"S1g5R5Ogz",
"B1FrpdOeM",
"rkC02767G",
"rJA4cxJlf",
"HJu46VFxG",
"iclr_2018_rylejExC-"
] |
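As a toy illustration of the control-variate idea discussed throughout this record (sample only a couple of neighbours, but correct with stored historical activations so that only the change has to be estimated), the sketch below compares a plain neighbour-sampling estimate of a weighted neighbour average with a control-variate version. It is a simplification of the actual estimator, and every name in it is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_neighbors, dim = 50, 8
weights = np.full(n_neighbors, 1.0 / n_neighbors)          # uniform propagation weights
current = rng.normal(size=(n_neighbors, dim))              # current activations h_u
history = current + 0.1 * rng.normal(size=current.shape)   # slightly stale stored activations

exact = weights @ current                                   # what full propagation would compute

def neighbor_sampling(sample_size):
    idx = rng.choice(n_neighbors, size=sample_size, replace=False)
    return (n_neighbors / sample_size) * (weights[idx] @ current[idx])

def control_variate(sample_size):
    idx = rng.choice(n_neighbors, size=sample_size, replace=False)
    delta = current[idx] - history[idx]                     # only the *change* is sampled
    return (n_neighbors / sample_size) * (weights[idx] @ delta) + weights @ history

ns = np.array([neighbor_sampling(2) for _ in range(2000)])
cv = np.array([control_variate(2) for _ in range(2000)])
print(np.mean(np.sum((ns - exact) ** 2, axis=1)))   # large mean squared error
print(np.mean(np.sum((cv - exact) ** 2, axis=1)))   # much smaller when history is close
```

Both estimators are unbiased under uniform sampling without replacement; the control-variate one has far lower variance whenever the stored history is close to the current activations, which is the intuition behind the guarantees discussed above.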
iclr_2018_SkJKHMW0Z | Recurrent Relational Networks for complex relational reasoning | Humans possess an ability to abstractly reason about objects and their interactions, an ability not shared with state-of-the-art deep learning models. Relational networks, introduced by Santoro et al. (2017), add the capacity for relational reasoning to deep neural networks, but are limited in the complexity of the reasoning tasks they can address. We introduce recurrent relational networks which increase the suite of solvable tasks to those that require an order of magnitude more steps of relational reasoning. We use recurrent relational networks to solve Sudoku puzzles and achieve state-of-the-art results by solving 96.6% of the hardest Sudoku puzzles, where relational networks fail to solve any. We also apply our model to the BaBi textual QA dataset solving 19/20 tasks which is competitive with state-of-the-art sparse differentiable neural computers. The recurrent relational network is a general purpose module that can augment any neural network model with the capacity to do many-step relational reasoning. | rejected-papers | The proposed relational reasoning algorithm is basically a fairly standard graph neural network, with a few modifications (e.g., the prediction loss at each layer - also not a new idea per se).
The claim that reasoning has not been considered in previous applications of graph neural networks (see discussion) is questionable. It is not even clear what is meant here by 'reasoning', as many applications of graph neural networks may be regarded as performing some kind of inference on graphs (e.g., matrix completion tasks by Berg, Kipf and Welling; statistical relational learning by Schlichtkrull et al.).
So the contribution seems a bit overstated. Rather than introducing a new model, the work basically proposes an application of a largely known model to two (not-so-hard) tasks which have not been studied in the context of GNNs. The claim that the approach is a general framework for dealing with complex reasoning problems is not well supported, as both problems are (arguably) not complex reasoning problems (see R2).
There is a general consensus between reviewers that the paper, in its current form, does not quite meet acceptance criteria.
Pros:
-- an interesting direction
-- clarity
Cons:
-- the claim of generality is not well supported
-- the approach is not so novel
-- the approach should be better grounded in previous work
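For concreteness, the "fairly standard graph neural network" / learned message-passing pattern referred to above can be sketched as a single step that computes a message per directed edge from the two endpoint states, sums incoming messages, and updates each node state. The toy parameters below are random, and this is not the authors' architecture (which, per the reviews, uses MLP/LSTM-style updates and a prediction loss at every step).

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP with a ReLU hidden layer."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def message_passing_step(h, edges, msg_params, upd_params):
    """One step: message per directed edge from both endpoint states,
    sum of incoming messages per node, then a node-state update."""
    agg = np.zeros_like(h)
    for i, j in edges:                                     # message sent from j to i
        agg[i] += mlp(np.concatenate([h[i], h[j]]), *msg_params)
    return mlp(np.concatenate([h, agg], axis=1), *upd_params)

n_nodes, dim = 5, 4
h = rng.normal(size=(n_nodes, dim))
edges = [(i, j) for i in range(n_nodes) for j in range(n_nodes) if i != j]  # fully connected toy graph
msg_params = (rng.normal(size=(2 * dim, 16)), np.zeros(16),
              rng.normal(size=(16, dim)), np.zeros(dim))
upd_params = (rng.normal(size=(2 * dim, 16)), np.zeros(16),
              rng.normal(size=(16, dim)), np.zeros(dim))

for _ in range(3):                                         # recurrent steps share parameters
    h = message_passing_step(h, edges, msg_params, upd_params)
```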
| train | [
"r17v3MDxG",
"Syc551clG",
"rJFvPvqgz",
"HyLeDbvZf",
"r10uXbvbf",
"HymqZbDbz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper introduced recurrent relational network (RRNs), an enhanced version of the\nexisting relational network, that can be added to any neural networks to add\nrelational reasoning capacity. RRNs are illustrated on sudoku puzzles and textual QA.\n\nOverall the paper is well written and structured. It also addresses an important research question: combining relational reasoning and neural networks is currently receiving a lot of attention, in particular when generally considering the question of bridging sub-symbolic and symbolic methods. Unfortunately, it is current form, the paper has two major downsides. First of all, the sudoku example does not illustrate “complex relational reasoning” as claimed in the title. The problem is encoded at a positional level where \nmessages encoded as MLPs and LSTMs implement the constraints for sudoko. Indeed, \nthis allows to realise end-to-end learning but does not illustrate complex reasoning. \nThis is also reflected in the considered QA task, which is essentially coded as a positional problem. Consequently, the claim of the conclusions, namely that “we have\nproposed a general relational reasoning model” is not validated, unfortunately. Such\na module that can be connected to any existing neural network would be great. However, \nfor that one should show capabilities of relational logic. Some standard (noisy) \nreasoning capabilities such as modus ponens. This also leads me to the second downside. \nUnfortunately, the paper falls short on discussion related work. First of all, \nthere is the large field of statistical relational learning, see \n\nLuc De Raedt, Kristian Kersting, Sriraam Natarajan, David Poole:\nStatistical Relational Artificial Intelligence: Logic, Probability, and Computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, Morgan & Claypool Publishers 2016\n\nfor a recent overview. As it has the very same goals, while not using a neural architecture for implementation, it is very much related and has to be discussed. That\none can also use a neural implementation can be seen in \n\nIvan Donadello, Luciano Serafini, Artur S. d'Avila Garcez:\nLogic Tensor Networks for Semantic Image Interpretation. IJCAI 2017: 1596-1602\n\nMatko Bosnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel:\nProgramming with a Differentiable Forth Interpreter. ICML 2017: 547-556\n\nLuciano Serafini, Artur S. d'Avila Garcez:\nLearning and Reasoning with Logic Tensor Networks. AI*IA 2016: 334-348\n\nGustav Sourek, Vojtech Aschenbrenner, Filip Zelezný, Ondrej Kuzelka:\nLifted Relational Neural Networks. CoCo@NIPS 2015\n\nTim Rocktäschel, Sebastian Riedel:\nEnd-to-end Differentiable Proving. CoRR abs/1705.11040 (2017)\n\nWilliam W. Cohen, Fan Yang, Kathryn Mazaitis:\nTensorLog: Deep Learning Meets Probabilistic DBs. CoRR abs/1707.05390 (2017)\n\nto list just some approaches. There are also (deep) probabilistic programming \napproaches such as Edward that should be mentioned as CPS like problems (Sudoku) can\ndefinitely be implement there. Moreover, there is a number of papers that discuss \nembeddings of relational data and rules such as \n\nWilliam Yang Wang, William W. Cohen:\nLearning First-Order Logic Embeddings via Matrix Factorization. IJCAI 2016: 2132-2138\n\nThomas Demeester, Tim Rocktäschel, Sebastian Riedel:\nLifted Rule Injection for Relation Embeddings. EMNLP 2016: 1389-1399\n\nand even neural-symbolic approaches with a long publication history. 
Unfortunately, \nnone of these approaches has been cited, giving the wrong impression that this is \nthe first paper that tackles the long-lasting question of merging sub-symbolic and symbolic reasoning. BTW, there have also been other deep networks for optimisation, see e.g. \n\nBrandon Amos, J. Zico Kolter:\nOptNet: Differentiable Optimization as a Layer in Neural Networks. \nICML 2017: 136-145\n\nthat have also considered Sudoku. To summarise, I very much like the direction of the paper but it seems to be too early to be published. ",
"This paper introduces recurrent relational networks: a deep neural network for structured prediction (or relational reasoning). The authors use it to achieve state-of-the-art performance on Soduku puzzles and the BaBi task (a text based QA dataset designed as a set of to toy prerequisite tasks for reasoning).\n\nOverall I think that by itself the algorithm suggested in the paper is not enough to be presented in ICLR, and on the other hand the authors didn't show it has a big impact (could do so by adding more tasks - as they suggest in the discussion). This is why I think the paper is marginally below the acceptance threshold but could be convinced otherwise.\n\nC an the authors give experimental evidences for their claim: \"As such, the network could use a small part of the hidden state for retaining a current best guess, which might remain constant over several steps, and other parts of the hidden state for running a non-greedy...\" - \n\nPros\n- The idea of the paper is clearly presented, the algorithm is easy to follow.\n- The motivation to do better relational reasoning is clear and the network suggested in this paper succeeds to achieve it in the challenging tasks.\n\nCons\n- The recurrent relational networks is basically a complex learned message passing algorithm. As the authors themselves state there are several works from recent years which also tackle this (one missing reference is Deeply Learning the Messages in Message Passing Inference of Lin et al from NIPS 2016). It would been interesting to compare results to these algorithms.\n- For the Sudoku the proposed architecture of the network seems a bit to complex, for example why do a 16 embedding is needed for representing a digit between 0-9? Some other choices (batch size of 252) seem very specific.",
"This paper describes a method called relational network to add relational reasoning capacity to deep neural networks. The previous approach can only perform a single step of relational reasoning, and was evaluated on problems that require at most three steps. The current method address the scalability issue and can solve tasks with orders of magnitude more steps of reasoning. The proposed methods are evaluated on two problems, Sudoku and Babi, and achieved state-of-the-art results. \n\nThe proposed method should be better explained. What’s the precise definition of interface? It’s claimed that other constraint propagation-based methods can solve Sudoku problems easily, but don’t respect the interface. It is hard to appreciate without a precise definition of interface. The proposed recurrent relational networks are only defined informally. A definition of the model as well as related algorithms should be defined more formally. \n\n\n\n",
"Thank you for the review. \n\n> Overall the paper is well written and structured. It also addresses an important research question: combining relational reasoning and neural networks is currently receiving a lot of attention, in particular when generally considering the question of bridging sub-symbolic and symbolic methods.\n\nAnswer: Thank you. \n\n> Unfortunately, it is current form, the paper has two major downsides. \n\nAnswer: We get the objections put forward below. Our terminology clearly has made you expect a paper on statistical relational learning. We believe that we are solving problems that requires what we associate with relational reasoning although that it does not involve explicit “relational logical” in the first order logic sense. So in short we think it is acceptable to view reasoning in a broader sense as also done by Santoro et al. Detailed answers below.\n\n> First of all, the sudoku example does not illustrate “complex relational reasoning” as claimed in the title. The problem is encoded at a positional level where messages encoded as MLPs and LSTMs implement the constraints for Sudoko. Indeed, this allows to realise end-to-end learning but does not illustrate complex reasoning. This is also reflected in the considered QA task, which is essentially coded as a positional problem. \n\nAnswer: Clearly, there are ample opportunity for misunderstandings given how ambiguous notions such as “complex”, “relational” and “reasoning” are. For your definitions of these concepts you are right that our network does not perform it. Obviously with our definitions, we think it does, otherwise we wouldn’t have made those experiments or claims. So our definitions are almost certainly different.\n\nOur use of the term “relational reasoning” follows Santoro et al. and is to be honest quite vague. By “relational reasoning” we mean to represent the world as, and perform inference over, a set of objects and their relations. We did not intend to claim that we are performing “relational logic” in the strict first-order logic sense, e.g. with variables and quantifiers. Is this the source of the disagreement? If so, we’re happy to amend our paper to make this more clear. If not would you be kind enough to clarify your definitions of those concepts such that it is immediately obvious that our network does not perform “relational reasoning” under those definitions? Also, specifically could you clarify what you mean by “positional encoding/problem” and how that nullifies “reasoning”?\n\n> Consequently, the claim of the conclusions, namely that “we have proposed a general relational reasoning model” is not validated, unfortunately. Such a module that can be connected to any existing neural network would be great. However, for that one should show capabilities of relational logic. Some standard (noisy) reasoning capabilities such as modus ponens.\n\nAnswer: Sudoku can be formulated as a logical problem using e.g. propositional logic or first-order logic, and solved using e.g. SAT a solver. It's clearly a logical problem that requires reasoning to solve (efficiently). Are you really arguing that solving Sudoku does not require reasoning? Also, w.r.t modus ponens, the first step of our RRN eliminates digits which demonstrates (fuzzy) modus ponens in which “x implies (not y), x, thus (not y)”. It’s not exact logic, and you can’t extract the logical clauses and inspect them, as in many SRL systems, but that does not mean it’s not reasoning or useful. 
Just like human reasoning is inexact and impossible to introspect, but still reasoning and very useful.\n\n> This also leads me to the second downside. Unfortunately, the paper falls short on discussion related work. First of all, there is the large field of statistical relational learning, see\n\nAnswer: Thank you for these references. These interesting papers are trying to solve different problems than those we consider. We will discuss these and clarify this in the updated paper.",
"Thank you for the review. And thank you for the kind words regarding motivation and clarity.\n\n>Overall I think that by itself the algorithm suggested in the paper is not enough to be presented in ICLR, and on the other hand the authors didn't show it has a big impact (could do so by adding more tasks - as they suggest in the discussion). \nThis is why I think the paper is marginally below the acceptance threshold but could be convinced otherwise.\n\nAnswer: The presented algorithm is a plug-n-play neural network module for solving problems requiring complex reasoning. We clearly show how it can solve a difficult reasoning problem, Sudoku, which comparable state-of-the-art methods cannot. We also show state-of-the-art results on a very different task, BaBi, showing its general applicability. We’ve released the code (but can’t link it yet, due to double blind). Simply put, with this algorithm the community can approach a swathe of difficult reasoning problems, which they couldn’t before. As such we think it merits publication. If you’re not convinced, what additional task would make you excited about this algorithm?\n\n>Can the authors give experimental evidences for their claim: \"As such, the network could use a small part of the hidden state for retaining a current best guess, which might remain constant over several steps, and other parts of the hidden state for running a non-greedy...\" - \n\nAnswer: To clarify, we are not claiming that it does this, just that it has the capacity since the output and the hidden state is separated by an arbitrarily complex function. Since the function is arbitrarily complex it can learn to conditionally ignore parts of the hidden state, similar to how a LSTM can learn to selectively update its memory. We don’t have any experimental evidence whether it actually does this. \n\n> The recurrent relational networks is basically a complex learned message passing algorithm. As the authors themselves state there are several works from recent years which also tackle this (one missing reference is Deeply Learning the Messages in Message Passing Inference of Lin et al from NIPS 2016). It would been interesting to compare results to these algorithms.\n\nAnswer: Yes, that’s a fair point. Comparing to those methods would be interesting. We know the Lin et al.’s paper but somehow forgot to cite it. It will be added in the update. One important difference is that those works retain parts of loopy belief propagation and learn others whereas ours is completely learned.\n\n> For the Sudoku the proposed architecture of the network seems a bit too complex, for example why do a 16 embedding is needed for representing a digit between 0-9? Some other choices (batch size of 252) seem very specific.\n\nAnswer: We’re reporting all the gory details out of a (misplaced?) sense of scientific rigor, not because they are important hyper parameters. The 16 dimensional embedding was simply the first thing we tried. It’s not the result of extensive hyper-parameter tuning. We could have used a one-hot encoding, but then the next matrix multiply would effectively be the embedding. We think it’s a bit cleaner to have the embedding separately. Also using an embedding the x vector more closely resembles the expected input to the RRN from a perceptual front-end, e.g. a dense vector. The 252 batch size was simply so that the batch size was divisible by 6, because we trained on 6 GPUs.",
"Thank you for the review.\n\n> The proposed method should be better explained. What’s the precise definition of interface? It’s claimed that other constraint propagation-based methods can solve Sudoku problems easily, but don’t respect the interface. \n\nAnswer: A function respecting the interface must accept a graph (set of nodes and set of edges) where the nodes are described with real valued vectors and most importantly output a solution which is *differentiable* w.r.t. the parameters of the function.\nTraditional Sudoku solvers, e.g. constraint propagation and search are not differentiable, since they use non-differentiable operations e.g. hard memory lookups, writes, if statements, etc.\n\nWe need the function to respect this interface so it can be used with other neural network modules, and trained end-to-end. For an example see “A simple neural network module for relational reasoning” in which a Relation Network is added to a Convolutional Neural Network and trained end-to-end to reason about objects in images. Similarly with our BaBi example we combine a LSTM that reads each sentence with a Recurrent Relational Network to reason about the sentences.\n\n>It is hard to appreciate without a precise definition of interface. The proposed recurrent relational networks are only defined informally. A definition of the model as well as related algorithms should be defined more formally. \n\nAnswer: The first draft of the paper actually had a very formal introduction of the interface and algorithm, but we re-worked it to the current informal, example-driven style since we thought it was easier to understand and follow. We’ll add the rigorous definition to the appendix. We’ll let you know once we’ve updated the paper.\n"
] | [
3,
5,
5,
-1,
-1,
-1
] | [
5,
3,
3,
-1,
-1,
-1
] | [
"iclr_2018_SkJKHMW0Z",
"iclr_2018_SkJKHMW0Z",
"iclr_2018_SkJKHMW0Z",
"r17v3MDxG",
"Syc551clG",
"rJFvPvqgz"
] |
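Since the Sudoku experiments come up repeatedly in this record, the standard way of casting the puzzle as a graph (81 cell nodes, with an edge between any two cells that share a row, column, or 3x3 box) is sketched below. This is only the conventional encoding, not code from the paper.

```python
def sudoku_edges():
    """Return the undirected edges of the 81-node Sudoku constraint graph.

    Two cells are connected iff they share a row, a column, or a 3x3 box,
    so every node ends up with exactly 20 neighbours (810 edges in total).
    """
    def cell(row, col):
        return 9 * row + col

    edges = set()
    for r in range(9):
        for c in range(9):
            for k in range(9):
                if k != c:
                    edges.add(frozenset((cell(r, c), cell(r, k))))       # same row
                if k != r:
                    edges.add(frozenset((cell(r, c), cell(k, c))))       # same column
            br, bc = 3 * (r // 3), 3 * (c // 3)
            for rr in range(br, br + 3):
                for cc in range(bc, bc + 3):
                    if (rr, cc) != (r, c):
                        edges.add(frozenset((cell(r, c), cell(rr, cc))))  # same box
    return edges

assert len(sudoku_edges()) == 810
```

Each cell has exactly 20 peers, so any message-passing model of the kind discussed above operates on a fixed, sparse constraint graph rather than on the raw 9x9 grid.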
iclr_2018_ByquB-WC- | Finding ReMO (Related Memory Object): A Simple neural architecture for Text based Reasoning | Memory Network based models have shown remarkable progress on the task of relational reasoning.
Recently, a simpler yet powerful neural network module called Relation Network (RN) has been introduced.
Despite its architectural simplicity, the time complexity of the relation network grows quadratically with the data, hence limiting its application to tasks with large-scale memory.
We introduce Related Memory Network, an end-to-end neural network architecture exploiting both memory network and relation network structures.
We follow the memory network's four components, while each component operates similarly to the relation network without taking a pair of objects.
As a result, our model is as simple as RN but the computational complexity is reduced to linear time.
It achieves state-of-the-art results on the jointly trained bAbI-10k story-based question answering and bAbI dialog datasets. | rejected-papers | The contribution of this paper basically consists of using MLPs in the attention mechanism of end-to-end memory networks. Though it leads to some improvements on bAbI (which may not be so surprising - MLP attention has been shown preferable in certain scenarios), it does not seem to be a sufficient contribution. The motivation is also confusing - the work is not really that related to relation networks, which were specifically designed to deal with situations where *relations* between objects matter. The proposed architecture does not model relations.
+ improvement on bAbI over the baselines
- limited novelty (MLP attention is fairly standard)
- the presentation of the idea is confusing (if the claim is about relations -> other datasets need to be considered)
There is a consensus between reviewers. | train | [
"r1Z9q7Ygf",
"rk-hlXcez",
"SyuT1isxG",
"B1NJePCzM",
"ByV3YdaGf",
"HyCnDI6Gz",
"HyH5K8Tzf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper proposes to address the quadratic memory/time requirement of Relation Network (RN) by sequentially attending (via multiple layers) on objects and gating the object vectors with the attention weights of each layer. The proposed model obtains state of the art in bAbI story-based QA and bAbI dialog task.\n\nPros:\n- The model achieves the state of the art in bAbI QA and dialog. I think this is a significant achievement given the simplicity of the model.\n- The paper is clearly written.\n\nCons:\n- I am not sure what is novel in the proposed model. While the authors use notations used in Relation Network (e.g. 'g'), I don't see any relevance to Relation Network. Rather, this exactly resembles End-to-end memory network (MemN2N) and GMemN2N. Please tell me if I am missing something, but I am not sure of the contribution of the paper. Of course, I notice that there are small architectural differences, but if these are responsible for the improvements, I believe the authors should have conducted ablation study or qualitative analysis that show that the small tweaks are meaningful.\n \nQuestion:\n- What is the exact contribution of the paper with respect to MemN2N and GMemN2N?",
"This paper introduces Related Memory Network (RMN), an improvement over Relationship Networks (RN). RMN avoids growing the relationship time complexity as suffered by RN (Santoro et. Al 2017). RMN reduces the complexity to linear time for the bAbi dataset. RN constructs pair-wise interactions between objects in RN to solve complex tasks such as transitive reasoning. RMN instead uses a multi-hop attention over objects followed by an MLP to learn relationships in linear time.\n\nComments for the author:\n\nThe paper addresses an important problem since understanding object interactions are crucial for reasoning. However, how widespread is this problem across other models or are you simply addressing a point problem for RN? For example, Entnet is able to reason as the input is fed in and the decoding costs are low. Likewise, other graph-based networks (which although may require strong supervision) are able to decode quite cheaply. \n\nThe relationship network considers all pair-wise interactions that are replaced by a two-hop attention mechanism (and an MLP). It would not be fair to claim superiority over RN since you only evaluate on bABi while RN also demonstrated results on other tasks. For more complex tasks (even over just text), it is necessary to show that you outperform RN w/o considering all objects in a pairwise fashion. More specifically, RN uses an MLP over pair-wise interactions, does that allow it to model more complex interactions than just selecting two hops to generate attention weights. Showing results with multiple hops (1,2,..) would be useful here.\n\nMore details are needed about Figure 3. Is this on bAbi as well? How did you generate these stories with so many sentences? Another clarification is the bAbi performance over Entnet which claims to solve all tasks. Your results show 4 failed tasks, is this your reproduction of Entnet?\n\nFinally, what are the savings from reducing this time complexity? Some wall clock time results or FLOPs of train/test time should be provided since you use multiple hops.\n\nOverall, this paper feels like a small improvement over RN. Without experiments over other datasets and wall clock time results, it is hard to appreciate the significance of this improvement. One direction to strengthen this paper is to examine if RMN can do better than pair-wise interactions (and other baselines) for more complex reasoning tasks.\n\n",
"This paper proposes an alternative to the relation network architecture whose computational complexity is linear in the number of objects present in the input. The model achieves good results on bAbI compared to memory networks and the relation network model. From what I understood, it works by computing a weighted average of sentence representations in the input story where the attention weights are the output of an MLP whose input is just a sentence and question (not two sentences and a question). This average is then fed to a softmax layer for answer prediction. I found it difficult to understand how the model is related to relation networks, since it no longer scores every combination of objects (or, in the case of bAbI, sentences), which is the fundamental idea behind relation networks. Why is the approach not evaluated on CLEVR, in which the interaction between two objects is perhaps more critical (and was the main result of the original relation networks paper)? The fact that the model works well on bAbI despite its simplicity is interesting, but it feels like the paper is framed to suggest that object-object interactions are not necessary to explicitly model, which I can't agree with based solely on bAbI experiments. I'd encourage the authors to do a more detailed experimental study with more tasks, but I can't recommend this paper's acceptance in its current form.\n\nother questions / comments:\n- \"we use MLP to produce the attention weight without any extrinsic computation between the input sentence and the question.\" isn't this statement false because the attention computation takes as input the concatenation of the question and sentence representation?\n- writing could be cleaned up for spelling / grammar (e.g., \"last 70 stories\" instead of \"last 70 sentences\"), currently the paper is very hard to read and it took me a while to understand the model",
"Thank you for your review. Based on the points you mentioned, I revised the paper.\nBelow is your review and my answer to that.\n\n\"I found it difficult to understand how the model is related to relation networks, since it no longer scores every combination of objects (or, in the case of bAbI, sentences), which is the fundamental idea behind relation networks.”\n— Our response ))\nIn the past, this point has not been clarified, so we have revised paper to emphasize on how RMN is related to relation network.\nOur model is a new text-based reasoning model based on the Memory Network framework.\nIn text-based reasoning, the most important thing is to select supporting sentences from large memory, which is performed through attention mechanism. \nWe found that the performance increases with more complex attention mechanisms.\nAs RN is one of the models that reasons well, we analyzed RN from the perspective of Memory Network.\nWe found out that the g of RN examines the relatedness of object pair and question very well.\nMotivated from it, we also used the MLP to focus on the supporting sentences examined from the relatedness of object and question in the memory network framework.\nAs a result, we were motivated by the fact that MLP was effective to examine the relatedness rather than the modeling structure of the RN that use object pair combination.\n\n\n\"Why is the approach not evaluated on CLEVR, in which the interaction between two objects is perhaps more critical (and was the main result of the original relation networks paper)?”\n— Our response ))\nThis is because our model is a new model for text-based reasoning based on Memory Network. \nI also thought about evaluating our model on images.\nHowever, since it is Memory Network based reasoning model, I wanted to verify the performance of the model for text, first.\n\n\n\"I'd encourage the authors to do a more detailed experimental study with more tasks.\"\n— Our response ))\nWe added the experimental results of the RN to the bAbI dialog-based dataset and discussed it on the paper.\nIn addition, we compared training time and performance of RN to our model in a large memory condition.\n\n\n\n“ \"we use MLP to produce the attention weight without any extrinsic computation between the input sentence and the question.\" isn't this statement false because the attention computation takes as input the concatenation of the question and sentence representation?\"\n— Our response ))\nExtrinsic attention computation refers to inner product and absolute difference performed by MemN2N, GMemN2N, DMN+ when relatedness of question and sentence is calculated.\nOn the other hand, there is no computation conducted in RN and RMN because they use simple concatenation.\n\n\n\"writing could be cleaned up for spelling / grammar (e.g., \"last 70 stories\" instead of \"last 70 sentences”)\"\n— Our response ))\nI have reviewed a number of times but have not been able to catch them.\nThank you for pointing out and I removed such content from the new paper.",
"We thank the reviewer for the points of clarification and correction.\nWe have modified the paper to address these points, and include detailed answers about each question below.\n\n\"how widespread is this problem across other models or are you simply addressing a point problem for RN?”\n\n— Our response ))\nIt seems to have asked this question because the issue of the submitted paper was unclear (We revised it to be clear) .\nIn fact, our paper suggests a new framework suitable for text-based reasoning rather than solving the problems of RN.\nWhile suggesting RMN, it shows the possibility of replacing the pair-wise interaction of RN.\n\n\n\"It would not be fair to claim superiority over RN since you only evaluate on bAbI while RN also demonstrated results on other tasks. For more complex tasks (even over just text), it is necessary to show that you outperform RN w/o considering all objects in a pairwise fashion.”\n\n— Our response ))\nAs RMN is a new framework for text-based reasoning, we didn’t perform additional experiment over text, like image.\nRather, we conducted experiments on bAbI dialog-based QA dataset for rich discussion.\n\n\n\"RN uses an MLP over pair-wise interactions, does that allow it to model more complex interactions than just selecting two hops to generate attention weights. Showing results with multiple hops (1,2,..) would be useful here.\"\n\n— Our response )\nI’m not sure that I understood you comment as you intended to.\nPlease let me know if my response is not enough to this question.\nYou said that RN is allowed to model more complex interactions than just two hops, however, at least in text-based QA dataset, it is revealed not always true.\nIf we take a closer look at our model, RMN is also able to model complex interactions.\nWhen hop 1 result, r1, is concatenated with updated memory, it is similar to the object pair that RN is dealing with.\nTherefore RMN is able to handle complicate interaction as much as RN is.\n\nAlso we add the result with multiple hops and it reveals that the number of hops is correlated with the number of relation.\n\n\n\"More details are needed about Figure 3. Is this on bAbi as well? How did you generate these stories with so many sentences?\"\n\n— Our response ))\nSorry for the unclear description about Figure 3.\nIt was tested on the bAbI story based QA dataset because it has 320 sentences on one story at maximum.\nTraditionally, Memory Network based models test their performance on 130 sentences at maximum for task 3 and 70 sentences at maximum for the others.\nRN’s experiment was a special case that tested on 20 sentences.\nHowever our revised paper does not include this figure anymore.\n\n\n\"Some wall clock time results or FLOPs of train/test time should be provided since you use multiple hops. what are the savings from reducing this time complexity?\"\n\n— Our response ))\nWe changed our comparison with RN to model accuracy and training time when memory size is large and small.\nOur reduction in the time complexity leads to shorter training time than RN when memory size is large.\nIn addition, when memory is large, RN’s reasoning ability is decreased while RMN still shows good reasoning ability.\n\n\n\"Another clarification is the bAbI performance over Entnet which claims to solve all tasks.”\n\n— Our response ))\nIncluding most of other models, such as MemN2N, DMN+, and RN, we also conducted experiment in jointly rather than task-wise. 
On the other hand, EntNet's results which claims to solve all tasks, are the results of task-wise condition. For fair comparison, we used the jointly trained results of the EntNet which is described in the Appendix of EntNet paper.\nTo make clear, I add this to footnote.",
"We’ve uploaded a new version of the paper that addresses much of the reviewers’ comments and questions. \nAlso we included new experiments that throws a light on the modeling effect of RMN. \nThe additional experiments are as follows.\n1) RN's result on bAbI dialog based QA dataset.\n2) RN's result on bAbI story based QA dataset.\n3) Ablation study on RMN where attention mechanism is changed.\n4) Result of RN and RMN where memory size is varied.\n5) Result of RMN according to the number of hops\n\nThe results show that RMN is better at text-based reasoning compared to MemN2N, GMemN2N, and other Memory Network based models.\nIn addition, when compared to Relation Network, RMN's strong reasoning ability is revealed when memory size in large.\nWe also found the correlation between the number of hops and the number of relations.\n\nComments from reviewers have helped to clarify the paper, and we have revised it to talk more clearly about the contribution of our model.",
"Thank you for your review. You raise a good point that help us clarify and improve the paper. \nIt took us quite a long time to do some additional experiments on the point you pointed out.\n\n\n\"While the authors use notations used in Relation Network (e.g. 'g'), I don't see any relevance to Relation Network. Rather, this exactly resembles End-to-end memory network (MemN2N) and GMemN2N.\"\n\n--- Our response))\nSince the proposed model (RMN) follows the framework of Memory Network, it has a similar structure to MemN2N and GMemN2N which are also Memory Network based models.\nThe reason for mentioning RN is that the MLP-based attention mechanism of the RMN is motivated by the RN's g.\nWhen analyzing the structure of the RN from the viewpoint of the memory network, it can be seen that the g of the RN plays the same role as the output feature map which takes charge of the attention mechanism among the components of the memory network.\nWe re-described this on the paper to clarify and added a table comparing MemN2N, RN, and RMN. \n\n\"What is the exact contribution of the paper with respect to MemN2N and GMemN2N?\"\n\n--- Our response))\nWe think it is the most critical question, and I acknowledge that I have written the paper unclear.\nWe have revised the paper so that we can answer this question.\n\nFor your question, I would like to say that all models based on Memory Network (MemN2N, GMemN2N etc) show small differences.\nFor example, GMemN2N only added gate operation compared to MemN2N.\nIn this respect, our model’s contribution is the MLP-based attention mechanism.\nWe thought that the reasoning ability of the model depends on how well the relevant sentence are found in the memory.\nThe performance of MemN2N, GMemN2N, and DMN + was improved in the order of the attention mechanism becoming complex.\nTherefore, RMN is designed to have an overall simple structure while having MLP-based attention mechanism to catch the complicate relation.\nTo validate this effect, we added model analysis as a subsection to the discussion and conducted an ablation study to compare the results according to the approach of the attention mechanism.\nAlso, we designed an updating component that fits well on our attention component. "
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByquB-WC-",
"iclr_2018_ByquB-WC-",
"iclr_2018_ByquB-WC-",
"SyuT1isxG",
"rk-hlXcez",
"iclr_2018_ByquB-WC-",
"r1Z9q7Ygf"
] |
iclr_2018_rJBwoM-Cb | Neural Tree Transducers for Tree to Tree Learning | We introduce a novel approach to tree-to-tree learning, the neural tree transducer (NTT), a top-down depth first context-sensitive tree decoder, which is paired with recursive neural encoders. Our method works purely on tree-to-tree manipulations rather than sequence-to-tree or tree-to-sequence and is able to encode and decode multiple depth trees. We compare our method to sequence-to-sequence models applied to serializations of the trees and show that our method outperforms previous methods for tree-to-tree transduction. | rejected-papers | The proposed neural tree transduction framework is basically a combination of tree encoding and tree decoding. The tree encoding component is simply reused from previous work (TreeLSTM) whereas the decoding component is somewhat different from the previous work. The key problems (acknowledged also by at least 2 reviewers):
Pros:
-- generating trees is an under-explored direction (note that it is more general than parsing, as nodes may not directly correspond to input symbols)
Cons:
-- no comparison with previous tree-decoding work
-- only artificial experiments
-- the paper is hard to read (confusing) / mathematical notation and terminology are confusing and sometimes seem inaccurate (see R3)
| train | [
"B1ueBCKeM",
"B1ISgaRez",
"B1BFRS7ZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a neural tree decoder architecture for binary trees that conditions the next node prediction on \nrepresentations of its ascendants (encoded with an LSTM recurrent net) and left sibling subtree (encoded with a binary LSTM recursive net) for right sibling nodes. \nTo perform tree to tree transduction the input tree is encoded as a vector with a Tree LSTM; correspondences between input and output subtrees are not modelled directly (using e.g. attention) as is done in traditional tree transducers. \nWhile the term context-sensitive should be used with caution, I do accept the claim here, although the notation used does not make the exposition clear. \nExperimental results show that the architecture performs better at synthetic tree transduction tasks (relabeling, reordering, deletion) than sequence-to-sequence baselines. \n\nWhile neural approches to tree-to-tree transduction is an understudied problem, the contributions of this paper are very narrow and it is not shown that the proposed approach will generalize to more expressive models or real-world applications of tree-to-tree transduction. \nExisting neural tree decoders, such as Dong and Lapata or Alvarex-Melis and Jaakkola, could be combined with tree LSTM encoders without any technical innovations and could possibly do as well as the proposed model for the transduction tasks tested - no experiments are performed with existing tree-based decoder architectures. \n\nSpecific comments per section:\n\n1. Unclear what is meant be \"equivalent\" in first paragraph. \n2. The model does not assign an explicit probability to the tree structure - rather it seems to rely on the distinction between terminal and non-terimal symbols and the restriction to binary trees to know when closing brackets are implied - this is not made clear, and a general model should not have this restriction, as there are many cases where we want to generate non-binary trees.\nThe production rule notation used is incorrect and confusing, mixing sets with non-terminals and terminal symbols: \nA better notation for the rules in 2.1.1 would be something like S -> P | v | \\epsilon; P -> Q R | Q u | u Q | u w, where P, Q, R \\in O and u, w \\in v.\n2.1.2. Splitting production rules as ->_left, ->_right is not standard notation. Rather introduce intermediate non-terminals in the grammar:\nO -> O_L O_R; O_L -> a | Q, O_R -> b | Q. \n2.1.3 The context-sensitively here arise when conditioning on the entire left sibling subtree (not just the top non-terimal).\nThe rules should have a format such as O -> O_L O_R; O_L -> a | Q; \\alpha O_R -> \\alpha a | \\alpha Q, where \\alpha is an entire subtree rooted at O_L.\n2.1.4 Should be g(x|.) = exp( ), the softmax function includes the normalization which is done in the equation below. \n\n3. Note that is is possible to restrict the decoder to produce tree structures while keeping a sequential neural architecture. For some tasks sequential decoders do actually produce mostly well-formed trees, given enough training data. \nRNNG encodes completed subtrees recursively, and the stack LSTM encodes the entire partially-produced tree, so it does produce and condition on trees not just sequences. The model in this paper is not more expressive than RNNG, it just encodes somewhat different structural biases, which might or might not be suited for real tasks. \n\n4. In the examples given, the same set of symbols are used as both terminals and non-terminals. 
How is the tree structure then predicted by the decoder?\nDetails about the training setup are missing: How is the training data generated, what is the size of the trees during training (compared to testing)?\n4.2 The steep drop in performance between depth 5 and 6 indicates that model is very sensitive to its memorization capacity and might not be generalizing over the given training data.\nFor real tree-to-tree applications involving these operations, there is good reason to believe that some kind of attention mechanism will be needed over the input tree during decoding. \n\nReference should generally be to published proceedings rather than to arxiv where available - e.g. Aharoni and Goldberg, Dong and Lapata, Erguchi et al, Rush et al. For Graehl and Knight there is a published journal paper in Computational Linguistics.",
"The authors propose to tackle the tree transduction learning problem using recursive NN architectures: the prediction of a node label is conditioned on the ancestors sequence and the nodes in the left sibling subtree (in a serialized order)\nPros:\n- they identify the issue of locality as important (sequential serialization distorts locality) and they move the architecture closer to the tree structure of the problem\n- the architecture proposed moves the bar forward in the tree processing field\nCons: \n- there is still a serialization step (depth first) that can potentially create sharp dips to null probabilities for marginal changes in the conditioning sequence (the issue is not addressed or commented by the authors) \n- the experimental setup lacks a perturbation test: rather than a copy task, it would be of greater interest to assess the capacity to recover from noise in the labels (as the noise magnitude increases)\n- a clearer and more articulated comparison of the pros/cons w.r.t. competitive architectures would improve the quality of the work: what are the properties (depth, vocabulary size, complexity of the underlying generative process, etc) that are best dealt with by the proposed approach? \n- it is not clear if the is the vocabulary size in their model needs to increase exponentially with the tree depth: a crucial vocabulary size vs performance experiment is missing\n",
"There may be some interesting ideas here, but I think in many places the mathematical\ndescription is very confusing and/or flawed. To give some examples:\n\n* Just before section 2.1.1, P(T) = \\prod_{p \\in Path(T)} ... : it's not clear \nat all clear that this defines a valid distribution over trees. There is an\nimplicit order over the paths in Path(T) that is simply not defined (otherwise\nhow for x^p could we decide which symbols x^1 ... x^{p-1} to condition\nupon?)\n\n* \"We can write S -> O | v | \\epsilon...\" with S, O and v defined as sets.\nThis is certainly non-standard notation, more explanation is needed.\n\n* \"The observation is generated by the sequence of left most \nproduction rules\". This appears to be related to the idea of left-most\nderivations in context-free grammars. But no discussion is given, and\nthe writing is again vague/imprecise.\n\n* \"Although the above grammar is not, in general, context free\" - I'm not\nsure what is being referred to here. Are the authors referring to the underlying grammar,\nor the lack of independence assumptions in the model? The grammar\nis clearly context-free; the lack of independence assumptions is a separate\nissue.\n\n* \"In a probabilistic context-free grammar (PCFG), all production rules are\nindependent\": this is not an accurate statement, it's not clear what is meant\nby production rules being independent. More accurate would be to say that\nthe choice of rule is conditionally independent of all other information \nearlier in the derivation, once the non-terminal being expanded is\nconditioned upon.\n\n"
] | [
3,
7,
2
] | [
4,
4,
5
] | [
"iclr_2018_rJBwoM-Cb",
"iclr_2018_rJBwoM-Cb",
"iclr_2018_rJBwoM-Cb"
] |
iclr_2018_S1sRrN-CW | Revisiting Knowledge Base Embedding as Tensor Decomposition | We study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks---neural KB embedding and tensor decomposition. In this work, we theoretically analyze the neural embedding framework and subsequently connect it with tensor based embedding. Specifically, we show that in neural KB embedding the two commonly adopted optimization solutions---margin-based and negative sampling losses---are closely related to each other. We also reach the closed-form tensor that is implicitly approximated by popular neural KB approaches, revealing the underlying connection between neural and tensor based KB embedding models. Grounded in the theoretical results, we further present a tensor decomposition based framework KBTD to directly approximate the derived closed form tensor. Under this framework, the neural KB embedding models, such as NTN, TransE, Bilinear, and DISTMULT, are unified into a general tensor optimization architecture. Finally, we conduct experiments on the link prediction task in WordNet and Freebase, empirically demonstrating the effectiveness of the KBTD framework.
 | rejected-papers | The reviewers are not convinced by a number of aspects, including originality and clarity. Whereas the assessment of clarity and originality may be somewhat subjective (though the connections between margin-based loss and negative sampling are indeed well known), it is pretty clear that the evaluation is very questionable. This is not so much about the existence of more powerful factorizations (e.g., ConvE / HolE) but the fact that the shown baselines (e.g., DistMult) can be tuned to yield much better performance on these benchmarks. Also, the authors should indeed report results on cleaned versions of the datasets (e.g., FB15k-237). Overall, there is a consensus that the work is not ready for publication.
Pros:
-- In principle, new insights into commonly used methods would have been very interesting
Cons:
-- Evaluation is highly problematic
-- At least some results do not seem so novel / interesting; there are questions about the rest (e.g., assumptions)
-- The main advantage of squared-loss methods is that they enable the alternating least squares algorithm, which does not seem possible here (at least it is not shown)
"BytzlNjez",
"SyeeEtTef",
"BkwNvgRgf",
"r1S26N5lG",
"BkU9F4qlf",
"HJhXGX8AW",
"HkjsLOd0W",
"Hkw2SfERW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"public",
"public"
] | [
"The paper proposes a unified view of multiple methods for learning knowledge base embeddings.\n\nThe paper's motivations are interesting but the execution does fit standard for a publication at ICLR.\nMain reasons:\n* Section 3 does not bring much value. It is a rewriting trick that many knew but never thought of publishing\n* Section 4.1 is either incorrect or clearly misleading. What happens to the summation terms related to the negative samples (o~=o' and s!=s') between the last equation and the 2 before that (on the expectations) at the bottom of page 4? They vanished while they are depending on the single triple (s, r, o), no?\n* The independence assumption at the top of page 5 is indeed clearly too strong in the case of multi-relational graphs, where triples are all interconnected.\n* In 4.2, writing that both RESCAL and KBTD explain a RDF triple through a similar latent form is not an observation that could explain intrinsic similarities between the methods but the direct consequence of the deliberate choice made for f(.) at the line before.\n* The experiments are hard to use to validate the model because they are based on really outdated baselines. Most methods in Table 4 and 5 are performing well under their best known performance.\n\n",
"This paper deals with the problem of representation learning from knowledge bases (KB), given in form of subject-relationship-object triplets. The paper has two main contributions: (1) Showing that two commonly used loss functions, margin-based and negative sampling-based, are closely related to each other; and (2) many of the KB embedding approaches can be reduced to a tensor decomposition problem where the entries in the tensor are a certain transformation of the original triplets values. \n\nContribution (1) related to the connection between margin-based and negative sampling-based loss functions is sort of obvious in hindsight and I am not sure if it has been not recognized in prior work (I'm not very well-versed in this area). Regardless, even though this connection is moderately interesting, I am not sure of its practical usefulness. I would like the authors to comment on this aspect.\n\nContribution (2) that shows that KB embedding approaches based on some of the popularly used loss functions such as margin-based or negative sampling can be cast as tensor factorization of a certain transformation of the original data is also interesting. However, similar connections have been studied for word-embedding methods. For example, prior work has shown that word embedding methods that optimize loss functions such as negative sampling can be seen as doing implicit matrix factorization of a transformed version of the word-counts. Therefore contribution (2) seems similar in spirit to this line of work.\n\nOverall, the paper does have some interesting insights but it is unclear if these insights are non-trivial/surprising, and are of that much practical utility. I would like to authors to respond to these concerns.",
"The paper proposes a new method to train knowledge base embeddings using a least-squares loss. For this purpose, the paper introduces a reweighting scheme of the entries in the original adjacency tensor. The reweighting is derived from an analysis of the cross-entropy loss. In addition, the paper discusses the connections of the margin and cross-entropy loss and evaluates the proposed method on WN18 and FB15k.\n\n The paper tackles an interesting problem, as learning from knowledge bases via embedding methods has become increasingly important for tasks such as question answering. Providing additional insight into current methods can be an important contribution to advance the state-of-the-art.\n\nHowever, I'm concerned about several aspects in the current form of the paper. For instance, the derivation in Section 4 is unclear to me, as eq.4 suddenly introduces a weighted sum over expectations using the degrees of nodes. The derivation also seems to rely on a very specific negative sampling assumption (uniform sampling without checking whether the corrupted triple is a true negative). This sampling method isn't used consistently across models and also brings its own problems, e.g., see the LCWA discussion in [4]\n\nIn addition, the semantics that are introduced by the weighting scheme are not clear to me either. Using the proposed method, the probability of edges between high-degree nodes are down-weighted, since the ground-truth labels are divided by the node degrees. Since these weighted labels are then fitted using a least-squares loss, this implies that links between high-degree nodes should be less likely, which seems the opposite of what the scores should look like.\n\nWith regard to the significance of the contributions: Using a least-squares loss in combination with tensor methods is attractive because it enables ALS algorithms with closed-form updates that can be computed very fast. However, the proposed method still relies on SGD optimization. In this context, it is not clear to me why a tensor framework/least-squares loss would be preferable.\n\nFurther comments:\n- The paper seems to equate \"tensor method\" with using a least squares loss. However, this doesn't have to be the case. For instance see [1,2] which propose Logistic and Poisson tensor factorizations, respectively.\n- The distinction between tensor factorization and neural methods is unclear. Tensor factorization can be interpreted just as a particular scoring function. For instance, see [5] for a detailed discussion.\n- The margin based ranking loss has been proposed earlier than in (Collobert et al, 2011). For instance see [3]\n- p1: corrupted triples are not described entirely correct, typically only one of s or o is corrputed. \n- Closed-form tensor in Table 1: This should be least-squares loss of f(s,p,o) and log(...)?\n- p6: Adding the constant to the tensor as proposed in (Levy & Goldberg, 2014) can done while gathering the minibatch and is therefore equivalent to the proposed approach.\n\n[1] Nickel et al: Logistic Tensor Factorization for Multi-Relational Data, 2013.\n[2] Chi et al: \"On tensors, sparsity, and nonnegative factorizations\", 2012\n[3] Collobert et al: A unified architecture for natural language processing, 2008\n[4] Dong et al: Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion, 2014\n[5] Nickel et al: A Review of Relational Machine Learning for Knowledge Graphs, 2016.",
"Your baseline models are old and maybe easy to beat. So I do not know your approach is good or not when comparing with strong baselines.",
"Toutanova et al. [1] firstly showed the problem, but the authors of ConvE firstly showed SOTA results on FB15k and WN18 by using a simple reversal rule. It's fine if we don't know this. Otherwise, the question is why we use the models that are outperformed by this simple reversal rule on FB15k and WN18???\nFB15k-237 and WN18RR should become the main datasets for the link prediction task.",
"Dear Readers:\n\nWe really appreciate your comments.\n\nFor your first suggestion, we certainly noticed there are many other methods that generated superior performance on WN18 and FB15k, and we also mentioned the ProjE paper in our related work. However, the purpose of this paper is not to design the state-of-the-art methods, and we did not propose any new scoring functions. Instead, we provided the theoretical analysis on a few popular methods, and revealed that all the mentioned methods could be framed into a tensor decomposition framework. For the experiments, we are still using the same scoring functions as proposed in the mentioned papers. The purpose of the experiments is to achieve comparable results under this tensor decomposition framework. Our framework is flexible about various scoring functions. \n\nFor your second suggestion, yes, we also noticed that there is FB15k-237 dataset, and thank you for pointing us the WN18RR dataset. The reason we stick with the original FB15k and WN18 datasets is for fair comparison since most of the above methods used these two datasets in their experiments. Again, we do not aim to beat one certain score function in a certain dataset. Instead, we want to generalize the existing KB embedding models, and further help the research community understand these models. We are open to work on more datasets in our future experiments though.\n\nHope the response above clarified your questions.\n",
"I'm not an author but felt like responding to the first comment. I agree with the statement that the paper is missing SOTA results. With some minor tuning of TransE and DistMult one can achieve *much* better numbers than the one reported in the paper. For instance, on FB15k it is possible to get up to 90 hits@10 with DistMult and 76.x with TransE. That's clearly a weak point of the paper irrespective of the theoretical analysis which might or might not be insightful. I haven't looked at this part of the paper at all.\n\nI disagree with the comment on FB15k and WN18. First, the \"problems\" with FB15k and WN18 weren't first mentioned by the ConvE authors but much earlier by Toutanova et al [1]. The identified \"problems\" relate to the existence of reverse relations that allow simple baselines to perform better than most (at the time) existing KB embedding methods. For example, the existence of (A, parentOf, B) predicts with high probability the relation (B, childOf, A). However, I disagree with the conclusion that these two data sets should not be used anymore. There are still plenty of challenging completion queries in these data sets. Also, FB15k is a data set derived from a human-designed and populated knowledge base. It is somewhat absurd to now exclusively use artificially created KBs to evaluate KB completion methods. FB15k-237 and WN18RR are data sets that should be included in addition to FB15k and WN18.\n\n\n[1] Observed Versus Latent Features for Knowledge Base and Text Inference. Kristina Toutanova and Danqi Chen.",
"As shown in papers such as ProjE: Embedding Projection for Knowledge Graph Completion, Knowledge Base Completion: Baselines Strike Back and Convolutional 2D Knowledge Graph Embeddings, your results are not state-of-the-art results on WN18 and FB15k. You should mention other high published results in your paper.\nThe authors in the paper Convolutional 2D Knowledge Graph Embeddings analyzed and concluded that future research on knowledge base completion should not use WN18 and FB15k anymore. You should do experiments on WN18RR and FB15k-237 datasets."
] | [
3,
5,
3,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1sRrN-CW",
"iclr_2018_S1sRrN-CW",
"iclr_2018_S1sRrN-CW",
"HJhXGX8AW",
"HkjsLOd0W",
"Hkw2SfERW",
"Hkw2SfERW",
"iclr_2018_S1sRrN-CW"
] |
iclr_2018_SJ71VXZAZ | Learning To Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment. | rejected-papers | The paper reports experiments where a LSTM language model is pretrained on a large corpus of reviews, and then the produced representation is used within a classifier on a number of sentiment classification datasets. The relative success of the method is not surprising. The novelty is very questionable, the writing quality is mixed (e.g., typos, the model is not even properly described). There are many gaps in evaluation (e.g., from the intro it seems that the main focus is showing that byte level modeling is preferable to more standard set-ups -- characters / BPE / words). However, there are (almost) no experiments supporting this claim. The same is true for the 'sentiment neuron': its effectiveness is also not properly demonstrated. In general, the results are somewhat mixed.
Pros:
-- good results on some datasets
Cons:
-- limited novelty
-- some claims are not tested / issues with evaluation
-- writing quality is not sufficient / clarity issues
Overall, the reviewers are in agreement that the paper does not meet ICLR standards.
| val | [
"SJXeNaYlM",
"Sk7lrK5ez",
"ryrN2K9gf",
"HJABp-xyM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"The authors propose to use a byte level RNN to classify reviews. In the meantime, they learn to generate reviews. The authors rely on the multiplicative LSTM proposed by Krause et al. 2016, a generative model predicting the next byte. They apply this architecture on the same task as the original article: document classification; they use a logistic regression on the extracted representation. The authors propose an evaluation on classical datasets and compare themselves to the state of the art.\nThe authors obtain interesting results on several datasets. They also explore the core of the unsupervised architecture and discover a neuron which activation matches the sentiment target very accurately. A deeper analyze shows that this neuron is more efficient on small datasets than on larger.\nExploiting the generative capacity of the network, they play with the \"sentiment neuron\" to deform a review. Qualitative results are interesting.\n\n\n\n\nThe authors do not propose an original model and they do not describe the used model inside this publication.\n\nNor the model neither the optimized criterion is detailled: the authors present some curve mentioning \"bits per character\" but we do not know what is measured. In fact, we do not know what is given as input and what is expected at the output -some clues are given in the experimental setup, but not in the model description-.\n\nFigure 2 is very interesting: it is a very relevant way to compare authors model with the literature.\n\nUnfortunately, the unsupervised abilities of the network are not really explained: we are a little bit frustrated by section 5.\n\n==\n\nThis article is very interesting and well documented. However, according to me, the fact that it provides no model description, no model analysis, no modification of the model to improve the sentiment discovery, prevents this article from being publicized at ICLR.\n",
"First of all, I don't think I fully understand this paper, because it is difficult for me to find answers from this paper to the following questions:\n1) what is the hypothesis in this paper? Section 1 talks about lots of things, which I don't think is relevant to the central topic of this paper. But it misses the most important thing: what is THIS paper (not some other deep learning/representation problems)\n2) about section 2, regardless whether this is right place to talk about datasets, I don't understand why these two datasets. Since this paper is about generating reviews and discovering sentiment (as indicated in the paper)\n3) I got completely confused about the content in section 3 and lost my courage to read the following sections. ",
"This paper shows that an LSTM language model trained on a large corpus of Amazon product reviews can learn representations that are useful for sentiment analysis. \nGiven representations from the language model, a logistic regression classifier is trained with supervised data from the task of interest to produce the final model.\nThe authors evaluated their approach on six sentiment analysis datasets (MR, CR, SUBJ, MPQA, SST, and IMDB), and found that the proposed method is competitive with existing supervised methods. \nThe results are mixed, and they understandably are better for test datasets from similar domains to the Amazon product reviews dataset used to train the language model.\nAn interesting finding is that one of the neurons captures sentiment property and can be used to predict sentiment as a single unit.\n\nI think the main result of the paper is not surprising and does not show much beyond we can do pretraining on unlabeled datasets from a similar domain to the domain of interest. \nThis semi-supervised approach has been known to improve in the low data regime, and pretraining an expressive neural network model with a lot of unlabeled data has also been shown to help in the past.\nThere are a few unanswered questions in the paper:\n- What are the performance of the sentiment unit on other datasets (e.g., SST, MR, CR)? Is it also competitive with the full model?\n- How does this method compare to an approach that first pretrains a language model on the training set of each corpus without using the labels, and then trains a logistic regression while fixing the language model? Is the large amount of unlabeled data important to obtain good performance here? Or is similarity to the corpus of interest more important?\n- I assume that the reason to use byte LSTM is because it is cheaper than a word level LSTM. Is this correct or was there any performance issue with using the word directly?\n- More analysis on why the proposed method does well on the binary classification task of SST, but performs poorly on the fine-grained classification would be useful. If the model is capturing sentiment as is claimed by the authors, why does it only capture binary sentiment instead of a spectrum of sentiment level?\n\nThe paper is also poorly written. There are many typos (e.g., \"This advantage is also its difficulty\", \"Much previous work on language modeling has evaluated \", \"We focus in on the task\", and others) so the writing needs to be significantly improved for it to be a conference paper, preferably with some help from a native English speaker.",
"I am not an expert by any means. So I simply put my comments on the paper here, and hope someone would correct me if I was wrong at some points.\n\nI found this paper interesting because of two things. First, several things from the paper are new to me. Second, I like the way the authors make a very good story from their experiments.\n\nUnsupervised representation learning is very promising since unlabeled data are every where. But to date, supervised learning models still outperform unsupervised models. This may be explained because \"supervised approaches have clear objectives that can be directly optimized\". Meanwhile, \"unsupervised approaches rely on proxy tasks such as reconstruction, density estimation, or generation, which do not directly encourage useful representations for specific tasks.\" The paper exploits other perspectives: distributional issue and the limited capacity of current unsupervised representation learning models. Specifically, \"current generic distributed sentence representations may be very lossy - good at capturing the gist, but poor with the precise semantic or syntactic details which are critical for applications.\" This combines with the limited capacity may be the root of devil, and the authors investigate into details this point.\n\nHow? The authors first attempts to learn an unsupervised representation by training byte (character) level language modelling. Then we can use the outputs to train a sentiment analysis classifier. The authors trained their model on a very large dataset (Amazon review dataset) (the training took 1 month!)\n\nGiven a new text (paragraph, article or whatever), we simply perform some pre-processing and then feed the text into the mLSTM. Here is the interesting thing: we then get the outputs of all the output units (there are 4,096 units) and consider them as a feature vector representing the string read by the model. We turned the model into a sentiment classifier by taking a linear combination of these units, learning the weights of the combination via the available supervised data. This is new to me, indeed.\n\nWhat is next? By inspecting the relative contributions of features, they discovered a single unit within the LSTM that directly corresponds to sentiment. This is a very surprising finding, as remember that the mLSTM model is trained only to predict the next character in text.\n\nBut why is it the case? It is indeed an open question why the model recovers the concept of sentiment in such a precise way. It is pity, however, that the authors don't dig into details to have a satisfied answer!\n\nOverall I like the paper and like their interesting findings. This is a very cool work!\n\nBut I think the paper could be significantly improved in two ways:\n\n- I don't think the story written in the paper is really coherent.\n\n- The findings are interesting but a deeper investigation would satisfy readers more. So far everything is still as \"I read a very cool paper which shows that there exists a neural sentiment neuron by simply training language modeling, but I don't know why!\". "
] | [
4,
2,
4,
-1
] | [
3,
5,
5,
-1
] | [
"iclr_2018_SJ71VXZAZ",
"iclr_2018_SJ71VXZAZ",
"iclr_2018_SJ71VXZAZ",
"iclr_2018_SJ71VXZAZ"
] |
iclr_2018_S1XXq6lRW | Zero-shot Cross Language Text Classification | Labeled text classification datasets are typically only available in a few select languages. In order to train a model for e.g news categorization in a language Lt without a suitable text classification dataset there are two options. The first option is to create a new labeled dataset by hand, and the second option is to transfer label information from an existing labeled dataset in a source language Ls to the target language Lt. In this paper we propose a method for sharing label information across languages by means of a language independent text encoder. The encoder will give almost identical representations to multilingual versions of the same text. This means that labeled data in one language can be used to train a classifier that works for the rest of the languages. The encoder is trained independently of any concrete classification task and can therefore subsequently be used for any classification task. We show that it is possible to obtain good performance even in the case where only a comparable corpus of texts is available. | rejected-papers | Unfortunately, it falls short of ICLR standards -- from evaluation, novelty and clarity perspectives. The method is also not discussed in all details. | train | [
"HkGqBf5ef",
"r1-k8XqxG",
"Bk81W32lf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a language independent text encoding method for cross-language classification. The proposed approach demonstrates better performance than machine translation based classifier. \n\nThe proposed approach performs language independent common representation learning for cross-lingual text classification. Such representation learning based methods have been studied in the literature. The authors should provide a review and comparison to related methods. \n\nTechnical contribution of the paper is very limited. The approach section is too short to provide a clear presentation of the model. Some descriptions about the input text representation are actually given in the experimental section. \n\nThe proposed approach uses comparable texts across different languages to train the encoders, while using the topic information as auxiliary supervision label information. In the experiments, it shows the topics are actually fine-grained class information that are closely related to the target class categories. This makes the zero-shot learning scenario not to be very practical. With such fine-grained supervision knowledge, it is also unfair to compare to other cross-lingual methods that use much less auxiliary information. \n\nIn the experiments, it states the data are collected by “For pages with multiple categories we select one at random”. Won’t this produce false negative labels on the constructed data? How much will this affect the test performance?\n\nThe experimental results are not very convincing without empirical comparisons to the state-of-the-art cross-lingual text classification methods. ",
"The draft proposes an approach to cross-lingual text classification through the use of comparable corpora, as exemplified through the use of Wikipedia via the inter-language links. A single task is featured: the prediction of categories for the Italian Wikipedia articles. Two models are contrasted to the proposed zero-shot classification approach, a monolingual classifier and a machine translation-based model.\n\nI have a number of issues with the paper, and for these I vote strong reject. I briefly list some of these issues.\n\n1) The model brings no novelty, or to put it bluntly, it is rather simplistic. Yet, at the same time, its description is split over multiple sections and thus rather convoluted, in effect obscuring the before-mentioned over-simplicity.\n2) The experiment is also oversimplified, as it features only one target language and a comparison to an upper bound and just a single competing system.\n3) In contrast to the thin experiments and (lack of) technical novelty, the introduction & related work writeups are overdrawn and uninteresting.\n\nI am sorry to say that I have learned very little from this paper, and that in my view it does not make for a very compelling ICLR read.",
"This paper addresses the problem of learning a cross-language text categorizer with no labelled information in the target language. The suggested solution relies on learning cross-lingual embeddings, and training a classifier using labelled data in the source language only.\n\nThe idea of using cross-lingual or multilingual representations to seamlessly handle documents across languages is not terribly novel as it has been use in multilignual categorization or semantic similarity for some time. This contribution however proposes a clean separation of the multiligual encoder and classifier, as well as a good (but long) section on related prior art.\n\nOne concern is that the modelling section stays fairly high level and is hardly sufficient, for example to re-implement the models. Many design decisions (e.g. #layers, #units) are not justified. They likely result from preliminary experiments, in that case it should be said.\n\nThe main concern is that the experiments could be greatly improved. Given the extensive related work section, it is odd that no alternate model is compared to. The details on the experiments are also scarce. For example, are all accuracy results computed on the same 8k test set? If so this should be clearly stated. Why are models tested on small subsets of the available data? You have 493k Italian documents, yet the largest model uses 158k... It is unclear where many such decisions come from -- e.g. Fig 4b misses results for 1000 and 1250 dimensions and Fig 4b has nothing between 68k and 137k, precisely where a crossover happens.\n\nIn short, it feels like the paper would greatly improve from a clearer modeling description and more careful experimental design.\n\nMisc:\n- Clarify early on what \"samples\" are in your categorization context.\n- Given the data set, why use a single-label multiclass setup, rather than multilabel?\n- Table 1 caption claims an average of 2.3 articles per topic, yet for 200 topics you have 500k to 1.5M articles?\n- Clarify the use of the first 200 words in each article vs. snippets\n- Put overall caption in Figs 2-4 on top of (a), (b), otherwise references like Fig 4b are unclear."
] | [
4,
2,
3
] | [
3,
4,
4
] | [
"iclr_2018_S1XXq6lRW",
"iclr_2018_S1XXq6lRW",
"iclr_2018_S1XXq6lRW"
] |
iclr_2018_BkM27IxR- | Learning to Optimize Neural Nets | Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. | rejected-papers | The presented work is a good attempt to expand the work of Li and Malik to the high-dimensional, stochastic setting. Given the reviewer comments, I think the paper would benefit from highlighting the comparatively novel aspects, and in particular doing so earlier in the paper.
It is very important, given the nature of this work, to articulate how the hyperparameters of the learned optimizers, and of the hand-engineered optimizers are chosen. It is also important to ensure that the amount of time spent on each is roughly equal in order to facilitate an apples-to-apples comparison.
The chosen architectures are still quite small compared to today's standards. It would be informative to see how the learned optimizers compare on realistic architectures, at least to see the performance gap.
Please clarify the objective being optimized, and it would be useful to report test error.
The approach is interesting, but does not yet meet the threshold required for acceptance. | train | [
"Bynx0wHgM",
"BydHF89lz",
"Skd5kh5ef",
"HJt6cPaQz",
"HyZeqPTmM",
"HJ4qKvpmM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"[Main comments]\n\n* I would advice the authors to explain in more details in the intro\nwhat's new compared to Li & Malik (2016) and Andrychowicz et al. (2016).\nIt took me until section 3.5 to figure it out.\n\n* If I understand correctly, the only new part compared to Li & Malik (2016) is\nsection 3.5, where block-diagonal structure is imposed on the learned matrices.\nIs that correct?\n\n* In the experiments, why not comparing with Li & Malik (2016)? (i.e., without\n block-diagonal structure)\n\n* Please clarify whether the objective value shown in the plots is wrt the training\n set or the test set. Reporting the training objective value makes little\nsense to me, unless the time taken to train on MNIST is taken into account in\nthe comparison. \n\n* Please clarify what are the hyper-parameters of your meta-training algorithm\n and how you chose them.\n\nI will adjust my score based on the answer to these questions.\n\n[Other comments]\n\n* \"Given this state of affairs, perhaps it is time for us to start practicing\n what we preach and learn how to learn\"\n\nThis is in my opinion too casual for a scientific publication...\n\n* \"aim to learn what parameter values of the base-level learner are useful\n across a family of related tasks\"\n\nIf this is essentially multi-task learning, why not calling it so? \"Learning\nwhat to learn\" does not mean anything. I understand that the authors wanted to\nhave \"what\", \"which\" and \"how\" sections but this is not clear at all.\n\nWhat is a \"base-level learner\"? I think it would be useful to define it more\nprecisely early on.\n\n* I don't see the difference between what is described in Section 2.2\n (\"learning which model to learn\") and usual machine learning (searching for\nthe best hypothesis in a hypothesis class).\n\n* Typo: p captures the how -> p captures how\n\n* The L-BFGS results reported in all Figures looked suspicious to me. How do you\n explain that it converges to a an objective value that is so much worse?\nMoreover, the fact that there are huge oscillations makes me think that the\nauthors are measuring the function value during the line search rather than\nthat at the end of each iteration.\n",
"This paper proposed a reinforcement learning (RL) based method to learn an optimal optimization algorithm for training shallow neural networks. This work is an extended version of [1], aiming to address the high-dimensional problem.\n\n\n\nStrengths:\n\nThe proposed method has achieved a better convergence rate in different tasks than all other hand-engineered algorithms.\nThe proposed method has better robustess in different tasks and different batch size setting.\nThe invariant of coordinate permutation and the use of block-diagonal structure improve the efficiency of LQG.\n\n\nWeaknesses:\n\n1. Since the batch size is small in each experiment, it is hard to compare convergence rate within one epoch. More iterations should be taken and the log-scale style figure is suggested. \n\n2. In Figure 1b, L2LBGDBGD converges to a lower objective value, while the other figures are difficult to compare, the convergence value should be reported in all experiments.\n\n3. “The average recent iterate“ described in section 3.6 uses recent 3 iterations to compute the average, the reason to choose “3”, and the effectiveness of different choices should be discussed, as well as the “24” used in state features.\n\n4. Since the block-diagonal structure imposed on A_t, B_t, and F_t, how to choose a proper block size? Or how to figure out a coordinate group?\n\n5. The caption in Figure 1,3, “with 48 input and hidden units” should clarify clearly.\nThe curves of different methods are suggested to use different lines (e.g., dashed lines) to denote different algorithms rather than colors only.\n\n6. typo: sec 1 parg 5, “current iterate” -> “current iteration”.\n\n\nConclusion:\n\nSince RL based framework has been proposed in [1] by Li & Malik, this paper tends to solve the high-dimensional problem. With the new observation of invariant in coordinates permutation in neural networks, this paper imposes the block-diagonal structure in the model to reduce the complexity of LQG algorithm. Sufficient experiment results show that the proposed method has better convergence rate than [1]. But comparing to [1], this paper has limited contribution.\n\n[1]: Ke Li and Jitendra Malik. Learning to optimize. CoRR, abs/1606.01885, 2016.",
"Summary of the paper\n---------------------------\nThe paper derives a scheme for learning optimization algorithm for high-dimensional stochastic problems as the one involved in shallow neural nets training. The main motivation is to learn to optimize with the goal to design a meta-learner able to generalize across optimization problems (related to machine learning applications as learning a neural network) sharing the same properties. For this sake, the paper casts the problem into reinforcement learning framework and relies on guided policy search (GPS) to explore the space of states and actions. The states are represented by the iterates, the gradients, the objective function values, derived statistics and features, the actions are the update directions of parameters to be learned. To make the formulated problem tractable, some simplifications are introduced (the policies are restricted to gaussian distributions family, block diagonal structure is imposed on the involved parameters). The mean of the stationary non-linear policy of GPS is modeled as a recurrent network with parameters to be learned. A hatch of how to learn the overall process is presented. Finally experimental evaluations on synthetic or real datasets are conducted to show the effectiveness of the approach.\n\nComments\n-------------\n- The overall idea of the paper, learning how to optimize, is very seducing and the experimental evaluations (comparison to normal optimizers and other meta-learners) tend to conclude the proposed method is able to learn the behavior of an optimizer and to generalize to unseen problems.\n- Materials of the paper sometimes appear tedious to follow, mainly in sub-sections 3.4 and 3.5. It would be desirable to sum up the overall procedure in an algorithm. Page 5, the term $\\omega$ intervening in the definition of the policy $\\pi$ is not defined.\n- The definitions of the statistics and features (state and observation features) look highly elaborated. Can authors provide more intuition on these precise definitions? How do they impact for instance changing the time range in the definition of $\\Phi$) in the performance of the meta-learner?\n- Figures 3 and 4 illustrate some oscillations of the proposed approach. Which guarantees do we have that the algorithm will not diverge as L2LBGDBGD does? How long should be the training to ensure a good and stable convergence of the method?\n- An interesting experience to be conducted and shown is to train the meta-learner on another dataset (CIFAR for example) and to evaluate its generalization ability on the other sets to emphasize the effectiveness of the method. ",
"The following are new compared to (Li & Malik, 2016): \n\n- A partially observable formulation, which allows the use of observation features that are noisier but can be computed more efficiently than state features. Because only the observation features are used at test time, this improves the time and space efficiency of the learned algorithm. \n- Learns an optimization algorithm that works in a stochastic setting (when we have noisy gradients). \n- Introduced features so that the search is only over algorithms that are invariant to scaling of the objective functions and/or the parameters. \n- The update formula is now parameterized as a recurrent net rather than a feedforward net. \n- The block-diagonal structure on the matrices, which allows the method to scale to high-dimensional problems. \n\nAs discussed in Sect. 3.5, the block-diagonal structure is what enables us to learn an optimization algorithm for high-dimensional problems. Because the time complexity of LQG is cubic in the state dimensionality, (Li & Malik, 2016) cannot be tractably applied to the high-dimensional problems considered in our paper. \n\nThe objective values shown in the plots are computed on the training set. However, curves on the test set are similar. \n\nNote that the optimization algorithm is only (meta-)trained *once* on the problem of training on MNIST and is *not* retrained on the problems of (base-)training on TFD, CIFAR-10 and CIFAR-100. The time used for meta-training is therefore a one-time upfront cost; it is analogous to the time taken by researchers to devise a new optimization algorithm. For this reason, it does not make sense to include the time used for meta-training when comparing meta-test time performance. \n\nWe'll clarify the details on hyperparameters in the camera-ready. \n\nRegarding terminology, \"learning what to learn\" is a broader area that subsumes multi-task learning and also includes transfer learning and few-shot learning, for example. \"Learning which model to learn\" is different from the usual base-level learning because the aim is to search over hypothesis classes (model classes) rather than individual hypotheses (model parameters). Note that the use of these terms to refer to multi-task learning and hyperparameter optimization is not some sort of re-branding exercise; it is simply a reflection of how the terms \"learning to learn\" and \"meta-learning\" were used historically. For example, Thrun & Pratt's book on \"Learning of Learn\" (2012) focuses on \"learning what to learn\", and Brazdil et al.’s book on \"Metalearning\" (2008) focuses on \"learning which model to learn\". Because there has never been consensus on the precise definition of \"learning to learn\", the \"what\", \"which\" and \"how\" subsections in Sect. 2 are simply a convenient taxonomy of the diverse range of methods that all fall under the umbrella of \"learning to learn\". ",
"The coordinate group depends on the structure of the underlying optimization problem and should correspond to the set of parameters for which the particular ordering among them has little or no significance. For example, for neural nets, the parameters corresponding to the weights in the same layer should be in the same coordinate group, because their ordering can be permuted (by permuting the units above and below) without changing the function the neural net computes. \n\nThe inability to scale to high-dimensional problems was actually the main limitation of the previous work (Li & Malik, 2016) [1] – it was unclear at the time if this could be overcome (see for example the reviews of [1] at ICLR 2017). Overcoming the scalability issue therefore represents a significant contribution. ",
"Below is an intuitive explanation of the state and observation features:\n\nAverage recent iterate, gradient and objective value are the means over the three most recent iterates, gradients and objective values respectively, unless there are fewer than three iterations in total, in which case the mean is taken over the iterations that have taken place so far. \n\nThe state features consist of the following:\n- The relative change in the average recent objective value compared to five iterations ago, as of every fifth iteration in the 120 most recent iterations; intuitively, this can capture if and by how much the objective value is getting better or worse. \n- The average recent gradient normalized by the element-wise magnitude of the average recent gradient five iterations ago, as of every fifth iteration in the 125 most recent iterations. \n- The normalized absolute change in the average iterate from five iterations ago, as of every fifth iteration in the 125 most recent iterations; intuitively, this can capture the per-coordinate step sizes we used previously. \n\nSimilarly, the observation features consist of the following:\n- The relative change in the objective value compared to the previous iteration\n- The gradient normalized by the element-wise magnitude of the gradient from the previous iteration\n- The normalized absolute change in the iterate from the previous iteration\n\nThe normalization is designed so that the features are invariant to scaling of the objective function and to reparameterizations that involve scaling of the individual parameters. \n\nThe reason that the algorithm learned using the proposed approach does not diverge as L2LBGDBGD does is because the training is done under a more challenging and realistic setting, namely when the local geometries of the objective function are not known a priori. This is the setting under which the learned algorithm must operate at test time, since the geometry of an unseen objective function is unknown. This is the key difference between the proposed method and L2LBGDBGD, and more broadly, between reinforcement learning and supervised learning. L2LBGDBGD assumes the local geometry of the objective function to be known and so requires the local geometries of the objective function seen at test time to match the local geometries of one of the objective functions seen during training. Whenever this does not hold, it diverges. As a result, there is very little generalization to different objective functions. On the other hand, the proposed approach does not assume known geometry and therefore the algorithm it learns is more robust to differences in geometry at test time. \n\nIn reinforcement learning (RL) terminology, L2LBGDBGD assumes that the model/dynamics is known, whereas the proposed method assumes the model/dynamics is unknown. In the context of learning optimization algorithms, the dynamics captures what the next gradient is likely to be given the current gradient and step vector, or in other words, the local geometry of the objective function. \n\nThe reason why the algorithm learned using the proposed approach oscillates in Figs. 3 and 4 is because the batch size is reduced to 10 from 64 (which was the batch size used during meta-training), and so the gradients are noisier. Importantly, the algorithm is able to recover from the oscillations and converge to a good optimum in the end, demonstrating the robustness of the algorithm learned using the proposed approach. 
\n\nIn practice, about 10-20 iterations of the GPS algorithm are needed to obtain a good optimization algorithm. "
] | [
5,
6,
6,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_BkM27IxR-",
"iclr_2018_BkM27IxR-",
"iclr_2018_BkM27IxR-",
"Bynx0wHgM",
"BydHF89lz",
"Skd5kh5ef"
] |
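The author response above describes scale-invariant observation features: the relative change in the objective value, the gradient normalized element-wise by the magnitude of the previous gradient, and the normalized absolute change in the iterate. The sketch below is a minimal, hypothetical rendering of those three quantities, not the authors' implementation; the epsilon term and the denominator used for the iterate change are assumptions.

```python
# Minimal sketch of the scale-invariant observation features discussed above.
# Hypothetical code, not the authors' implementation; the epsilon and the
# normalization of the iterate change are assumptions.
import numpy as np

def observation_features(f_prev, f_curr, g_prev, g_curr, x_prev, x_curr, eps=1e-8):
    # Relative change in the objective value vs. the previous iteration;
    # invariant to rescaling the objective by a constant.
    rel_obj = (f_curr - f_prev) / (abs(f_prev) + eps)
    # Gradient normalized element-wise by the previous gradient's magnitude,
    # which is likewise invariant to objective scaling.
    norm_grad = g_curr / (np.abs(g_prev) + eps)
    # Normalized absolute change in the iterate vs. the previous iteration.
    norm_step = np.abs(x_curr - x_prev) / (np.abs(x_prev) + eps)
    return rel_obj, norm_grad, norm_step
```

Per the response above, such observation features are recomputed at every iteration and passed to the learned update rule, while the state features additionally aggregate longer histories of iterates, gradients, and objective values.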
iclr_2018_ByuP8yZRb | Censoring Representations with Multiple-Adversaries over Random Subspaces | Adversarial feature learning (AFL) is one of the promising ways to explicitly constrain neural networks to learn desired representations; for example, AFL could help to learn anonymized representations and thus avoid privacy issues. AFL learns such representations by training the networks to deceive an adversary that predicts the sensitive information from the network; therefore, the success of AFL relies heavily on the choice of the adversary. This paper proposes a novel design of the adversary, {\em multiple adversaries over random subspaces} (MARS), that instantiates the concept of {\em vulnerableness}. The proposed method is motivated by the assumption that deceiving an adversary could fail to give meaningful information if the adversary is easily fooled, and that an adversary relying on a single classifier suffers from this issue.
In contrast, the proposed method is designed to be less vulnerable by utilizing an ensemble of independent classifiers, where each classifier tries to predict the sensitive variables from a different {\em subset} of the representation.
The empirical validation on three user-anonymization tasks shows that our proposed method achieves state-of-the-art performance on all three datasets without significantly harming the utility of the data.
This is significant because it offers new insights into the design of the adversary, which is important for improving the performance of AFL. | rejected-papers | The reviewers tend to agree that the empirical results in this paper are good compared to the baselines. However, the paper in its current form is considered a bit too incremental. Some reviewers also suggested that additional theory could help strengthen the paper. | train | [
"B143HDlWM",
"SJtlwgqlf",
"rJNERHjlf",
"By_NEbamM",
"BJHdYMX-z",
"B1S0ipGbz",
"rJ6ncpMWM",
"HyuIqafZG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The below review addresses the first revision of the paper. The revised version does address my concerns. The fact that the paper does not come with substantial theoretical contributions/justification still stands out.\n\n---\n\nThe authors present a variant of the adversarial feature learning (AFL) approach by Edwards & Storkey. AFL aims to find a data representation that allows to construct a predictive model for target variable Y, and at the same time prevents to build a predictor for sensitive variable S. The key idea is to solve a minimax problem where the log-likelihood of a model predicting Y is maximized, and the log-likelihood of an adversarial model predicting S is minimized. The authors suggest the use of multiple adversarial models, which can be interpreted as using an ensemble model instead of a single model.\n\nThe way the log-likelihoods of the multiple adversarial models are aggregated does not yield a probability distribution as stated in Eq. 2. While there is no requirement to have a distribution here - a simple loss term is sufficient - the scale of this term differs compared to calibrated log-likelihoods coming from a single adversary. Hence, lambda in Eq. 3 may need to be chosen differently depending on the adversarial model. Without tuning lambda for each method, the empirical experiments seem unfair. This may also explain why, for example, the baseline method with one adversary effectively fails for Opp-L. A better comparison would be to plot the performance of the predictor of S against the performance of Y for varying lambdas. The area under this curve allows much better to compare the various methods.\n\nThere are little theoretical contributions. Basically, instead of a single adversarial model - e.g., a single-layer NN or a multi-layer NN - the authors propose to train multiple adversarial models on different views of the data. An alternative interpretation is to use an ensemble learner where each learner is trained on a different (overlapping) feature set. Though, there is no theoretical justification why ensemble learning is expected to better trade-off model capacity and robustness against an adversary. Tuning the architecture of the single multi-layer NN adversary might be as good?\n\nIn short, in the current experiments, the trade-off of the predictive performance and the effectiveness of obtaining anonymized representations effectively differs between the compared methods. This renders the comparison unfair. Given that there is also no theoretical argument why an ensemble approach is expected to perform better, I recommend to reject the paper.",
"- The authors propose the use of multiple adversaries over random subspaces of features in adversarial feature learning to produce censoring representations. They show that their idea is effective in reducing private information leakage, but this idea alone might not be signifcant enough as a contribution. \n\n- The idea of training multiple adversaries over random subspaces is very similar to the idea of random forests which help with variance reduction. Indeed judging from the large variance in the accuracy of predicting S in Table 1a-c for single adversaries, I suspect one of the main advantage of the current MARS method comes from variance reduction. The author also mentioned using high capacity networks as adversaries does not work well in practice in the introduction, and this could also be due to the high model variance of such high capacity networks. \n\n- The definition of S, the private information set, is not clear. There is no statement about it in the experiments section, and I assume S is the subject identity. But this makes the train-test split described in 4.1 rather odd, since there is no overlap of subjects in the train-test split. We need clarifications on these experimental details. \n\n- Judging from Figure 2 and Table 1, all the methods tested are not effective in hiding the private information S in the learned representation. Even though the proposed method works better, the prediction accuracies of S are still high. \n",
"MARS is suggested to combine multiple adversaries with different roles.\nExperiments show that it is suited to create censoring representations for increased anonymisation of data in the context of wearables.\n\nExperiments a are satisfying and show good performance when compared to other methods.\n\nIt could be made clearer how significance is tested given the frequent usage of the term.\n\nThe idea is slightly novel, and the framework otherwise state-of-the-art.\n\nThe paper is well written, but can use some proof-reading.\n\nReferencing is okay.",
"Following reviewer's comments and concerns, we have revised our paper. \n\n# Major revisions (three points)\n(1) Table 1 and Figure 2. We replace the results of each baseline by optimizing hyper-parameter $lambda$ for each baseline (following reviewer 2's concern). The results show that the proposed method still achieves performance improvements compared to baselines (w/ single adversary and w/ multiple adversaries over entire feature spaces). Please see the Table 1, Figure 2, and related paragraphs for more detail. The procedure for the hyperparameter selection is described at the end of section 4.1. \nNote that, the original version of the manuscript also shows the performance with $\\lambda$ varied for some baselines ($Adv_{0.1}, MA_{0.1}$), so this revision is not entirely new in the final manuscript. \n\n(2) Figure 3. We added analysis of the effect of variance reduction (following reviewer 1's comment). \nIn response to the comment of review 1, we compared the effect of variance reduction among baselines and the proposed method. The results show that MARS tend to give superior performance to MA although both methods have almost similar variance, suggesting that the variance-reduction is not main advantage of the proposed method. Please see the Figure 3-a for more detail. We also add the proper explanation to the beginning of section 3.1. \nWe also add analysis on relationships between accuracy of discriminators and the final performance of AFL, to clarify key factor for the success of AFL. The results support that our underlying assumption, i.e., the capacity/accuracy of the adversary is not the dominant factor. Please refer the Figure 3-b for more detail. \n\n(3) Abstract and paragraph 3--5 in Introduction. \nAs we have mentioned in the responses for reviewer's feedbacks, the primary contribution of our paper is (1) we proposed the novel design of adversary for AFL, and (2) it achieved state-of-the-art performance in several tasks related to the censoring representations. This is significant because the results shed light on the importance of the design of adversary, and gives new implications about the design. It is worth mentioning that, except our paper, all existing studies focus only on the {\\em accuracy/capacity} for designing adversaries, which is not enough for improving the performance of AFL as shown in this paper. Moreover, the task itself is essential for using the power of Deep Neural Networks in many real-world applications, as mentioned in the introduction of the manuscript. \nWe have revised abstract and paragraph 3--5 of the Introduction based on the above discussion. We hope this revision makes the contributions clearer. \n\n# Minor Revisions\n- Delete the Figure 2 in the original manuscript (since it overlaps with Table 1 to some extent). \n- Fix the grammatical and typographical mistakes\n- Add some detail about the experimental setting, following the question of review 1. ",
"In the paper, lambda is chosen differently for the different datasets, but not for the methods. However, lambda effects each method differently as it balances different terms for the various models (unless all models return calibrated probability distributions). If you have experiments for various lambdas, I would encourage you to add those to the paper. One way - without running into space limitations - would be to search for the maximal lambda (for each method separately) so that the method still meets a certain fixed performance on Y, and then report the performance for S. Similar, for example, as you fix recall and report precision. If your method still performs better, I'm more than welcome to change my vote.\n\nThe averaged likelihoods (not avg. log-likelihoods) do not form a likelihood anymore as you cannot normalize it independently of the model. Again, this is not a problem; for example, the SVM's hinge loss doesn't yield a proper log-likelihood either. The problem is that because of this, lambda has a different impact on how you trade-off the performances on Y and S. And there are good chances that this difference is significant: S seems to be easy to predict from the full representation (close to 100% accuracy for Opp-G and Opp-L). The log-likelihood term (weighted by lambda) is apparently close to 0 for the single adversary (which is also a result of the choice of lambda). I would expect that the averaged likelihood is much higher, simply because you will likely have at least one feature subset, which does not reveal S and the loss will be not zero. Hence, for the same lambda, the averaged likelihood would be higher and the optimizer has a higher incentive to change the representation.",
"Thank you for reading and commenting our paper. \n\n> The way the log-likelihoods of the multiple adversarial models are aggregated does not yield a probability distribution as stated in Eq. 2. While there is no requirement to have a distribution here - a simple loss term is sufficient - the scale of this term differs compared to calibrated log-likelihoods coming from a single adversary. Hence, lambda in Eq. 3 may need to be chosen differently depending on the adversarial model. Without tuning lambda for each method, the empirical experiments seem unfair. This may also explain why, for example, the baseline method with one adversary effectively fails for Opp-L. A better comparison would be to plot the performance of the predictor of S against the performance of Y for varying lambdas. The area under this curve allows much better to compare the various methods.\n\nFirst of all, I’m sorry for confusing you about the experimental settings. Although we have mentioned that “ The hyper-parameter is set to 1.0, 1.0, 0.1 for Opp-G, Opp-L, and USC-HAD respectively”, these are {\\em not fixed} thorough out the experiments. We tuned the $lambda$ of baselines ($Adv_{0.1}$ $MA_{0.1}$ in Table 2). It indeed increase the performances; however the proposed method still outperforms the baselines. Note that, we also tested the case where $\\lambda = {0.01, 0.1, 0.2, 1.0}$, but the results are consistent,. though we have removed it due to the tight space limitation. We’d like to clarify this points and add some experimental results with different $\\lambda$ in the final manuscript. \n\nSecondly, I’m afraid there is some misunderstanding regarding the Eq (2). It just averaging the $q_{D_k}$. As the each $q_{D_k}$ is calibrated, the averaged model is also calibrated (the sum of $q_D$ is 1.0). Although it is true that model averaging possibly tend to give smooth values and it make log-likelihood little bit differ, I don’t think it needs special treatments except testing different hyper-parameters, as mentioned above. I’m appreciate if the reviewer could clarify the concern. \n\n> Though, there is no theoretical justification why ensemble learning is expected to better trade-off model capacity and robustness against an adversary. Tuning the architecture of the single multi-layer NN adversary might be as good?\n\nAs the reviewer’s mentioned (and also described in the end of the introduction of our paper), the main contributions of this paper come from empirical validations of the superior performance of the proposed method compared to various baselines tested on various hyper parameters (e.g., Table 2). This is significant because it shade a light on new design consideration for improving the performance of the AFL framework, as mentioned in abstract and introduction of our paper. Considering the importance of the task, we believe introducing the new design consideration and the state-of-the-art method are enough contributions. \n\n\nThanks again for reading of our paper. ",
"Quick Response for AnonnReview3\n\nThank you for reading and commenting our paper. \n\n> It could be made clearer how significance is tested given the frequent usage of the term. \n\nI'm not sure I'm parsing your suggestion correctly. Are you suggesting to turn on/off the reguralization term for each epoch, or to compare the different number of adversaries? I think I could give more specific answer if you could give a bit detail about your suggestion and motivation behind it. \n\n> The paper is well written, but can use some proof-reading.\n\nWe will definitely update our manuscript with proof-reading. Also, we could give some explanation if you could give us a specific parts you are hard to understand. \n\n\nWe look forward to hearing from you regarding our submission. We would be glad to respond to any further questions and comments that you may have. ",
"Thank you for reading and commenting our paper. \n\n> The idea of training multiple adversaries over random subspaces is very similar to the idea of random forests which help with variance reduction.\n\nThank you for the valuable comment. As the reviewer's comment, MARS is similar to random forest in surface (actually both methods could be regarded as the instantiation of more general method \"random subspace methods\"), and it is possible to some parts of the success come from variance reduction property, as with random forests. \n\nHowever, we are not fully agree that the main advantage of the MARS comes from variance reduction, based on the observations: (1) In the first place, the training of the discriminator(s) are {\\em not} suffered from large variance. Specifically, in the task of predicting $s$, almost no performance differences is observed between two subsets of datasets (training and validation) throughout the training. We will add some experimental results in the Appendix parts (2) As shown in figure 4-c, the classification performance of different hyper-parameter $\\alpha$ are almost equally raise at the beginning of training. Note that, $\\alpha=0.0$ corresponds to the MA, and $\\alpha=0.8$ corresponds to the MARS in the Table 1. Since the large variance make the training slower typically, it imply that the superior of the MARS to MA seems not come from only the variance reduction parts. \n\nRather, our results suggests that the usage of random subspaces make it hard to deceive discriminators (indicated by the high variance in figure 4-d). This is intuitive results as the encoder need to deceive discriminators that have various views, rather than the single view point. This makes the encoder need to be more stronger to beat the discriminators, and make the representations more invariant to the sensitive variable $S$ (as shown in Table1). This is what we ephasized as “vulnerableness” in the current manuscript. \n\nAny way, we will definitely add some explanations about this topic (may be with making new subsection at the end of section 3). Again, thanks for the valuable reviews. \n\n\n> The definition of S, the private information set, is not clear. There is no statement about it in the experiments section, and I assume S is the subject identity. But this makes the train-test split described in 4.1 rather odd, since there is no overlap of subjects in the train-test split. We need clarifications on these experimental details. \n\nAs you assumption, the definition of $S$ is user identity. I will make it clearer this in the final manuscript. \n\nI’m sorry for confusing about the experimental settings. Actually, the performance about predicting $S$ is measured by different train-test split. Specifically, all training datasets and a part of test datasets (including new users) are used for train the evaluator $f_{eva}$, and the rest of the dataset is used for the evaluations. \n\n\n> Judging from Figure 2 and Table 1, all the methods tested are not effective in hiding the private information S in the learned representation. Even though the proposed method works better, the prediction accuracies of S are still high. \n\nThis is true that even the proposed method could not effective enough to practical usages especially if considering strong adversaries; however, always do so with science. Moreover, evaluation itself is too conservative to discuss about the practical usages because evaluator have too much labeled datasets. 
Considering the importance of the task itself, we believe that proposing the new method that achieves state-of-the-art performance on this task is enough to the contributions. We’d like to emphasise this point again at the final manuscript. "
] | [
6,
5,
6,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByuP8yZRb",
"iclr_2018_ByuP8yZRb",
"iclr_2018_ByuP8yZRb",
"iclr_2018_ByuP8yZRb",
"B1S0ipGbz",
"B143HDlWM",
"rJNERHjlf",
"SJtlwgqlf"
] |
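As a side note on the record above: the MARS adversary averages the calibrated predictions of K classifiers, each of which sees only a random subset of the representation's dimensions (the averaging referred to as Eq. (2) in the rebuttal). The snippet below is a minimal illustrative sketch of that idea, not the authors' code; the subspace-sampling convention (treating alpha as the fraction of dimensions each adversary does not see) and all names are assumptions.

```python
# Minimal sketch of multiple adversaries over random subspaces (MARS).
# Hypothetical code: each "adversary" is any callable mapping a feature
# subset to a calibrated probability vector over the sensitive variable S.
import numpy as np

rng = np.random.default_rng(0)

def sample_subspaces(dim, n_adversaries, alpha):
    # Assumption: alpha is the fraction of representation dimensions each
    # adversary does NOT see, so each subset has about (1 - alpha) * dim dims.
    size = max(1, int(round((1.0 - alpha) * dim)))
    return [rng.choice(dim, size=size, replace=False) for _ in range(n_adversaries)]

def averaged_adversary(z, adversaries, subspaces):
    # q_D(s | z) = (1/K) * sum_k q_{D_k}(s | z[subspace_k]); an average of
    # calibrated distributions is itself a distribution over S.
    probs = [adv(z[idx]) for adv, idx in zip(adversaries, subspaces)]
    return np.mean(probs, axis=0)
```

The design intuition given in the rebuttal is that the encoder must then fool classifiers holding many different views of the representation at once, which is harder than fooling a single classifier with the full view.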
iclr_2018_SJzMATlAZ | Deep Continuous Clustering | Clustering high-dimensional datasets is hard because interpoint distances become less informative in high-dimensional spaces. We present a clustering algorithm that performs nonlinear dimensionality reduction and clustering jointly. The data is embedded into a lower-dimensional space by a deep autoencoder. The autoencoder is optimized as part of the clustering process. The resulting network produces clustered data. The presented approach does not rely on prior knowledge of the number of ground-truth clusters. Joint nonlinear dimensionality reduction and clustering are formulated as optimization of a global continuous objective. We thus avoid discrete reconfigurations of the objective that characterize prior clustering algorithms. Experiments on datasets from multiple domains demonstrate that the presented algorithm outperforms state-of-the-art clustering schemes, including recent methods that use deep networks. | rejected-papers | After careful consideration, I think that this paper in its current form is just under the threshold for acceptance. Please note that I did take into account the comments, including the reviews and rebuttals, noting where arguments may be inconsistent or misleading.
The paper is a promising extension of RCC, albeit too incremental. Some suggestions that may help for the future:
1) Address the sensitivity remark of reviewer 2. If the hyperparameters were tuned on RCV1 instead of MNIST, would the results across the other datasets remain consistent?
2) Train RCC or RCC-DR end-to-end to gauge the improvement of joint optimization over alternating optimization, as this is one of the novel contributions.
3) Discuss how to automatically tune \lambda, \delta_1, and \delta_2. These may appear in the RCC paper, but it's unclear whether the same derivations hold when going to the non-linear case (they may in fact transfer gracefully; it's just not obvious). This would also be helpful for researchers building on DCC. | train | [
"S1ruhov4G",
"SyKQPRLEM",
"HJe0S9VEG",
"SyqWgxzxf",
"H1ySNZVgf",
"HJ90m_PeG",
"ByAn4R6XG",
"SJKJMO67z",
"SkV2zAMmG",
"SkhYD5bGG",
"HyxSDcZfG",
"Hy53I9-Mz",
"H1gdIc-zz"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"No, argument about canonical ACC definition is not my focus. Now it reads that the advantages claimed in the paper is conditional on the choice of clustering performance measure. Even if AMI is used, it is still hard to convince that the proposed method brings significant improvement because the authors refuse to compare to recent clustering approaches.\n\nWhat I said \"simply wrong\" means that the authors' rebuttal about various number of clusters is incorrect. I didn't intend to distinguish this point. Actually, lack of convincing evidence about significant performance improvement, i.e. Point 6 in my review, is only one of listed weak points of the work.\n\nStill about the t-SNE visualization of MNIST. If a digit cluster distributes like a snake, it means that the variation of the images of the digit is intrinsically one-dimensional. This is counter-intuitive. I doubt the t-SNE algorithm is not converged yet.\n\nd=10 is another tricky setting. We don't know whether this is suitable in general.\n\n",
"We will not try to change this reviewer's mind. If the ACs wish, we will post a detailed point-by-point rebuttal to the reviewer's latest post.\n\nAs an illustrative example, we briefly again address point (6), because the reviewer's latest post is so strongly worded on this point. (The reviewer's commentary begins with \"The answer is simply wrong.\")\n\nFirst, note that the reviewer appears to have shifted from equating \"purity\" with \"AMI\" (in the initial review) to equating \"purity\" with \"ACC\" (in the latest comment). The reviewer's original comment clearly points to our AMI numbers and states that they are weaker than \"purity\" numbers reported in other papers. But these are completely different metrics, the numbers are incomparable. (And the use of the two incomparable metrics is abundantly clear in both papers, so the reviewer's error is rather glaring.)\n\nIn the reviewer's latest post, the reviewer does not acknowledge the mistake but rather shifts to discussing \"clustering accuracy\". If we interpret the reviewer's comment correctly, the reviewer is now alluding to ACC, a different clustering measure which is reported for completeness in our supplement. Yet here again the reviewer is mistaken, because the \"purity\" measure does not reduce to ACC even when the number of clusters is fixed. No known formula exists for converting AMI or ACC to purity. As far as we can tell, the reviewer's conclusion -- \"Therefore the proposed method reads inferior in accuarcy [sic]\" -- is baseless. If the ACs wish, we can break down the definitions of AMI, ACC, and Purity in detail, and show further, based on the formulas, that the reviewer's statements are unfounded.\n",
"The response is disappointing. Keep saying that I am mistaken will not clarify the issues.\n1) The authors admit there is no theoretical guarantee. I am not asking about hardness. So it has nothing to do with NP-hard. If the work indeed has breakthrough, it should contain some theoretical guarantee or at least explaination that the method must lead to wide margins between the clusters. Unfortunately the authors simply avoid answering this.\n2) It is well known that pixelwise distance is sensitive for comparing two images. Therefore it is also a known drawback in VAE. The current work inherits the same drawback.\n3) I don't think \"redescending M-estimator\" is well-known in ICLR. Elaborating the term \"redesending M-estimator\" can help readers understand the method.\n4) The hyperpameters are calculated in a manner without theoretical guarantee or explanation. How can you say that these are \"principled\". I don't find any grounds that these calculation corresponds to their optimal choice. \n5) The running time analysis should be added to the paper. Now it is completely missing. According to the rebuttal, the proposed method is significantly slower than those in [Ref1-3].\n6) The answer is simply wrong. Fixing the number of clusters, purity can measure clustering accuracy. Therefore the proposed method reads inferior in accuarcy. Moreover, DCD in [Ref3] does not favor more clusters. It can automatically choose the number of clusters.\n7) bh-t-SNE will not give snake-like visualization of MNIST. There must be something wrong in the presented results.\n8) There is no evidence in the paper that the proposed method can give the right number of clusters. Moreover, the resulting number of clusters depends on the value of delta_2, which is tricky to set.\n",
"As authors stated, the proposed DCC is very similar to RCC-DR (Shah & Koltun, 2007). The only difference in (3) from RCC-DR is the decoding part, which is replaced by autoencoder instead of linear transformation used in RCC-DR. Authors claimed that there are three major differences. However, due to the highly nonconvex properties of both formulations, the last two differences hardly support the advantages of the proposed DCC comparing with RCC-DR because the solutions obtained by both optimization approaches are local solutions, unless authors can claim that the gradient-based solver is better than alternating approach in RCC-DR. Hence, DCC is just a simple extension of RCC-DR.\n\nIn Section 3.2, how does the optimization algorithm handle the equality constraints in (5)? It is unclear why the existing autoencoder solver can be used to solve (3) or (5). It seems that the first term in (5) corresponds to the objective of autoencoder, but the last two terms added lead to different objective with respect to variables y. It is better to clarify the correctness of the optimization algorithm.\n\nAuthors claimed that the proposed method avoid discrete reconfiguration of the objective that characterize prior clustering algorithms, and it does not rely on a priori knowledge of the number of ground-truth clusters. However, it seems not true since the graph construction at every epoch depends on the initial parameter delta_2 and the graph is constructed such that f_{i,j}=1 if distance is less than delta_2. As a result, delta_2 is a fixed threshold for graph construction, so it is indirectly related to the number of clusters generated. In the experiments, authors set it as the mean of the bottom 1% of the pairwise distances in E at initialization, and clustering assignment is given by connected component in the last graph. This parameter might be sensitive to the final results.\n\nMany terms in the paper are not well explained. For example, in (1), theta are treated as parameters to optimize, but what is the theta used for? Does the Omega related to encoder and decoder of the parameters in autoencoder. What is the scaled Geman-McClure function? Any reference? Why should this estimator be used?\n\nFrom the visualization results in Figure 1, it is interesting to see that K-means++ can achieve much better results on the space learned by DCC than that by SDAE from Table 2. In Figure 1, the embedding by SDAE (Figure 1(b)) seems more suitable for kmeans-like algorithm than DCC (Figure 1(c)). That is the reason why connected component is used for cluster assignment in DCC, not kmeans. The results between Table 2 and Figure 1 might be interesting to investigate. \n",
"This paper presents a clustering method in latent space. The work extends a previous approach (Shah & Koltun 2017) which employs a continuous relaxation of the clustering assignments. The proposed method is tested on several image and text data sets.\n\nHowever, the work has a number of problems and unclear points.\n\n1) There is no theoretical guarantee that RCC or DCC can give good clusterings. The second term in Eq. 2 will pull z's closer but it can also wrongly place data points from different clusters nearby.\n\n2) The method uses an autoencoder with elementwise least square loss. This is not suitable for data sets such as images and time series.\n\n3) Please elaborate \"redesending M-estimator\" in Section 2. Also, please explicitly write out what are rho_1 and rho_2 in the experiments.\n\n4) The method requires many extra hyperparameters lambda, delta_1, delta_2. Users have to set them by ad hoc heuristics.\n\n5) In each epoch, the method has to construct the graph G (the last paragraph in Page 4) over all z pairs. This is expensive. The author didn't give any running time estimation in theory or in experiments.\n\n6) The experimental results are not convincing. For MNIST its best accuracy is only 0.912. Existing methods for this data set have achieve 0.97 accuracy. See for example [Ref1,Ref2,Ref3]. For RCV1, [Ref2] gives 0.54, but here it is only 0.495.\n\n7) Figure 1 gives a weird result. There is no known evidence that MNIST clusters intrinsically distribute like snakes. They must be some wrong artefacts introduced by the proposed method. Actually t-SNE with MNIST pixels is not bad at all. See [Ref4].\n\n8) It is unknown how to set the number of clusters in proposed method.\n\n\n[Ref1] Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, Erkki Oja. Clustering by Nonnegative Matrix Factorization Using Graph Random Walk. In NIPS 2012.\n[Ref2] Xavier Bresson, Thomas Laurent, David Uminsky, James von Brecht. Multiclass Total Variation Clustering. In NIPS 2013.\n[Ref3] Zhirong Yang, Jukka Corander and Erkki Oja. Low-Rank Doubly Stochastic Matrix Decomposition for Cluster Analysis. Journal of Machine Learning Research, 17(187): 1-25, 2016.\n[Ref4] https://sites.google.com/site/neighborembedding/mnist\n\nConfidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature",
"The authors proposed a new clustering algorithm named deep continuous clustering (DCC) that integrates autoencoder into continuous clustering. As a variant of continuous clustering (RCC), DCC formed a global continuous objective for joint nonlinear dimensionality reduction and clustering. The objective can be directly optimized using SGD like method. Extensive experiments on image and document datasets show the effectiveness of DCC. However, part of experiments are not comprehensive enough. \n\nThe idea of integrating autoencoder with continuous clustering is novel, and the optimization part is quite different. The trick used in the paper (sampling edges but not samples) looks interesting and seems to be effective. \n\nIn the following, there are some detailed comments:\n1. The paper is well written and easy to follow, except the definition of Geman-McClure function is missing. It is difficult to follow Eq. (6) and (7).\n2. Compare DCC to RCC, the pros and cons are obvious. DCC does improve the performance of clustering with the cost of losing robustness. DCC is more sensitive to the hyper-parameters, especially embedding dimensionality d. With a wrong d DCC performs worse than RCC on MNIST and similar on Reuters. Since clustering is one unsupervised learning task. The author should consider heuristics to determine the hyper-parameters. This will increase the usability of the proposed method.\n3. However, the comparison to the DL based partners are not comprehensive enough, especially JULE and DEPICT on image clustering. Firstly, the authors only reported AMI and ACC, but not NMI that is reported in JULE. For a fair comparison, NMI results should be included. Secondly, the reported results do not agree with the one in original publication. For example, JULE reported ACC of 0.964 and 0.684 on MNIST and YTF. However, in the appendix the numbers are 0.800 and 0.342 respectively. Compared to the reported number in JULE paper, DCC is not significantly better.\n\nIn general, the paper is interesting and proposed method seems to be promising. I would vote for accept if my concerns can be addressed.\n\nThe author's respond address part of my concerns, so I have adjusted my rating.",
"Good question. In fact this is already in the paper. This comparison is provided in Table 2 (page 8). The top half of this table (\"Clustering in a reduced space learned by SDAE\") shows the accuracy achieved by running various clustering algorithms, including RCC, in a space learned by an Autoencoder. (For reference, DCC results are also listed, in the last column.) Specifically, compare the second-to-last column (Autoencoder + RCC) to the last column (DCC). The DCC results are much better than Autoencoder + RCC.",
"It would be interest to see the comparison to another simple two step baseline, Autoencoder followed by RCC.",
"Dear ACs and reviewers,\n\nDo you have any questions? Are there any remaining concerns?\n\nWe strongly believe that the work is solid, as demonstrated by the extensive experiments. We would be happy to address any remaining questions or concerns.\n\nBest regards,\nThe authors\n",
"We have uploaded a revision that addresses comments brought up in the reviews. In addition, we have posted responses to each individual review. These responses, which address each comment in detail, can be found below.\n",
"Thank you for your work on the paper. We respond to each comment below.\n\nQ: As authors stated, the proposed DCC is very similar to RCC-DR (Shah & Koltun, 2007). The only difference in (3) from RCC-DR is the decoding part, which is replaced by autoencoder instead of linear transformation used in RCC-DR. Authors claimed that there are three major differences. However, due to the highly nonconvex properties of both formulations, the last two differences hardly support the advantages of the proposed DCC comparing with RCC-DR because the solutions obtained by both optimization approaches are local solutions, unless authors can claim that the gradient-based solver is better than alternating approach in RCC-DR. Hence, DCC is just a simple extension of RCC-DR.\n\nA: We do see all three advantages as valuable. (Nonlinear embedding, direct optimization of the joint objective, and scalable optimization that does not rely on least-squares.) Aside from the more expressive nonlinear embedding, the key advantage is that DCC simultaneously optimizes the global objective over all variables, while RCC-DR is an alternating EM-like algorithm.\n\n\nQ: In Section 3.2, how does the optimization algorithm handle the equality constraints in (5)? It is unclear why the existing autoencoder solver can be used to solve (3) or (5). It seems that the first term in (5) corresponds to the objective of autoencoder, but the last two terms added lead to different objective with respect to variables y. It is better to clarify the correctness of the optimization algorithm.\n\nA: The equality constraints in (1), (3), and (5) are written out as constraints only for the sake of exposition. In fact these are not distinct constraints: Instead of Y, we simply use F_\\Theta(X) inside the relevant terms. Y is only used for exposition.\n\n\nQ: Authors claimed that the proposed method avoid discrete reconfiguration of the objective that characterize prior clustering algorithms, and it does not rely on a priori knowledge of the number of ground-truth clusters. However, it seems not true since the graph construction at every epoch depends on the initial parameter delta_2 and the graph is constructed such that f_{i,j}=1 if distance is less than delta_2. As a result, delta_2 is a fixed threshold for graph construction, so it is indirectly related to the number of clusters generated. In the experiments, authors set it as the mean of the bottom 1% of the pairwise distances in E at initialization, and clustering assignment is given by connected component in the last graph. This parameter might be sensitive to the final results.\n\nA: By discrete reconfigurations of the objective we meant that the objective is influenced by the intermediate cluster assignments. In DCC, the graph G is only used to evaluate the stopping criterion (“Should the optimization stop now?”). It does not affect the objective itself. The graph G does not influence or modify the objective function in any way. So there is no discrete reconfiguration of the objective.\n\n\nQ: Many terms in the paper are not well explained. For example, in (1), theta are treated as parameters to optimize, but what is the theta used for? Does the Omega related to encoder and decoder of the parameters in autoencoder. What is the scaled Geman-McClure function? Any reference? Why should this estimator be used?\n\nA: \\theta and \\omega are the encoder and decoder network weights, respectively. \\Omega is simply the notation used for representing the union of parameters in both networks. 
The revision we posted includes the definition of the scaled Geman-McClure penalty, which is adopted from RCC.\n\n\nQ: From the visualization results in Figure 1, it is interesting to see that K-means++ can achieve much better results on the space learned by DCC than that by SDAE from Table 2. In Figure 1, the embedding by SDAE (Figure 1(b)) seems more suitable for kmeans-like algorithm than DCC (Figure 1(c)). That is the reason why connected component is used for cluster assignment in DCC, not kmeans. The results between Table 2 and Figure 1 might be interesting to investigate.\n\nA: That is an interesting suggestion. It is possible that k-means or a similar algorithm are well-suited for SDAE output. That being said, we caution against drawing major conclusions from a two-dimensional embedding of high-dimensional data. We avoid using k-means because it requires knowing the number of clusters a priory. Not requiring such knowledge is a major advantage of the RCC/DCC family of algorithms.\n",
"Q: 1) There is no theoretical guarantee that RCC or DCC can give good clusterings. The second term in Eq. 2 will pull z's closer but it can also wrongly place data points from different clusters nearby.\n\nA: Clustering is NP-hard. No published deep clustering algorithm provides theoretical guarantees. Constructions exist that will make both classic and deep clustering algorithms fail. For example, k-means can get stuck in a local minimum that has an arbitrarily bad cost. See, for example, Sanjoy Dasgupta, “The hardness of k-means clustering”, 2008. Due to the intractability of NP-hard problems, clustering algorithms are evaluated in terms of empirical performance on standard datasets. \n\n\nQ: 2) The method uses an autoencoder with elementwise least square loss. This is not suitable for data sets such as images and time series.\n\nA: The reviewer is mistaken. Autoencoders are commonly applied to images. Our experiments include multiple datasets of images.\n\n\nQ: 3) Please elaborate \"redesending M-estimator\" in Section 2. Also, please explicitly write out what are rho_1 and rho_2 in the experiments.\n\nA: We explicitly define rho_1 and rho_2 in the revision. “Redescending” is a standard term in robust statistics. See (Shah & Koltun, 2017) and a substantial body of statistics literature.\n\n\nQ: 4) The method requires many extra hyperparameters lambda, delta_1, delta_2. Users have to set them by ad hoc heuristics.\n\nA: The reviewer is mistaken. None of these hyperparameters (lambda, delta_1, delta_2) have to be set “by ad hoc heuristics”. They are set automatically using principled formulae. These formulae are given in (Shah & Koltun, 2017).\n\n\nQ: 5) In each epoch, the method has to construct the graph G (the last paragraph in Page 4) over all z pairs. This is expensive. The author didn't give any running time estimation in theory or in experiments.\n\nA: The reviewer is mistaken. As stated in the paper, the graph G is only constructed “once the continuation scheme is completed”, once per epoch, to evaluate the stopping criterion. The graph is constructed only over z-pairs that are already in the graph E. And this construction is not expensive at all. For example, on MNIST it takes ~1.1 sec using the scipy package. (Note also that this step is also part of the RCC algorithm.)\n\nIn terms of runtime, we do not claim any major advantage, but the runtime is not bad. For instance, on MNIST (the largest dataset considered), the total runtime of conv-DCC is 9030 sec. For DEPICT, this runtime is 12072 sec and for JULE it is 172058 sec. The runtime of DCC is mildly better than DEPICT and more than an order of magnitude better than JULE.\n\n\nQ: 6) The experimental results are not convincing. For MNIST its best accuracy is only 0.912. Existing methods for this data set have achieve 0.97 accuracy. See for example [Ref1,Ref2,Ref3]. For RCV1, [Ref2] gives 0.54, but here it is only 0.495.\n\nA: The reviewer is mistaken. The numbers reported in our paper are according to the AMI metric. The 0.97 accuracy in [Ref1] is using the `purity’ metric. These metrics are substantially different and are not comparable. By way of background, note that the purity metric is biased towards finer-grained clusterings. For example, if each datapoint is set to be a cluster in itself, then the purity of the clustering is 1.0. Purity is a bad metric that is easy to game. It is avoided in recent serious work on clustering.\n\n\nQ: 7) Figure 1 gives a weird result. 
There is no known evidence that MNIST clusters intrinsically distribute like snakes. They must be some wrong artefacts introduced by the proposed method. Actually t-SNE with MNIST pixels is not bad at all. See [Ref4].\n\nA: First, the t-SNE figure in [Ref4] is plotted using weighted t-SNE whereas we use bh-t-SNE. Second, note that the elongated (“snake-like”) structure also appears in the embedding output of other clustering algorithm (see, e.g., Figure 4 in (Shah & Koltun, 2017)). Third, one should not read much into the detailed planar shapes formed by embedding high-dimensional pointsets into the plane. The clean separation of the clusters is more relevant than the detailed shapes they form in the planar embedding.\n\n\nQ: 8) It is unknown how to set the number of clusters in proposed method.\n\nA: DCC does not require setting the number of clusters in advance. As explained in the paper, this is one of the key advantages of the presented algorithm compared to prior deep clustering algorithms.\n",
"Thank you for your work on the paper. We respond to each comment below.\n\nQ: 1. The paper is well written and easy to follow, except the definition of Geman-McClure function is missing. It is difficult to follow Eq. (6) and (7).\n\nA: Thanks for pointing this out. We addressed this in the revision.\n\n\nQ: 2. Compare DCC to RCC, the pros and cons are obvious. DCC does improve the performance of clustering with the cost of losing robustness. DCC is more sensitive to the hyper-parameters, especially embedding dimensionality d. With a wrong d DCC performs worse than RCC on MNIST and similar on Reuters. Since clustering is one unsupervised learning task. The author should consider heuristics to determine the hyper-parameters. This will increase the usability of the proposed method.\n\nA: We have clarified hyperparameter settings in the revision. In brief, DCC uses three hyperparameters: the nearest neighbor graph parameter ‘k’, the embedding dimensionality ‘d’, and the graduated nonconvexity parameter ‘M’. For fair comparison to RCC and RCC-DR, we fix k=10 (the setting used in (Shah & Koltun, 2017)). The other two hyperparameters were set to d=10 and M=20 based on grid search on MNIST. The hyperparameters are fixed at these values across all datasets. No dataset-specific tuning is done. Other hyperparameters, such as \\lambda, \\delta_i, and \\mu_i, are inherited from RCC and are set automatically as described in the RCC paper.\n\n\nQ: 3. However, the comparison to the DL based partners are not comprehensive enough, especially JULE and DEPICT on image clustering. Firstly, the authors only reported AMI and ACC, but not NMI that is reported in JULE. For a fair comparison, NMI results should be included. \n\nA: We have included NMI results in the revision. (Appendix E.)\n\n\nQ: Secondly, the reported results do not agree with the one in original publication. For example, JULE reported ACC of 0.964 and 0.684 on MNIST and YTF. However, in the appendix the numbers are 0.800 and 0.342 respectively. Compared to the reported number in JULE paper, DCC is not significantly better.\n\nA: In order to report AMI measurements, we reran JULE using publicly shared code from the authors. We were unable to reproduce the results on MNIST despite using the preprocessed MNIST data shared by the authors and keeping all other parameters fixed as suggested on the JULE GitHub repo. The JULE article reports NMI on two versions of the algorithm, JULE-SF and JULE-RC. We report numbers for JULE-RC as authors state that this is the slightly better algorithm. In our experiments, the NMI on MNIST for JULE-SF is 0.912 and the NMI for JULE-RC is 0.900. The measured NMI for each dataset is:\n\n\t\tMNIST\tCoil-100 \tYTF\t\tYaleB\nJULE-SF\t0.912\t\t0.969\t\t0.754\t\t0.994\nJULE-RC\t0.900\t\t0.983\t\t0.587\t\t0.991\n\nIn running JULE on the YTF dataset, we followed a similar protocol to the RCC paper. This processes the YTF data in a slightly different fashion than in the JULE and DEPICT papers, but we adopt the RCC data preparation protocol for consistency with other baselines and experiments. This data preparation protocol yields 10056 samples from 40 subjects, while the version in the JULE paper 10000 samples from 41 subjects. However, it is hard to believe that this small difference would lead to large changes in the resulting accuracy. For reference, our results on YTF for DEPICT are very close to the results reported in the original DEPICT publication. 
Finally, please note that DCC achieves similar or better accuracy without any knowledge of the number of clusters, whereas JULE and DEPICT do use a priori knowledge of the ground-truth number of clusters.\n"
] | [
-1,
-1,
-1,
6,
3,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SyKQPRLEM",
"HJe0S9VEG",
"Hy53I9-Mz",
"iclr_2018_SJzMATlAZ",
"iclr_2018_SJzMATlAZ",
"iclr_2018_SJzMATlAZ",
"SJKJMO67z",
"SkV2zAMmG",
"iclr_2018_SJzMATlAZ",
"iclr_2018_SJzMATlAZ",
"SyqWgxzxf",
"H1ySNZVgf",
"HJ90m_PeG"
] |
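As an aside on the DCC record above: two mechanics that recur in the discussion are the scaled Geman-McClure penalty and the assignment of clusters as connected components of a graph whose edges are kept only when the endpoint embeddings are closer than delta_2. The snippet below is a minimal, hypothetical sketch of both, assuming the penalty takes the standard form mu*r^2/(mu + r^2) used in RCC; it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of two pieces discussed above:
# the scaled Geman-McClure penalty and cluster assignment via connected
# components of the edge graph thresholded at delta_2.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

def geman_mcclure(r, mu):
    # Assumed form: rho(r) = mu * ||r||^2 / (mu + ||r||^2); quadratic near
    # zero and saturating for large residuals (a redescending penalty).
    sq = np.sum(np.square(r), axis=-1)
    return mu * sq / (mu + sq)

def assign_clusters(z, edges, delta2):
    # Keep only edges whose endpoints are closer than delta2 in the embedding,
    # then label each connected component of the remaining graph as a cluster.
    i, j = edges[:, 0], edges[:, 1]
    keep = np.linalg.norm(z[i] - z[j], axis=1) < delta2
    n = len(z)
    adj = sp.coo_matrix((np.ones(int(keep.sum())), (i[keep], j[keep])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return labels
```

Reading clusters off as connected components rather than running k-means is what lets the method avoid specifying the number of clusters in advance, which is the property the authors emphasize in the rebuttal.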
iclr_2018_ryjw_eAaZ | Unsupervised Deep Structure Learning by Recursive Dependency Analysis | We introduce an unsupervised structure learning algorithm for deep, feed-forward, neural networks. We propose a new interpretation for depth and inter-layer connectivity where a hierarchy of independencies in the input distribution is encoded in the network structure. This results in structures allowing neurons to connect to neurons in any deeper layer skipping intermediate layers. Moreover, neurons in deeper layers encode low-order (small condition sets) independencies and have a wide scope of the input, whereas neurons in the first layers encode higher-order (larger condition sets) independencies and have a narrower scope. Thus, the depth of the network is automatically determined---equal to the maximal order of independence in the input distribution, which is the recursion-depth of the algorithm. The proposed algorithm constructs two main graphical models: 1) a generative latent graph (a deep belief network) learned from data and 2) a deep discriminative graph constructed from the generative latent graph. We prove that conditional dependencies between the nodes in the learned generative latent graph are preserved in the class-conditional discriminative graph. Finally, a deep neural network structure is constructed based on the discriminative graph. We demonstrate on image classification benchmarks that the algorithm replaces the deepest layers (convolutional and dense layers) of common convolutional networks, achieving high classification accuracy, while constructing significantly smaller structures. The proposed structure learning algorithm requires a small computational cost and runs efficiently on a standard desktop CPU. | rejected-papers | The updated draft has helped to address some of the issues that the reviewers had, however the reviewers believe there are still outstanding issues. With regard to the technical flaw, one reviewer has pointed out that the update changes the story of the paper by breaking the connection between the generative and discriminative model in terms of preserving or ignoring conditional dependencies.
In terms of the experiments, the paper has been improved by the reporting of standard deviations and comparisons to other works. However, it is recommended that the authors compare to NAS by fixing the number of parameters and reporting the results, to facilitate an apples-to-apples comparison. Another reviewer also recommends comparing to other architectures for a fixed number of neurons. | train | [
"ryilanteG",
"HJZz1Wqef",
"SJGyhgwZz",
"S1DioOtMf",
"SykdYpPbz",
"rJqw9TDbz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper proposes an unsupervised structure learning method for deep neural networks. It first constructs a fully visible DAG by learning from data, and decomposes variables into autonomous sets. Then latent variables are introduced and stochastic inverse is generated. Later a deep neural network structure is constructed based on the discriminative graph. Both the problem considered in the paper and the proposed method look interesting. The resulting structure seems nice.\n\nHowever, the reviewer indeed finds a major technical flaw in the paper. The foundation of the proposed method is on preserving the conditional dependencies in graph G. And each step mentioned in the paper, as it claims, can preserve all the conditional dependencies. However, in section 2.2, it seems that the stochastic inverse cannot. In Fig. 3(b), A and B are no longer dependent conditioned on {C,D,E} due to the v-structure induced in node H_A and H_B. Also in Fig. 3(c), if the reviewer understands correctly, the bidirectional edge between H_A and H_B is equivalent to H_A <- h -> H_B, which also induces a v-structure, blocking the dependency between A and B. Therefore, the very foundation of the proposed method is shattered. And the reviewer requests an explicit explanation of this issue.\n\nBesides that, the reviewer also finds unfair comparisons in the experiments.\n\n1. In section 5.1, although the authors show that the learned structure achieves 99.04%-99.07% compared with 98.4%-98.75% for fully connected layers, the comparisons are made by keeping the number of parameters similar in both cases. The comparisons are reasonable but not very convincing. Observing that the learned structures would be much sparser than the fully connected ones, it means that the number of neurons in the fully connected network is significantly smaller. Did the authors compare with fully connected network with similar number of neurons? In such case, which one is better? (Having fewer parameters is a plus, but in terms of accuracy the number of neurons really matters for fair comparison. In practice, we definitely would not use that small number of neurons in fully connected layers.)\n\n2. In section 5.2, it is interesting to observe that using features from conv10 is better than that from last dense layer. But it is not a fair comparison with vanilla network. In vanilla VGG-16-D, there are 3 more conv layers and 3 more fully connected layers. If you find that taking features from conv10 is good for the learned structure, then maybe it will also be good by taking features from conv10 and then apply 2-3 fully-connected layers directly (The proposed structure learning is not comparable to convolutional layers, and what it should really compare to is fully-connected layers.) In such case, which one is better? \nSecondly, VGG-16 is a large network designed for ImageNet data. For small dataset such as CIFAR10 and CIFAR100, it is really overkilled. That's maybe the reason why taking the output of shallow layers could achieve pretty good results.\n\n3. In Fig. 6, again, comparing the learned structure with fully-connected network by keeping parameters to be similar and resulting in large difference of the number of neurons is unfair from my point of view.\n\nFurthermore, all the comparisons are made with respect to fully-connected network or vanilla CNNs. No other structure learning methods are compared with. 
Reasonable baseline methods should be included.\n\nIn conclusion, due to the above issues both in method and experiments, the reviewer thinks that this paper is not ready for publication.\n",
"This paper tackles the important problem of structure learning by introducing an unsupervised algorithm, which encodes a hierarchy of independencies in the input distribution and allows introducing skip connections among neurons in different layers. The quality of the learnt structure is evaluated in the context of image classification, analyzing the impact of the number of parameters and layers on the performance.\n\nThe presentation of the paper could be improved. Moreover, the paper largely exceeds the recommended page limit (11 pages without references).\n\nMy main comments are related to the experimental section:\n\n- Section 5 highlights that experiments were repeated 5 times; however, the standard deviation of the results is only reported for some cases. It would be beneficial to include the standard deviations of all experiments in the tables summarizing the obtained results.\n\n- Are the differences among results presented in table 1 (MNIST) and table 2 (CIFAR10) statistically significant?\n\n- It is not clear how the numbers of table 4 were computed (size replaced, size total, t-size, replaced-size). Would it be possible to provide the number of parameters of the vanilla model, the pre-trained feature extractor and the learned structure separately?\n\n- In section 5.2., there is only one sentence mentioning comparisons to alternative approaches. It might be worth expanding this and including numerical comparisons.\n\n- It seems that the main focus of the experiments is to highlight the parameter reduction achieved by the proposed algorithm. There is a vast literature on model compression, which might be worth reviewing, especially given that all the experiments are performed on standard image classification tasks.\n\n\n\n",
"Authors propose a deep architecture learning algorithm in an unsupervised fashion. By finding conditional in-dependencies in input as a Bayesian network and using a stochastic inverse mechanism that preserves the conditional dependencies, they suggest an optimal structure of fully connected hidden layers (depth, number of groups and connectivity). Their algorithm can be applied recursively, resulting in multiple layers of connectivity. The width of each layer (determined by number of neurons in each group) is still tuned as a hyper-parameter.\n\nPros:\n- Sound derivation for the method.\n- Unsupervised and fast algorithm. \nCons:\n- Poor writing, close to a first draft. \n- Vague claims of the gain in replacing FC with these structures, lack of comparison with methods targeting that claim.\n - If the boldest claim is to have a smaller network, compare results with other compression methods.\n - If it is the gain in accuracy compare with other learn to learn methods and show that you achieve same or higher accuracy. The NAS algorithm achieves 3.65% test error. With a smaller network than the proposed learned structure (4.2M vs 6M) here they achieve slightly worse (5.5% vs 4.58%) but with a slightly larger (7.1M vs 6M) they achieve slightly better results (4.47% vs 4.58%). The winner will not be clear unless the experiments fixes one of variables or wins at both of them simultaneously.\n\nDetailed comments:\n\n- Results in Table 4 mainly shows that replacing fully connected layer with the learned structures leads to a much sparser connectivity (smaller number of parameters) without any loss of accuracy. Fewer number of parameters usually is appealing either because of better generalizability or less computation cost. In terms of generalizability, on most of the datasets the accuracy gain from the replacement is not statistically significant. Specially without reporting the standard deviation. Also the generalizability impact of this method on the state-of-the-art is not clear due to the fact that the vanilla networks used in the experiments are generally not the state-of-the-art networks. Therefore, it would be beneficial if the authors could show the speed impact of replacing FC layers with the learned structures. Are they faster to compute or slower?\n- The purpose of section 5.1 is written as number of layers and number of parameters. But it compares with an FC network which has same number of neurons-per-layer. The rest of the paper is also about number of parameters. Therefore, the experiments in this section should be in terms of number of parameters as well. Also most of the numbers in table 1 are not significantly different. \n\nSuggestions for increasing the impact:\n\nThis method is easily adaptable for convolutional layers as well. Each convolutional kernel is a fully connected layer on top of a patch of an image. Therefore, the input data rather than being the whole image would be all patches of all images. This method could be used to learn a new structure to replace the KxK fully connected transformation in the convolutional layer. \n\nThe fact that this is an unsupervised algorithm and it is suitable for replacing FC layers suggests experimentation on semi-supervised tasks or tasks that current state-of-the-art relies more on FC layers than image classification. However, the experiments in this paper are on fully-labeled image classification datasets which is possibly not a good candidate to verify the full potential of this algorithm.",
"We would like to thank the reviewer for the thorough review and important suggestions.\n\nreviewer:\n>> “Promising method, inconclusive results”\n\nOur response:\nWe significantly improved the clarity of our main experimental results (Table 4). We reordered the columns and report the absolute sizes of: “feature extraction”, replaced size, and learned structure size. We also included, as requested, a comparison to recent compression (pruning) methods.\n-----------------------\n\nreviewer:\n>> \"If the boldest claim is to have a smaller network, compare results with other compression methods.\"\n\nOur response:\nWe added a comparison to recent model compression (pruning) methods. In all the compared cases, our algorithm learns smaller networks while preserving accuracy. Compression methods are commonly supervised and aimed at reducing the network size. In contrast to pruning methods that prune all the layer, our algorithm keeps the first few layers (“feature extraction”) of the network intact and removes the deeper layers altogether. It then learns a new deep structure in an unsupervised manner (and is very fast; Matlab implementation on CPU). The total size of the network, feature extraction+learned structure is smaller than that of networks resulting from pruning methods.\n\n Compared to the NAS algorithm, our method is on par in terms of accuracy and model size but significantly faster. \n-----------------------\n\nreviewer:\n>> \"The purpose of section 5.1 is written as number of layers and number of parameters. But it compares with an FC network which has same number of neurons-per-layer.\"\n\nOur response:\nThe comparison to FC networks is where both networks have similar number of parameters and not number of neurons. We corrected and clarified this in the paper.\n-----------------------\n\nreviewer: (Suggestions for increasing the impact 1)\n>> \"This method is easily adaptable for convolutional layers as well.\"\n\nOur response:\nThis is a very important idea. In fact, we have been working on it for the past several months. However, there are additional concepts that need to be introduced, which we believe will complicate the current paper. For example, it is required to derive a new conditional independence test for comparing two patches conditioned on a set of other patches. Moreover, the varying size of the receptive field, as a function of the network depth, should be accounted for in the test (recall that only the input image pixels are considered in all the independence tests). \nIn this paper, we lay the foundations of the algorithm and demonstrate its effectiveness by learning the \"classifier\" part of the network (replacing the deepest of both convolutional and FC layers).\n-----------------------\n\nReviewer: (Suggestions for increasing the impact 2)\n>> The fact that this is an unsupervised algorithm ... However, the experiments in this paper ... possibly not a good candidate to verify the full potential of this algorithm.\"\n\nOur response:\nWe agree that the experiments do not cover the full potential of the algorithm; however, the experiments in this paper demonstrate the key idea of learning an efficient structure from unlabeled data. Efficiency is demonstrated by learning structures that are significantly smaller (while retaining classification accuracy or improving it) than a stack of convolutional and FC layers, as used in a range of common topologies.",
"We'd like to thank the reviewer for the feedback.\nAs for the points that were raised:\n------------------------\nReviewer's points: \n>> It would be beneficial to include the standard deviations of all experiments in the tables summarizing the obtained results.\n>> Are the differences among results presented in table 1 (MNIST) and table 2 (CIFAR10) statistically significant?\n\nOur response:\nAs indicated in the paper, we have recorded the standard deviation and will edit the paper to add this missing data that will prove the significance of the differences.\n \n-----------------------\nReviewer's point: \n>> It is not clear how the numbers of table 4 were computed (size replaced, size total, t-size, replaced-size). Would it be possible to provide the number of parameters of the vanilla model, the pre-trained feature extractor and the learned structure separately?\n\nOur response:\nFirst, thanks for the comment. we will update the table to be clearer and easier to digest.\n\nWe'll walk through one example from that table and clarify, hoping this will help understanding the rest of the lines:\nLets refer to line #2 , analyzing the \"MNIST-Man\" topology. The vanilla topology was manually constructed (thus the name MNIST-Man), its total size is 127K parameters and it achieve a classification accuracy of 99.35%\nIf we look at the network as composed of a 'head' (from the first conv layer up to a certain layer at the depth of the network) and a 'tail' (all the subsequent layers up to the softmax layer), then what we did is to throw out the 'tail' and replace it with a learned structure (similar to what is done in transfer learning, only here the 'tail' is much larger). In this specific case, the tail whose size is 104K parameters in the original network was replaced by a learned structure whose size is 24% (0.24 - taken from the 'replaced size' column) of the 107K (i.e. ~26K parameters) which reflects a reduction of 4.2X in tail size.\nThe overall size of the new network (original head + learned tail) is therefore 49K (=23K (head size) + 26K (tail size)) 49K which is 38% (0.38='t-size') of the vanilla network size (127K) and achieves a classification accuracy of 99.45%\n\n------------------------------------\nReviewer's point:\n>> In section 5.2., there is only one sentence mentioning comparisons to alternative approaches. It might be worth expanding this and including numerical comparisons.\n\nOur response:\nIn section 5.2 we have mentioned a comparison to one of the prominent papers (at the time of submission ) indicating the following:\n- On CIFAR-10 our method, based on Wide Resnet topology has achieved 95.42% accuracy with network size of 6M parameters\n- NAS (Zoph et al 2016) have achieved 94.5% and 95.53% accuracy for networks of sizes 4.2M and 7.1M respectively\n- DeepArchitect (Negrinho et al 2017) report 89% accuracy on CIFAR-10 . They havent provided any further details to conduct elaborate comparison\n- Baker et al. achieved 93.08% accuracy on a single top-performing model\n- Real et al. achieved 94.6% accuracy with network size of 5.4M parameters\n- Other related works (Smithson et al. , Miikkulainen et al.) have not tested on CIFAR-10 or used significantly different metric thus not easily comparable to our methods\n \nIt’s interesting to note that none of the other papers have provided statistical significance measures (std dev) in their results and represented the capacity of the network by its size - measured by number of parameters.",
"We'd like to thank the reviewer for the feedback.\nAs for the points that were raised:\n\n---------------------------------\nReviewer's point:\n>> the reviewer indeed finds a major technical flaw in the paper.... \n\nOur response:\nOur method is indeed based on the preservation of the conditional dependencies, encoded by the generative model, in the discriminative model. The reviewer's observation regarding the disability of the inverse model to preserve conditional dependencies among the observed variable is correct, as indeed apparent in figure (3). \n*However*, an important observation that we should have made clear, is that since we’re interested in learning a discriminative model that infers the latent variables (h) given the observed (A,B,C…) , the only relevant conditional dependencies that must be preserved are those among the hidden variables, and between the hidden variables and the observed. The conditional dependencies among the observed variables are not relevant to the inference of the hidden variable given the observed, thus its preservation is not handled during the model inversion. In the paper's appendix, we elaborate on the theoretical correctness of the inversion process and refer to Paige & Wood (2016) that proves the validity of the method. Specifically, refer to Figure (1) in Paige & Wood (2016). If we were interested in preserving the conditional dependencies among the observed variables, we would have ended up with the middle (b) structure. Since we're not interested in those dependencies, we proceed and eventually end up with the right structure (c)\n*We’d like to thank the reviewer for pointing out this important observation. We’ll edit the paper to better clarify it.*\n \n\n------------------------------------\nReviewer's point:\n>> In section 5.1, although the authors show that the learned structure achieves 99.04%-99.07% compared with 98.4%-98.75% for fully connected layers, the comparisons are made by keeping the number of parameters similar in both cases.... \n\n\nOur response:\nIn our evaluation we followed the common reporting protocol we have encountered when surveying the literature on the topic. Other papers used the number of parameters as a measure of network capacity (e.g. Real et al 2017, Zoph et al 2017).\nHaving said that, referring to figure 6 we can see that increasing the number of neurons in the fully connected layers, the accuracy eventually converges to a limit that is significantly (statistically-wise) below the learned structure. \n\n-----------------------------------------------\nReviewer's point: \n>>In section 5.2, .... If you find that taking features from conv10 is good for the learned structure, then maybe it will also be good by taking features from conv10 and then apply 2-3 fully-connected layers directly...\nSecondly, VGG-16 is a large network designed for ImageNet data. For small dataset such as CIFAR10 and CIFAR100, it is really overkilled. That's maybe the reason why taking the output of shallow layers could achieve pretty good results\n\n\nOur response:\nAs for the 1st comment regarding removal of the last conv layers from VGG - that's a correct observation. Please refer to figure 6 in our paper, that describes the result of such experiment on the three datasets we used for evaluation.\n\nAs for the 2nd comment regarding VGG being overkill for CIFAR - Note that the VGG we used for CIFAR is much smaller than the version used for ImageNet (15M vs 130M parameters). 
In addition, in our experiments we modified the network’s size for each dataset where it made sense (SVHN and CIFAR-10) in order to increase our coverage. VGG was chosen for CIFAR-10 as it is commonly used as a reference structure in the relevant literature and appears in the CIFAR-10 results leaderboard.\nWe also conducted an experiment (see table 3 in the paper) in which we cut VGG at different layers and measured the accuracy. We observed a graceful reduction in accuracy as we removed conv layers.\n"
] | [
4,
5,
5,
-1,
-1,
-1
] | [
4,
2,
3,
-1,
-1,
-1
] | [
"iclr_2018_ryjw_eAaZ",
"iclr_2018_ryjw_eAaZ",
"iclr_2018_ryjw_eAaZ",
"SJGyhgwZz",
"HJZz1Wqef",
"ryilanteG"
] |
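As a rough illustration of the Table 4 bookkeeping walked through in the "MNIST-Man" author response above, the following sketch recomputes that example. All figures are quoted from the response, the variable names are hypothetical, and the rounded outputs may differ slightly from the reported ~26K/49K because the response itself mixes 104K and 107K for the tail size.

```python
# Hypothetical recomputation of the "MNIST-Man" example quoted in the response above.
vanilla_total = 127_000                           # parameters in the vanilla topology
head_size = 23_000                                # conv "head" that is kept intact
tail_size = vanilla_total - head_size             # original "tail" (~104K parameters)
replaced_frac = 0.24                              # 'replaced size': learned tail / original tail
learned_tail = round(replaced_frac * tail_size)   # ~25K here; the response reports ~26K
new_total = head_size + learned_tail              # ~48K here; the response reports 49K
t_size = new_total / vanilla_total                # ~0.38, matching the reported 't-size'
print(f"learned tail: {learned_tail}, new total: {new_total}, t-size: {t_size:.2f}")
```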
iclr_2018_rk9kKMZ0- | LEAP: Learning Embeddings for Adaptive Pace | Determining the optimal order in which data examples are presented to Deep Neural Networks during training is a non-trivial problem. However, choosing a non-trivial scheduling method may drastically improve convergence. In this paper, we propose a Self-Paced Learning (SPL)-fused Deep Metric Learning (DML) framework, which we call Learning Embeddings for Adaptive Pace (LEAP). Our method parameterizes mini-batches dynamically based on the \textit{easiness} and \textit{true diverseness} of the sample within a salient feature representation space. In LEAP, we train an \textit{embedding} Convolutional Neural Network (CNN) to learn an expressive representation space by adaptive density discrimination using the Magnet Loss. The \textit{student} CNN classifier dynamically selects samples to form a mini-batch based on the \textit{easiness} from cross-entropy losses and \textit{true diverseness} of examples from the representation space sculpted by the \textit{embedding} CNN. We evaluate LEAP using deep CNN architectures for the task of supervised image classification on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and SVHN. We show that the LEAP framework converges faster with respect to the number of mini-batch updates required to achieve a comparable or better test performance on each of the datasets. | rejected-papers | Although the paper has been improved with new quantitative results and additional clarity, the reviewers agree that larger-scale experiments would better highlight the utility of the method. There are some concerns with computational cost, despite the fact that the two networks are trained asynchronously. A baseline against a single, asynchronously trained network (multiple GPUs) would help strengthen this point. Some reviewers expressed concerns with novelty. | train | [
"ry9RWezWM",
"S1p86uteG",
"Byjs3NyZz",
"rJRhbuTXf",
"r1UD-d6mf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"The authors purpose a method for creating mini batches for a student network by using a second learned representation space to dynamically selecting examples by their 'easiness and true diverseness'. The framework is detailed and results on MNIST, cifar10 and fashion-MNIST are presented. The work presented is novel but there are some notable omissions: \n - there are no specific numbers presented to back up the improvement claims; graphs are presented but not specific numeric results\n- there is limited discussion of the computational cost of the framework presented \n- there is no comparison to a baseline in which the additional learning cycles used for learning the embedding are used for training the student model.\n- only small data sets are evaluated. This is unfortunate because if there are to be large gains from this approach, it seems that they are more likely to be found in the domain of large scale problems, than toy data sets like mnist. \n\n**edit\nIn light of the changes made, and in particular the performance gains achieved on CIFAR-100, i have increased my ratting from a 4 to a 6",
"(Summary)\nThis paper is about learning a representation with curriculum learning style minibatch selection in an end-to-end framework. The authors experiment the classification accuracy on MNIST, FashionMNIST, and CIFAR-10 datasets.\n\n(Pros)\nThe references to the deep metric learning methods seem up to date and nicely summarizes the recent literatures.\n\n(Cons)\n1. The method lacks algorithmic novelty and the exposition of the method severely inhibits the reader from understand the proposed idea. Essentially, the method is described in section 3. First of all, it's not clear what the actual loss the authors are trying to minimize. Also, \\min_v E(\\theta, v; \\lambda, \\gamma) is incorrect. It looks to me like it should be E \\ell (...) where \\ell is the loss function. \n\n2. The experiments show almost no discernable practical gains over 'random' baseline which is the baseline for random minibatch selection.\n\n(Assessment)\nClear rejection. The method is poorly written, severely lacks algorithmic novelty, and the proposed approach shows no empirical gains over random mini batch sampling.",
"While the idea is novel and I do agree that I have not seen other works along these lines there are a few things that are missing and hinder this paper significantly.\n\n1. There are no quantitative numbers in terms of accuracy improvements, overhead in computation in having two networks.\n2. The experiments are still at the toy level, the authors can tackle more challenging datasets where sampling goes from easy to hard examples like birdsnap. MNIST, FashionMNIST and CIFAR-10 are all small datasets where the true utility of sampling is not realized. Authors should be motivated to run the large scale experiments.\n\n",
"We thank all of the reviewers for their careful review of our paper, and for the valuable comments and constructive criticism that ensued. We performed a major revision to the paper to take all of them into account, and in the process, we believe the paper has improved significantly. These are detailed below:\n\nR1 Methodology clarification\n\nWe made significant updates to the methodology in Section 3. In Section 3.1, we provide a detailed training algorithm for the embedding CNN which uses the Magnet loss to form a representation space consisting of $K$ clusters for $C$ classes by adaptive density discrimination. This results in a training set $D$ partitioned into learned representation space, $D_K^c$, while maintaining \tintra-class variation and inter-class similarity. The details of the objective function for the LEAP framework are added in Section 3.2, which is given by:\n\n\\min_{\\theta, \\mathcal{W}} \\mathbb{E}(\\theta, \\mathcal{W}; \\lambda, \\gamma) = \\sum_{i=1}^{n}w_i\\mathcal{L}(y_i, f(x_i,\\theta)) - \\lambda \\sum_{i=1}^{n}w_i - \\gamma\\|\\mathcal{W}\\|_{2,1}, \\ \\text{s.t} \\ \\mathcal{W} \\in [0,1]^{n}\n\nIn LEAP, we assume that a dataset contain $N$ samples, $\\mathcal{D} = \\{\\mathbf{x}_n\\}_{n=1}^{N}$, is grouped into $K$ clusters for each class $c$ through the Magnet loss to get: $\\{\\mathcal{D}^{k}\\}_{k=1}^K$, where $\\mathcal{D}^{k}$ corresponds to the $k^{th}$ cluster, $n_k$ is the number of samples in each cluster and $\\sum_{k=1}^{K}n_k = N$. A weight vector is $\\mathcal{W}^{k} = (\\mathcal{W}_1^k,\\ldots,\\mathcal{W}_{n_k}^k)^T$, where each $\\mathcal{W}_{n_k}^k$ is assigned a weight $[0,1]^{n_k}$ for each sample in cluster $k$ for $K$ clusters. \n\nThe easiness and true diverseness terms are given by $\\lambda$ and $\\gamma$. We use the negative $l_1$-norm: $-\\|\\mathcal{W}\\|_1$ to select easy samples over hard samples. The negative $l_2$-norm is used to disperse non-zero elements of the weights $\\mathcal{W}$ across a large number of clusters so that we can get a diverse set of training samples. \n\nIn addition, we give specific details on the LEAP algorithm (Section 3.2) for training the student CNN, where we indicate how the embedding CNN and student CNN are used in conjunction. In this subsection, we also present the self-paced sample selection strategy, which specifies how the training samples are selected based on the “easiness” and “true diverseness” according to the student CNN model, such that we solve $\\min_{\\mathcal{W}}\\mathbb{E}(\\theta, \\mathcal{W}; \\lambda, \\gamma)$. If the cross-entropy loss, $\\mathcal{L}(y_i^{k}, f(x_i^{k},\\theta))$, is less than $(\\lambda + \\gamma\\frac{1}{\\sqrt{i}+\\sqrt{i-1}})$, then we assign a weight $\\mathcal{W}_i^{k} = 1$, otherwise $\\mathcal{W}_i^{k} = 0$. $i$ is the training instance’s rank w.r.t. its cross-entropy loss value within its cluster. The instance with a smaller loss than the assigned threshold will be selected during training. Therefore, the new $\\mathcal{W}$ becomes equal to $\\min_{\\mathcal{W}}\\mathbb{E}(\\theta, \\mathcal{W}; \\lambda, \\gamma)$. Next, we update the learning pace for $\\lambda$ and $\\gamma$.\n",
"\n\nR1, R2, R4 Quantitative results to backup improvement claims\n\nA table with a summary of the experimental results is provided in Section 5. Please refer to the latest revision for the updated Table 1. Here, we present the test accuracy (%) results across all datasets including: MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and SVHN for the following sampling methods: Learning Embeddings for Adaptive Pace (LEAP), Self-Paced Learning with Diversity (SPLD), and Random. The test accuracy results of MNIST, Fashion-MNIST, and CIFAR-10 are averaged over 5 runs. The results for CIFAR-100 and SVHN are averaged over 4 runs. The results show that there is a noticeable increase in test performance across all datasets with the LEAP dynamic sampling strategy, especially for the CIFAR-100 dataset.\n\nR2, R4 Computational cost of this framework\n\nWe agree that training two complex CNN architectures (i.e. VGG-16, ResNet-18, etc.) would raise concerns for overhead in computation. However, we would like to clarify that the embedding CNN and student CNN are asynchronously trained in parallel by using multiprocessing to share data between processes in a local environment using arrays and values. The idea is to have an embedding CNN that is adaptively sculpting a representation space, while the student CNN is being trained. The student CNN leverages the $K$ cluster representations constructed by the embedding CNN, to select samples based on the “easiness” from each of the $K$ clusters for each class, $c$ in $C$ classes. This way we are ensuring that the samples that the student model considers “easy” also maintains diversity, which is important for constructing mini-batches iteratively. Therefore, the extra training cost of the embedding CNN can be mitigated by having it train in parallel to the actual classification model. This setup is more apparent in Section 3, which contains more specific and updated details of the methodology for both the embedding CNN and student CNN. \n\nR1, R2, R4 Experiments on complex datasets\n\nWe conducted experiments on two additional datasets, SVHN and CIFAR-100 which is considered a more fine-grained visual recognition dataset. We used a WideResNet for the student CNN and VGG-16 for the embedding CNN to train on CIFAR-100 using LEAP. The specific training scheme used for CIFAR-100 is detailed in Section 4.4. The CIFAR-100 experiments revealed that we achieve a noticeable gain in performance when using the LEAP framework with a test accuracy of 79.17% \\pm 0.24%. The LEAP framework outperforms the baselines, SPLD and Random, by 4.50% and 3.72%, respectively. Effectively, we saw that on a more challenging fine-grained classification task, the LEAP framework performs really well. While we agree with the reviewers that the true utility of our framework can be realized in large-scale problems (i.e. BirdSnap, ImageNet, etc.), we have yet to perform those experiments.\n\nThe MNIST experiments were mainly performed to show that the LEAP framework can be employed end-to-end for a simple supervised classification task. Then, we extended this to Fashion-MNIST which is considered a direct drop-in replacement for MNIST. Fashion-MNIST served to be another small classification dataset that can be used to test and verify the feasibility of our approach, which also served to be successful. 
The CIFAR-10 experiments showed that we can learn a representation space with $K$ clusters for each class in the dataset, by extracting features from RGB images and computing the Magnet loss with the embedding CNN. Then, we showed that we can use this learned representation space to adaptively sample “easy” training instances diversely from the $K$ clusters of each class."
] | [
6,
3,
4,
-1,
-1
] | [
3,
4,
4,
-1,
-1
] | [
"iclr_2018_rk9kKMZ0-",
"iclr_2018_rk9kKMZ0-",
"iclr_2018_rk9kKMZ0-",
"iclr_2018_rk9kKMZ0-",
"iclr_2018_rk9kKMZ0-"
] |
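To make the sample-selection rule quoted in the LEAP author response above more concrete, here is a small, hedged sketch of that thresholding step. It is not the authors' code; the function and argument names are invented for illustration, and only the per-cluster rank-based threshold described in the response is implemented.

```python
import numpy as np

def spl_diversity_weights(losses_per_cluster, lam, gamma):
    """Sketch of the selection rule described above: within each cluster k,
    rank samples by cross-entropy loss (rank i = 1 is the easiest) and set
    w_i^k = 1 when the loss falls below lam + gamma / (sqrt(i) + sqrt(i - 1))."""
    weights = []
    for losses in losses_per_cluster:          # one 1-D loss array per cluster
        order = np.argsort(losses)             # easiest sample gets rank 1
        w = np.zeros_like(losses, dtype=float)
        for rank, idx in enumerate(order, start=1):
            threshold = lam + gamma / (np.sqrt(rank) + np.sqrt(rank - 1))
            w[idx] = 1.0 if losses[idx] < threshold else 0.0
        weights.append(w)
    return weights
```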
iclr_2018_SkFEGHx0Z | Nearest Neighbour Radial Basis Function Solvers for Deep Neural Networks | We present a radial basis function solver for convolutional neural networks that can be directly applied to both distance metric learning and classification problems. Our method treats all training features from a deep neural network as radial basis function centres and computes loss by summing the influence of a feature's nearby centres in the embedding space. Having a radial basis function centred on each training feature is made scalable by treating it as an approximate nearest neighbour search problem. End-to-end learning of the network and solver is carried out, mapping high dimensional features into clusters of the same class. This results in a well formed embedding space, where semantically related instances are likely to be located near one another, regardless of whether or not the network was trained on those classes. The same loss function is used for both the metric learning and classification problems. We show that our radial basis function solver outperforms state-of-the-art embedding approaches on the Stanford Cars196 and CUB-200-2011 datasets. Additionally, we show that when used as a classifier, our method outperforms a conventional softmax classifier on the CUB-200-2011, Stanford Cars196, Oxford 102 Flowers and Leafsnap fine-grained classification datasets. | rejected-papers | This paper proposes a non-parametric method for metric learning and classification. One of the reviewers points out that it can be viewed as an extension of NCA. There is in fact a non-linear version of NCA that was subsequently published, see [1]. In this sense, the approach here appears to be a version of nonlinear NCA with learnable per-example weights, approximate nearest neighbour search, and the allowance of stale exemplars. In this view, there is concern from the reviewers that there may not be sufficient novelty for acceptance.
The reviewers have concerns with scalability. It would be helpful to include clarification or even some empirical results on how this scales compared to softmax. This is particularly relevant for larger datasets like ImageNet, where it may be impossible to store all exemplars in memory.
It is also recommended to relate this approach to metric-learning approaches in few-shot learning, particularly to address the claim that this is the first approach to combine metric learning and classification.
[1]: Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. Ruslan Salakhutdinov and Geoffrey Hinton. AISTATS 2007 | train | [
"r1jeC_Kgf",
"BJ1FJQ5lG",
"SkzzOIcez",
"SyHYEUjGz",
"ryz17UoMf",
"SJnwxLjzz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"(Summary)\nThis paper proposes weighted RBF distance based loss function where embeddings for cluster centroids and data are learned and used for class probabilities (eqn 3). The authors experiment on CUB200-2011, Cars106, Oxford 102 Flowers datasets.\n\n(Pros)\nThe citations and related works cover fairly comprehensive and up-to-date literatures on deep metric learning.\n\n(Cons)\nThe proposed method is unlikely to scale with respect to the number of classes. \"..our approach is also free to create multiple clusters for each class..\" This makes it unfair to deep metric learning baselines in figures 2 and 3 because DMP baselines has memory footprint constant in the number of classes. In contrast, the proposed method have linear memory footprint in the number of classes. Furthermore, the authors ommit how many centroids are used in each experiments.\n\n(Assessment)\nMarginally below acceptance threshold. The method is unlikely to scale and the important details on how many centroids the authors used in each experiments is omitted.",
"- The paper proposes to use RBF kernel based neurons with each training data point as a center of\n one of the RBF kernel neuron. (i) Kernel based neural networks have been explored before [A] and\n (ii) ideas similar to the nearest neighbour based efficient but approximate learning for mixture\n of Gaussians like settings have also been around, e.g. in traning GMMs [B]. Hence I would consider\n the novelty to be very low \n- The paper says that the method can be applied to embedding learning and classification, which were\n previously separate problems. This is largely incorrect as many methods for classification,\n especially in zero- and few-shots settings (on some of the datasets used in the paper) are using\n embedding learning [C], one of the cited and compared with paper (Sohn 2016) also does both\n (mostly these methods use k-NN classifier with Euclidean distance between learned embeddings)\n- It seems that the method thus is adding a kernel neuron layer, with the number equal to the number\n of training samples, centers initialized with the training samples, followed by a normalized\n voting based on the distance of the test example with training examples of different classes\n (approximately a weighted k-NN classifier)\n- The number of neurons in the last layer thus scales with the number of training examples, which\n can be prohibitively large \n- It is difficult to understand what exactly is the embedding; if the number of neurons in the\n RBF layer is equal to the number of training examples then it seems the embedding is the activation\n of the layer before that (Fig1 also seems to suggest this). But the evaluation is done with\n different embedding sizes, which suggests that another layer was inserted between the last FC\n layer of the base network and the RBF layer. In that case the empirical validation is not fair as\n the network was made deeper.\n- Also, it is a bit confusing that as training proceeds the centers change (Sec3.3 first few lines),\n so the individual RBF neurons, eventually, do not necessarily correspond to the training examples\n they were initialized with, but the final decision (Eq4) seems to be taken assuming that the\n neurons do correspond to the training examples (and their classes). While the training might\n ensure that the centers do not move so much, this should be explicitly discussed and clarified.\n \nOverall, the novelty of the paper seems to be low and it is difficult to understand what exactly is\nbeing done. \n\n[A] Xu et al., Kernel neuron and its training algorithm, ICONIP 2001\n[B] Verbeek et al., Efficient greedy learning of gaussian mixture models, Neural Computation 2003\n[C] Xian et al., Latent Embeddings for Zero-shot Classification, CVPR 2016",
"The authors propose a loss that is based on a RBF loss for metric learning and incorporates additional per exemplar weights in the index for classification. Significant improvements over softmax are shown on several datasets.\n\nIMHO, this could be a worthwhile paper, but the framing of the paper into existing literature is lacking and thus it appears as if the authors are re-inventing the wheel (NCA loss) under a different name (RBF solver).\n\nThe specific problems are:\n- The authors completely miss the connection to NCA loss (https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf) and thus appear to be re-inventing the wheel.\n - The proposed metric learning scenario is exactly as proposed in the NCA loss works, while the classification approach adds an interesting twist by learning per exemplar weights. I haven't encountered this before and it could make an interesting proposal. Of course the benefit of this should be evaluated in ablation studies( Tab 3 shows one experiment with marginal improvements).\n- The authors' use of 'solver' seems uncommon and confusing. What is proposed is a loss in addition to building a weighted index in the case of classification.\n- In the metric learning comparison with softmax (end of page 9) the authors mentions that a Gaussian standard deviation for softmax is learned. It appears as if the authors use the softmax logits as embedding whereas the more common approach is to use the bottleneck layer. This is also indicated by the discussion at the end of page 10 where the authors mention that softmax is restricted to axis aligned embeddings. All softmax metric learning experiments should be carried out on appropriately sized bottleneck layers.\n- Some of the motivations of what the various methods learn seem flawed, e.g. triplet loss CAN learn multiple modes per class and there is nothing in the Softmax loss that encourages the classes to fill a large region of the space.\n- Why don't the authors compare on ImageNet?\n\nSome positive points:\n- The authors mention in Sec 3.3 that updating the RBF centres is not required. This is a crucial point that should be made a centerpiece of this work, as there are many metric learning works that struggle with this. Additional experiments that can investigate this point would greatly contribute to a well rounded paper.\n- The numbers reported in Tab 1 show very significant improvements\n\nIf the paper was re-framed and builds on top of the already existing NCA loss, there could be valuable contributions in this paper. The experimental comparisons are lacking in some respect, as the comparison with Softmax as a metric learning method seems uncommon, i.e. using the logits instead of the bottleneck layer. I encourage the authors to extend the paper and flesh out some of the experiments and then submit it again.",
"Thank you for your comments. The main points of your review are addressed below.\n\n-Scalability with the number of classes.\n\nScalability in terms of computation: As the number of classes (and therefore the number of training examples) increases, the number of RBF centres also increases. Our approach is scalable to large numbers of training examples, as we use fast approximate nearest neighbour search to obtain approximate nearest neighbour subsets for computing the loss and gradients. Fast Approximate Nearest Neighbour Graphs (Harwood and Drummond, 2016) make our approach scalable up to a very large number of training set examples, as well as a large embedding dimension.\n\nScalability in terms of performance: Figure 4 suggests that the margin between our approach and softmax classification performance will shrink as the number of training examples per class becomes larger. However, there is nothing to suggest that our approach will scale poorly as the number of classes increases.\n\n\n-Memory footprint of DML approaches and fair comparison.\n\nThe memory footprint of our approach during training is linear with the number of training set examples, not the number of classes. This is the same for the DML baselines, which record each training set embedding in order to collect statistics and perform triplet selection. For example, Kumar et al. (2017) perform smart mining over all of the training set embeddings to select triplets. As such, our comparisons to the DML baselines are fair.\n\n\n-Number of centroids used.\n\nThe number of centroids is not a hyperparameter of our model. Our approach is free to form as many clusters for a given class as best represents the data. This is an advantage of our approach compared to other deep metric learning approaches that attempt to learn local similarity, since we do not have to determine the desired number of clusters or the cluster size before training. \n\nThank you again for your review.",
"Thank you for your comments and review. The major points raised are addressed below.\n\n-Kernel based neurons.\n\nKernel based neurons have been explored before and we discuss this briefly in our literature review (such as the work by Broomhead and Lowe (1988)). Unlike kernel neuron approaches, our RBF neurons are not learnable parameters of the model. Rather, the RBF centres are coupled to high dimensional training set feature embeddings. Further, we introduce a learnable per exemplar weight for the RBF centres. We also show how to make training tractable by allowing RBF centres to become out-of-date with the training embeddings for periods of time during training. Finally, we demonstrate how to make our approach scalable with the number of training examples and the embedding dimension, by leveraging fast approximate nearest neighbour search.\n\n\n-Other approaches for both classification and metric learning.\n\nAs stated, embedding learning approaches have been used for zero or few shot classification scenarios, but these approaches do not scale well beyond these settings of impoverished training data. This is seen in the comparison between triplet and softmax for classification in Rippel et al. (2016). On the same and similar datasets as experimented with in our paper, triplet deep metric learning approaches under perform softmax for classification tasks by up to 10% (Rippel et al. 2016). Contrary to this, our metric learning approach outperforms softmax on such datasets. Although there are a few other examples of metric learning approaches that have been used for classification, such as in Sohn (2016), the advantage over a softmax classifier is inconsistent between datasets.\n\n\n-The embedding and fair comparison to other metric learning approaches.\n\nNo extra depth is added to the base networks to which we compare. In the classification experiments, the softmax baseline networks have a final FC layer, with the number of channels equal to the number of classes, and softmax loss applied to the output of this layer. Our approach removes the softmax and final FC layer and replaces them with our RBF loss layer. This means the embedding is the output from the layer immediately before the final FC layer in the softmax network (e.g. the 4096 dimension FC7 for VGG16 or the 2048 dimension final average pooling layer for ResNet). As such, the model capacity of our approach is reduced compared to softmax.\n\nFor the comparison to deep metric learning approaches (Table 1), we follow the exact same set-up used in the papers to which we compare. These approaches insert an additional FC layer after the final average pooling layer of GoogLeNet, in order to achieve the desired embedding dimension. The compared approaches use an embedding dimension of 64 and we show that our approach outperforms these methods at this sized embedding. Additionally, we show that our approach is able to take advantage of a larger embedding space, while triplet based approaches do not see the same benefit from increasing the embedding dimension (Figures 2 and 3).\n\n\n-RBF centres moving during training.\n\nThe centres are updated at regular intervals during training (every 1, 5 or 10 epochs, for example). This is shown to be sufficient for the model to learn. Any testing of the network is carried out with fully updated centres (i.e. centres that correspond exactly to the training examples). 
Practically, this is done by doing a full forward pass of the training data at the predefined interval during training and updating the model parameters that correspond to the RBF centres. At the completion of training, the centres are again updated to correspond with the training examples, before testing/deploying the model.\n\nThank you again for your comments.",
"Thank you for review and suggestions to improve the work. We address the main points of your review below.\n\n-NCA loss.\n\nThere is indeed a strong connection between our work and NCA loss, and we thank the reviewer for pointing us towards this missed reference. However, this does not detract from the following novel contributions of our paper. Firstly, our approach is scalable both in the number of training examples and the embedding dimension, due to the leveraging of fast approximate nearest neighbour search. Further, our approach is contextualised amongst current deep metric learning approaches and applied to the domains of transfer learning and classification. We show how to train a deep neural network using our loss function to achieve state-of-the-art transfer learning/embedding space learning results, while also outperforming softmax-based classification. These two different target domains are rarely tackled simultaneously in the literature. Additionally, our approach addresses the issues associated with nearest neighbour based learning, by allowing RBF centres to become out-of-date with the training embeddings. We further perform an analysis on the number of nearest neighbours required for the model to learn, when the centres and training embeddings drift apart. Unlike the linear transformation in NCA, our approach learns a non-linear transformation from the input space to the embedding space. We also study the importance of the embedding dimension, which is not addressed in the NCA work. Finally, our approach includes a learnable weight per exemplar, strengthening the classification capability of the model.\n\n \n-Clarification on using bottleneck layer for softmax.\n\nWe do in fact use the bottleneck layer for these experiments, not the softmax logits. The FC7 layer of VGG16, with a 4096 dimension output, is used for both our approach and the softmax metric learning experiments.\n\n\n-Triplet loss and multiple modes per class.\n\nAs triplet loss demands that semantically related instances are located nearby, and the only form of supervisory semantic information used is the class labels, standard triplet loss approaches will attempt to form a single cluster per class. The local structure of the space isn’t considered, meaning that any notion of intra-class similarity is lost. Although there are some approaches that attempt to represent local similarity, these require the parameters of the embedding space, such as the number of modes per class or the cluster size, to be determined before training. This is not ideal as this information cannot be determined by simply looking at the input space. Our approach, however, makes no assumptions about how the embedding space should be structured and allows clusters to form freely.\n\nThank you again for your comments."
] | [
5,
3,
4,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_SkFEGHx0Z",
"iclr_2018_SkFEGHx0Z",
"iclr_2018_SkFEGHx0Z",
"r1jeC_Kgf",
"BJ1FJQ5lG",
"SkzzOIcez"
] |
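The reviews above frame the proposed loss as an NCA-style rule: class scores obtained by summing RBF (Gaussian kernel) responses to training-set centres and normalising. The sketch below shows only that generic rule; it is an illustration under assumptions, not the paper's implementation, and it omits the learnable per-exemplar weights and approximate nearest-neighbour search the authors describe.

```python
import numpy as np

def rbf_class_probs(query, centres, centre_labels, num_classes, sigma=1.0):
    """Generic NCA/RBF-style classification: sum Gaussian similarities to
    training-set centres per class, then normalise into probabilities."""
    d2 = np.sum((centres - query) ** 2, axis=1)      # squared distances to all centres
    k = np.exp(-d2 / (2.0 * sigma ** 2))             # RBF responses
    scores = np.zeros(num_classes)
    np.add.at(scores, centre_labels, k)              # accumulate responses per class
    return scores / scores.sum()
```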
iclr_2018_rJ695PxRW | Discovering Order in Unordered Datasets: Generative Markov Networks | The assumption that data samples are independently identically distributed is the backbone of many learning algorithms. Nevertheless, datasets often exhibit rich structures in practice, and we argue that there exist some unknown orders within the data instances. Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. Specifically, we assume that the instances are sampled from a Markov chain. Our goal is to learn the transitional operator of the chain as well as the generation order by maximizing the generation probability under all possible data permutations. One of our key ideas is to use neural networks as a soft lookup table for approximating the possibly huge, but discrete transition matrix. This strategy allows us to amortize the space complexity with a single model and make the transitional operator generalizable to unseen instances. To ensure the learned Markov chain is ergodic, we propose a greedy batch-wise permutation scheme that allows fast training. Empirically, we evaluate the learned Markov chain by showing that GMNs are able to discover orders among data instances and also perform comparably well to state-of-the-art methods on the one-shot recognition benchmark task. | rejected-papers | The problem of discovering ordering in an unordered dataset is quite interesting, and the authors have outlined a few potential applications. However, the reviewer consensus is that this draft is too preliminary for acceptance. The main issues were clarity, lack of quantitative results for the order discovery experiments, and missing references. The authors have not yet addressed these issues with a new draft, and therefore the reviewers have not changed their opinions. | train | [
"B1ySxEolG",
"Hy97waqxM",
"r1bzglTgG",
"rksTsaYfM",
"ryXIsptMG",
"HJFOc6tfz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"\nThe authors deal with the problem of implicit ordering in a dataset and the challenge of recovering it, i.e. when given a random dataset with no explicit ordering in the samples, the model is able to recover an ordering. They propose to learn a distance-metric-free model that assumes a Markov chain as the generative mechanism of the data and learns not only the transition matrix but also the optimal ordering of the observations.\n\n\n> Abstract\n“Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. ”\nI am not sure what automatically refers here to. Do the authors mean that the GMN model does not explicitly assume any ordering in the observed dataset? This needs to be better stated here. \n“Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically; given an unordered dataset, it outputs the best -most possible- ordering.”\n\nMost of the models assume an explicit ordering in the dataset and use it as an integral modelling assumption. Contrary to that they propose a model where no ordering assumption is made explicitly, but the model itself will recover it if any.\n\n> Introduction\nThe introduction is fairly well structured and the example of the joint locations in different days helps the reader. \n\nIn the last paragraph of page 1, “we argue that … a temporal model can generate it.”, the authors present very good examples where ordered observations (ballerina poses, video frames) can be shuffled and then the proposed model can recover a temporal ordering out of them. What I would like to think also here is about an example where the recovered ordering will also be useful as such. An example where the recovered ordering will increase the importance of the inferred solution would be more interesting..\n\n\n\n2. Related work\nThis whole section is not clear how it relates to the proposed model GMN. Rewriting is strongly suggested. \nThe authors mention Deep Generative models and One-shot learning methods as related work but the way this section is constructed makes it hard for the reader to see the relation. It is important that first the authors discuss the characteristics of GMN that makes it similar to Deep generative models and the one-shot learning models. They should briefly explain the characteristics of DGN and one-shot learning so that the readers see the relationship. \nAlso, the authors never mention that the architecture they propose is deep.\n \nRegarding the last paragraph of page 2, “Our approach can be categorised … can be computed efficiently.”:\nNot sure why the authors assume that the samples can be sampled from an unmixed chain. An unmixed chain can also result in observing data that do not exhibit the real underlying relationships. Also the authors mention couple of characteristics of the GMN but without really explaining them. What are the explicit and implicit models [1] … this needs more details. \n\n[1] P. J. Diggle and R. J. Gratton. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society. Series B (Methodological), pages 193–227, 1984. \n\n“Second, prior approaches were proposed based on the notion of denoising models. In other words, their goal was generating high-quality images; on the other hand, we aim at discovering orders in datasets.” —>this bit is confusing. 
Do the authors mean that prior approaches were considering the observed ordering as part of the model assumptions and were just focusing on the denoising? \n\n3. Generative Markov models\nFirst, I would like to draw the attention of the authors on the terminology they use. The states here are not the latent states usually referred in the literature of Markov chains. The states here are observed and should not be confused with the emissions also usually stated in the corresponding literature. There are as many states as the number of observations and not differentiation is made for ties. All these are based on my understanding of the model.\n\nIn the Equation just before equation (1), on the left hand side, shouldn’t \\pi be after the `;’. It’s an average over the possible \\pi. We cannot consider the average over \\pi when we also want to find the optimal \\pi. The sum doesn’t need to be there. Shouldn’t it just be max_{\\theta, \\pi} log P({s_i}^{n}_{i=1}; \\pi, \\theta) ?\nEquation (1), same. The summation over the possible \\pi is confusing. It’s an optimisation problem…\n\npage 4, section 3.1: The discussion about the use of Neural Net for the construction of the transition matrix needs expansion. It is unclear how the matrix is constructed. Please add more details. E.g. use of soft-max non-linear transformation so that the output of the Neural Net can be interpreted as the probabilities of jumping to one of the possible states. In this fashion, we map the input (current state) and transform it to the probability gf occupying states at the next time step.\n\nWhy this needs expansion: The construction of the transition matrix is the one that actually plays the role of the distance metric in the related models. More specifically, the choice of the non-linear function that outputs the transition probability is crucial; e.g. a smooth function will output comparable transition probabilities to similar inputs (i.e. similar states). \n\nsection 3.2: \nMy concern about averaging over \\pi applies on the equations here too. \n\n“However, without further assumption on the structure of the transitional operator..”—> I think the choice of the nonlinear function in the output node of the NN is actually related to the transition matrix and defines the probabilities. It is a confusing statement to make and authors need to discuss more about it. After all, what is the driving force of the inference? This is a problem/task where the observations are considered in a number of different permutations. As such, the ordering is not fixed and the main driving force regarding the best choice of ordering should come from the architecture of the transition matrix; what kind of transitions does the Neural Net architecture favour? Distance free metric but still assumptions are made that favour specific transitions over others. \n\n“At first, Alg. 1 enumerates all the possible states appearing in the first time step. For each of the following steps, it finds the next state by maximizing the transition probability at the current step, i.e., a local search to find the next state. ” —> local search in the sense that the algorithm chooses as the next state the state with the biggest transition probability (to it) as defined in the Neural Net (transition operator) output? This is a deterministic step, right? \n\n4.1 DISCOVERING ORDERS IN DATASETS \nNice description of the datasets. In the <MSR_SenseCam> the choice of one of the classes needs to be supported. Why? 
What do the authors expect to happen if a number of instances from different classes are chosen? \n\n4.1.1 IMPLICIT ORDERS IN DATASETS \nThe explanation of the inferred orderings for the GMN and Nearest Neighbour models is not clear. In Figure 2, what forces the GMN to make distinguishable transitions, as opposed to the Nearest Neighbour approach, which prefers to get stuck at similar states? Is it the transition matrix architecture as defined by the neural network? \n\n>> Figure 10: why use X here? Why not keep consistent by using s?\n\n*** Do the authors test the model performance on an ordered dataset (after shuffling it…)? Is the model able to recover the order? ***\n",
"[After rebuttal]: \nI appreciate the effort the authors have put into the rebuttal, but I do not see a paper revision or new results, so I keep my rating.\n\n---\n\nThe paper proposes “Generative Markov Networks” - a deep-learning-based approach to modeling sequences and discovering order in datasets. The key ingredient of the model is a deep network playing the role of a transition operator in Markov chain, trained via Variational Bayes, similar to a variational autoencoder (but with non-identical input and output images). Given an unordered dataset, the authors maximize its likelihood under the model by alternating gradient ascent steps on the parameters of the network and greedy reordering of the dataset. The model learns to find reasonable order in unordered datasets, and achieves non-trivial performance on one-shot learning. \n\nPros:\n1) The one-shot learning results are promising. The method is conceptually more attractive than many competitors, because it does not involve specialized training on the one-shot classification task. The ability to perform unsupervised fine-tuning on the target test set is also appealing.\n2) The idea of explicitly representing the neighborhood structure within a dataset is generally interesting and seems related to the concept of low-dimensional image manifold. It’s unclear why does this manifold have to be 1-dimensional, though.\n\nCons:\n1) The motivation of the paper is not convincing. Why does one need to find order in unordered datasets? The authors do not really discuss this at all, even though this seems to be the key task in the paper, as reflected in the title. What does one do with this order? How does one even evaluate if a discovered order is good or not?\n2) The one-shot classification results are to me the strongest part of the paper. However, they are rushed and not analyzed in detail. It is unclear which components of the system contribute to the performance. As I understand the method, the authors effectively select several neighbors of the labeled samples and then classify the remaining samples based on the average similarity to these. What if the same procedure is performed with a different similarity measure, not the one learned by GMN? I am not convinced that the proposed method is well tuned for the task. Why is it useful to discover one-dimensional structure, rather than learning a clustering or a metric? Could it be that with a different similarity measure (like the distance in the feature space of a network trained on classification) this procedure would work even better? Or is GMN especially good for this task? If so. why?\n3) The experiments on dataset ordering are not convincing. What should one learn from those? There are no quantitative results, just a few examples (and more in the supplement). The authors even admit that “Comparing to the strong ordering baseline Nearest Neighbor sorting, one could hardly tell which one is better”. Nearest neighbor with Euclidean metric is not a strong baseline at all, and not being able to tell if the proposed method is better than that is not a good sign.\n4) The authors call their method distance-metric-free. This is strange to me. The loss function used during training of the network is a measure of similarity between two samples (may or may not be a proper distance metric). So the authors do assume having some similarity measure between the data points. 
The distance-metric-free claim is similar to saying that negative log-likelihood of a Gaussian has nothing to do with Euclidean distance. \n5) The experiments on using the proposed model as a generative model are confusing. First, the authors do not generate the samples directly, but instead select them from the dataset - this is quite unconventional. Then, the NN baseline is obviously doomed to jump between two samples - the authors could come up with a better baseline, for instance linearly extrapolating based on two most recent samples, or learning the transition operator with a simple linear model. \n6) I am puzzled by the hyperparameter choices. It seems there was a lot of tuning behind the scenes, and it should be commented on. The parameters are very different between the datasets (top of page 7), why is that? Why do they have to differ so much - is the method very unstable w.r.t. the parameters? How can it be that b_{overlap} = b ? Also, in the one-shot classification results, the number of sampled neighbors is 1 without fine-tuning and 5 with fine-tuning - this is strange and not explained.\n7) This work seems related to simultaneous clustering and representation learning, in that it combines discrete reordering and continuous deep network training. The authors should perhaps mention this line of work. See e.g. Yang et al. “Joint Unsupervised Learning of Deep Representations and Image Clusters”, CVPR 2016.\n\nTo conclude, the paper has some interesting ideas, but the presentation is not convincing, and the experiments are substandard. Therefore at this point I cannot recommend the paper for publication.",
"The paper is about learning the order of an unordered data sample via learning a Markov chain. The paper is well written, and experiments are carefully performed. The math appears correct and the algorithms are clearly stated. However, it really is unclear how impactful are the results.\n\nGiven that finding order is important, A high level question is that given a markov chain's markov property, why is it needed to estimate the entire sequence \\pi star at all? Given that the RHS of the first equation in section 3.2 factorizes, why not simply estimate the best next state for every data s_i?\n\nIn the related works section, there are past generative models which deserve mentions: Deep Boltzmann Machines, Deep Belief Nets, Restricted Boltzmann Machines, and Neural Autoregressive Density Estimators.\n\nEquation 1, why is P(\\pi) being multiplied with the probability of the sequence p({s_i}) ? are there other loss formulations here?\n\nAlg 1, line 7, are there typos with the subscripts?\n\nSection 3.1 make sure to note that f(s,s') sums to 1.0, else it is not a proper transition operator.\n\nSection 3.4, the Bernoulli transition operators very much similar to RBMs, where z is the hidden layer, and there are a lot of literature related to MCMC with RBM models.\n\nDue the complexity of the full problem, a lot of simplification are made and coordinate descent is used. However there are no guarantees to finding the optimal order and a local minimum is probably always reached. Imagining a situation where there are two distinct clusters of s_i, the initial transition operator just happen to jump to the other cluster. This would produce a very different learned order \\pi compared to a transition operator which happen to be very local. Therefore, initialization of the transition operator is very important, and without any regularization, it's not clear what is the point of learning a locally optimal ordering.\n\nMost of the ordering results are qualitative, it would be nice if a dataset with a ground truth ordering can be obtained and we have some quantitative measure. (such as the human pose joint tracking example given by the authors)\n\nIn summary, there are some serious concerns on the impact of this paper. However, this paper is well written and interesting.\n\n\n\n",
"We thank the Reviewer for pointing out the possible improvements on the paper.\n\n1. [Concerns on the Motivation and Quantitative Results]\n\nConsider the task of studying evolutions for galaxy or star systems. Usually, the process takes millions or even billions of years, and it is infeasible for a human to collect successive data points manifesting meaningful changes. Therefore, we propose to recover the evolution when just providing a snapshot of thousands of data points. Similar arguments can be made in the study of slow-moving human diseases such as Parkinson's. On the opposite side, the cellular or molecular processes are too fast to permit entire trajectories. In these applications, scientists would like to recover the order from non-sequenced and individual data, which can further benefit the following researches such as learning dynamic systems, observing specific patterns in the data stream, and performing comparisons on different sequences. We will add these comments in the revised manuscript.\n\nAdditionally, in the revised manuscript, we will provide the quantitative results that compare our proposed algorithm with the true order and other methods in some order-given datasets.\n\n2. [Concerns on the One-Shot Learning Experiments]\n\nTo clarify, given a labeled data, we do not select nearest neighbor data for it. Instead, we treat our proposed GMN as a generative model and then generate a sequence of data. Consider the 5-way (i.e., 5 classes) 1-shot (i.e., 1 labeled data per class) task; now we'll have 5 sequences for different categories. Next, we determine the class of unlabeled data based on the fitness within each sequence, which means we determine the class based on the highest generation probability (see Eq. (4)). On the other hand, all the other approaches are deterministic models, which are not able to generate data. Note that, we only have 1 labeled data per class at testing time.\n\n3. [Nearest Neighbor as a strong baseline]\n\nAs far as we know, there is not much prior work on discovering the order in an unordered dataset. Therefore, we consider Nearest Neighbor as a baseline method. We will avoid the \"strong\" word in the revised manuscript.\n\n4. [Distance Metric Free]\n\nWe do not intend to claim the negative log-likelihood of a Gaussian has nothing to do with Euclidean distance. We aim to propose an algorithm that can discover the order based on the Markov chain generation probability. This is compared to the Nearest Neighbor sorting, which requires a pre-defined distance metric. To avoid the confusion, we will rephrase distance-metric-free term in the revised manuscript.\n\n5. [Concerns on Generative Model Experiments]\n\nWe will rephrase the section to avoid confusion with conventional experiments in the generative model. \n\nFig. 2 illustrates the advantage of using our proposed algorithm for searching next state. Our transition operator is trained to recover the order in the entire dataset, and thus it could significantly reduce the problem of being stuck in similar states. Note that this is all carried out under a unified model. Therefore, we adopt Nearest Neighbor search as a baseline comparison. To provide more thorough experiments, we will also provide the suggested baseline \"linearly extrapolating based on two most recent samples\" in the revised manuscript.\n\n6. [Concerns on the Hyper Parameters]\n\nOur proposed algorithm is not very sensitive to the choice of hyperparameters. First, the total number of data in various datasets are different. 
For example, MNIST, Horse, and MSR_SenseCam have 60,000, 328, and 362 instances, respectively. Second, we can feed the entire dataset into a batch when the total number of data is small. That is, we can have b = 328 and 362 for Horse and MSR_SenseCam dataset, respectively. And the corresponding overlaps between batches (i.e., b_overlap) would be 328 and 362. Please see Alg. 2 for more details.\n\n7. [Concerns on Related Works]\n\nAlthough we do not focus on clustering, we will add the discussion with the suggested paper in the revised manuscript\n\n",
"1. [Concern on the Abstract] \nThe term \"automatically\" refers to the meaning that our proposed GMN assumes this order can be learned even though it is not given explicitly. We will clarify this in the revised manuscript.\n\n\n2. [Concern on the Introduction] \nConsider the task of studying evolutions for galaxy or star systems. Usually, the process takes millions or even billions of years, and it is infeasible for a human to collect successive data points manifesting meaningful changes. Therefore, we propose to recover the evolution when just providing a snapshot of thousands of data points. Similar arguments can be made in the study of slow-moving human diseases such as Parkinson's. On the opposite side, the cellular or molecular processes are too fast to permit entire trajectories. In these applications, scientists would like to recover the order from non-sequenced and individual data, which can further benefit the following researches such as learning dynamic systems, observing specific patterns in the data stream, and performing comparisons on different sequences. We will add these comments in the revised manuscript.\n\n3. [Concern on Related Work]\nWe thank the Reviewer for providing helpful suggestions for improving Related Work section. We will make more clear connections between our proposed GMN and Deep Generative Models as well as One-Shot Learning Models. Moreover, since we utilize deep neural networks for amortizing the large state space in the transitional operator, we consider our model as a deep model.\n\nAll previous works build on a strong assumption that the chain needs to be mixed, while in practice it’s very hard to judge whether a chain is mixing or not. As a comparison, our model is free of this assumption, because the underlying model does not build on any property related to the stationary distribution. It is not our intent to claim that the unmixed chain can result in exhibiting real data relationships. We will clarify this as well as the differences between \"implicit\" and \"explicit\" model in the revised manuscript.\n\nAdditionally, prior work proposed to learn the Markov chain such that the data are gradually denoised from low-quality to high-quality images. On the other hand, our model aims to order the data by assuming the order follows Markov chain data generation order. \n\n4. [Concerns on the Generative Markov Models]\n\nYes, we agree that it’s very important to describe in more detail on how to construct the transitional operators using neural networks. As the reviewer has pointed out, this essentially plays the role of the implicit distance metric in our model. We thank the reviewer for this suggestion and we will definitely expand the discussion in a revised version. In the current version, we briefly discuss the neural network parametrization in Sec. 3.4. More specifically, we consider two distribution families (Bernoulli for binary-valued state and Gaussian for real-valued state). Also, this is a proper transitional operator. That is, sum of f(s,s') is 1.0. We use the conditional independence assumption which is also adopted in Restricted Boltzmann Machines. We will note this in the revised manuscript. \n\n5. [Concerns on Sec. 4.1]\n\nWe randomly partition the entire datasets into batches, which means that, in each batch, we do not assume all the classes are available nor an equal number of instances per class. We will clarify this in the revised manuscript.\n\n6. [Concerns on Sec. 4.1.1]\n\nFig. 
2 illustrates the advantage of using our proposed algorithm for searching for the next state. Our transitional operator is trained to recover the order in the entire dataset, and thus it could significantly reduce the problem of getting stuck in similar states. The distinguishable transitions come from our algorithm rather than from the architecture design of the transitional operator. However, the neural network parametrization is also crucial. The neural network serves as a universal function approximator, which enables us to amortize the large state space for every single state in a unified model.\n\n7. [Concerns on the Consistency between x and s]\n\nWe will unify the notation in the revised manuscript.\n\n8. [Evaluation on Ordered Dataset]\n\nWe do provide an evaluation with an ordered dataset (Moving MNIST) in the Supplementary Material. In the revised manuscript, we will also provide the quantitative results that compare our proposed algorithm with the true order and other methods for more order-given datasets.\n\n\n",
"1. [Impact of the Results] \n\nWe think finding the implicit order in a given set is an important problem and the proposed method could be applied in various domains, including studying galaxy evolutions/human diseases, and recovering videos from image frames.\n\n2. [Why estimating the entire sequence \\pi?]\n\nIn our algorithmic development we indeed estimate the best next state for each given state in the dataset (See Alg. 1, Line 5). But such greedy heuristics is a local search strategy and does not guarantee the globally optimal ordering that maximizes the likelihood function. \n\nOn the other hand, we have also conducted experiments for estimating the best next state given every state s_i. Unfortunately, this makes the learned Markov chain stuck in a few dominant modes. To fix this, we treat the Markov chain generation process as the permutation (i.e., an implicit order) of the data. This modification encourages the state to explore different states without having the issue of collapsing into few dominant modes. We will clarify this in the revised manuscript.\n\n3. [Permutation \\pi]\n\nWe assume the dataset exhibits an implicit order \\pi^* which follows the generation process in a Markov chain. However, the direct computation is computationally intractable (i.e., the total number of data may be too large). In Sec. 3.3, we relax the learning of the order from the entire dataset into different batches of the dataset. To ensure an ergodic Markov chain, we assure the batches overlap with each other.\n\n\n4. [Related Generative Models] \n\nWe will add the discussions in related work for Deep Boltzmann Machines, Deep Belief Nets, Restricted Boltzmann Machines, and Neural Autoregressive Density Estimators.\n\n5. [Typos and Clarifications] \n\nThere is an additional term (a typo) \\sum_{\\pi \\in \\Pi(n)} in Eq. (1). However, the prior of permutation (i.e., \\pi) may not be uniform, and thus P(\\pi) should not be avoided in Eq. (1). \n\nThere is also a typo in line 7, Alg. 1. \n\nWe will fix these typos in the revised manuscript.\n\n6. [Transitional Operator]\n\nSum of f(s,s') is 1.0. We use the conditional independence assumption which is also adopted in Restricted Boltzmann Machines. We will note this in the revised manuscript. Other MCMC approaches related to RBM will also be discussed in Sec. 3.4 in the revised manuscript.\n\n7. [No guarantees to finding the optimal order]\n\nIn the revised version we have shown that finding the globally optimal order in a given Markov chain and a dataset is NP-complete, hence there is no efficient algorithm that can find the optimal order. We argue that in this sense, locally optimal order obtained using greedy heuristics is favorable in many real-world applications.\n\n8. [Concern on Initialization]\n\nWe have tried three different initializations in our experiments. The first is to use Nearest Neighbor with Euclidean distance to suggest an initial order, and then train the transitional operator based on this order in few iterations (i.e., 5 iterations). The second is replacing Euclidean distance with L1-distance. The third is random initialization. We observe that even for the random initialization, the order recovered from our proposed algorithm still leads to a reasonable one that avoids unstable jumps between two distinct clusters. Therefore, we argue that the initialization may not be so crucial to our algorithm. We will add the discussion in the revised manuscript.\n\n9. 
[Quantitative Results]\n\nIn the revised manuscript, we will provide the quantitative results that compare our proposed algorithm with the true order and other methods for some order-given datasets. \n\n"
] | [
4,
4,
4,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_rJ695PxRW",
"iclr_2018_rJ695PxRW",
"iclr_2018_rJ695PxRW",
"Hy97waqxM",
"B1ySxEolG",
"r1bzglTgG"
] |
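To make the training scheme discussed in the Generative Markov Networks reviews and responses above concrete, here is a minimal sketch of the alternating procedure they describe: greedily re-order the dataset under the current transition operator, then update the operator on consecutive pairs of the new order. The callables `log_transition_prob` and `update_operator` are hypothetical stand-ins for the learned neural transition model, not the paper's actual implementation.

```python
# Minimal sketch, under the assumptions stated above: greedy re-ordering
# followed by refitting the transition operator on consecutive pairs.
def greedy_order(data, log_transition_prob, start_idx=0):
    """Greedily chain states, always picking the most probable next state."""
    remaining = set(range(len(data))) - {start_idx}
    order = [start_idx]
    while remaining:
        cur = data[order[-1]]
        nxt = max(remaining, key=lambda j: log_transition_prob(cur, data[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def fit_order(data, log_transition_prob, update_operator, n_iters=10):
    """Coordinate-descent-style loop: re-order the data, then refit the operator."""
    order = list(range(len(data)))              # arbitrary initial order
    for _ in range(n_iters):
        order = greedy_order(data, log_transition_prob, start_idx=order[0])
        pairs = [(data[i], data[j]) for i, j in zip(order[:-1], order[1:])]
        update_operator(pairs)                  # e.g. gradient step(s) on the operator
    return order
```

As the reviews note, this greedy search only finds a locally optimal order, which is part of their critique.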
iclr_2018_SyuWNMZ0W | Directing Generative Networks with Weighted Maximum Mean Discrepancy | The maximum mean discrepancy (MMD) between two probability measures P
and Q is a metric that is zero if and only if all moments of the two measures
are equal, making it an appealing statistic for two-sample tests. Given i.i.d. samples
from P and Q, Gretton et al. (2012) show that we can construct an unbiased
estimator for the square of the MMD between the two distributions. If P is a
distribution of interest and Q is the distribution implied by a generative neural
network with stochastic inputs, we can use this estimator to train our neural network.
However, in practice we do not always have i.i.d. samples from our target
of interest. Data sets often exhibit biases—for example, under-representation of
certain demographics—and if we ignore this fact our machine learning algorithms
will propagate these biases. Alternatively, it may be useful to assume our data has
been gathered via a biased sample selection mechanism in order to manipulate
properties of the estimating distribution Q.
In this paper, we construct an estimator for the MMD between P and Q when we
only have access to P via some biased sample selection mechanism, and suggest
methods for estimating this sample selection mechanism when it is not already
known. We show that this estimator can be used to train generative neural networks
on a biased data sample, to give a simulator that reverses the effect of that
bias. | rejected-papers | The reviewers agree that the problem being addressed is interesting, however there are concerns with novelty and with the experimental results. An experiment beyond dealing with class imbalance would help strengthen this paper, as would experiments with other kinds of GANs. | train | [
"H15HuyMlz",
"SJ0Oxotlf",
"B1X0w52xz",
"H1NHCc3-f",
"HJckC9h-M",
"Bk-Npc2WG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper proposes an importance-weighted estimator of the MMD, in order to estimate the MMD between distributions based on samples biased according to a known scheme. It then discusses how to estimate the scheme when it is unknown, and further proposes using it in either the MMD-based generative models of Y. Li et al. (2015) / Dziugaite et al. (2015), or in the MMD GAN of C.-L. Li et al. (2017).\n\nThe estimator itself is natural (and relatively obvious), though it has some drawbacks that aren't fully discussed (below).\n\nThe application to GAN-type learning is reasonable, and topical. The first, univariate, experiment shows that the scheme is at least plausible. But the second experiment, involving a simple T ratio based on whether an MNIST digit is a 0 or a 1, doesn't even really work! (The best model only gets the underrepresented class from 20% up to less than 40%, rather than the desired 50%, and the \"more realistic\" setting only to 33%.)\n\nIt would be helpful to debug whether this is due to the classifier being incorrect, estimator inaccuracies, or what. In particular, I would try using T based on a pretrained convnet independent of the autoencoder representation in the MMD GAN, to help diagnose where the failure mode comes from.\n\nWithout at least a working should-be-easy example like this, and with the rest of the paper's technical contribution so small, I just don't think this paper is ready for ICLR.\n\nIt's also worth noting that the equivalent algorithm for either vanilla GANs or Wasserstein GANs would be equally obvious.\n\nEstimator:\n\nIn the discussion about (2): where does the 1/m bias come from? This doesn't seem to be in Robert and Casella section 3.3.2, which is the part of the book that I assume you're referring to (incidentally, you should specify that rather than just citing a 600-page textbook).\n\nMoreover, it is worth noting that Robert and Cassela emphasize that if E[1 / \\tilde T] is infinite, the importance sampling estimator can be quite bad (for example, the estimator may have infinite variance). This happens when \\tilde T puts mass in a neighborhood around 0, i.e. when the thinned distribution doesn't have support at any place that P does. In the biased-observations case, this is in some sense unsurprising: if we don't see *any* data in a particular class of inputs, then our estimates can be quite bad (since we know nothing about a group of inputs that might strongly affect the results). In the modulating case, the equivalent situation is when F(x) lacks a mean, which seems less likely. Thus although this is probably not a huge problem for your case, it's worth at least mentioning. (See also the following relevant blog posts: https://radfordneal.wordpress.com/2008/08/17/the-harmonic-mean-of-the-likelihood-worst-monte-carlo-method-ever/ and https://xianblog.wordpress.com/2012/03/12/is-vs-self-normalised-is/ .)\n\nThe paper might be improved by stating (and proving) a theorem with expressions for the rate of convergence of the estimator, and how they depend on T.\n\n\nMinor:\n\nAnother piece of somewhat-related work is Xiong and Schneider, Learning from Point Sets with Observational Bias, UAI 2014.\n\nSutherland et al. 
2016 and 2017, often referenced in the same block of citations, are the same paper.\n\nOn page 3, above (1): \"Since we have projected the distributons into an infinite-dimensional space, the distance between the two distributions is zero if and only if all their moments are the same.\" An infinite-dimensional space isn't enough; the kernel must further be characteristic, as you mention. See e.g. Sriperumbuder et al. (AISTATS 2010) for more details.\n\nFigure 1(b) seems to be plotting only the first term of \\tilde T, without the + 0.5.",
"This paper addresses the problem of sample selection bias in MMD-GANs. Instead of having access to an i.i.d. sample from the distribution of interest, it is assumed that the dataset is subject to sample selection bias or the data has been gathered via a biased sample selection mechanism. Specifically, the observed data are drawn from the modified distribution T(x)P(x) where P(x) is the true distribution we aim to estimate and T(x) is an appropriately scaled \"thinning function\". Then, the authors proposed an estimate of the MMD between two distributions using weighted maximum mean discrepancy (MMD). The idea is in fact similar to an inverse probability weighting (IPW). They considered both when T(x) is known and when T(x) is unknown and must be estimated from the data. The proposed method was evaluated using both synthetic and real MNIST dataset. \n\nIn brief, sample selection bias is generally a challenging problem in science, statistics, and machine learning, so the topic of this paper is interesting. Nevertheless, the motivation for investigating this problem specifically in MMD-GANs is not clear. What motivated you to study this problem specifically for GAN in the first place? How does solving this problem help us understand or solve the sample selection bias in general? Will it shed light on how to improve the stability of GAN? Also, the experiment results are too weak to make any justified conclusion.\n\nSome comments and questions:\n\n- How is sample selection bias related to the stability issue of training GAN? Does it worsen the stability?\n- Have estimators in Eq. (2) and Eq. (3) been studied before? Are there any theoretical guarantees that this estimate will convergence to the true MMD? \n- On page 5, why T(men) = 1 and T(women) equals to the sample ratio of men to women in labeled subset?\n- Can we use clustering to estimate the thinning function?",
"This paper presents a modification of the objective used to train generative networks with an MMD adversary (i.e. as in Dziugaite et al or Li et al 2015), where importance weighting is used to evaluate the MMD against a target distribution which differs from the data distribution. The goal is that this could be used to correct for known bias in the training data — the example considered here is for class imbalance for known, fixed classes.\n\nUsing importance sampling to estimate the MMD is straightforward only if the relationship between the data-generating distribution and the desired target distribution is somehow known and computable. Unfortunately the treatment of how this can be learned in general in section 4 is rather thin, and the only actual example here is on class imbalance. It would be good to see a comparison with other approaches for handling class imbalance. A straightforward one would be to use a stratified sampling scheme in selecting minibatches — i.e. rather than drawing minibatches uniformly from labeled data, select each minibatch by sampling an equal number of representatives from each class from the data. (Fundamentally, this requires explicit labels for whatever sort of bias we wish to correct for, for every entry in the dataset.) I don't think the demonstration of how to compute the MMD with an importance sampling estimate is a sufficient contribution on its own.\n\nAlso, I am afraid I do not understand the description of subfigures a through c in figure 1. The target distribution p(x) is given in 1(a), a thinning function in 1(b), and an observed distribution in 1(c). As described, the observed data distribution in 1(c) should be found by multiplying the density in 1(a) by the function in 1(b) and then normalizing. However, the function \\tilde T(x) in 1(b) takes values near zero when x < 0, meaning the product \\tilde T(x)p(x) should also be near zero. But in figure 1(c), the mode of p(x) near x=0 actually has higher probability than the mode near x=2, despite the fact that there \\tilde T(x) \\approx 0.5. I think this might simply be a mistake in the definition of \\tilde T(x), and that rather it should be 1.0 - \\tilde T(x), but in any case this is quite confusing.\n\nI also am confused by the results in figure 2. I would have thought that the right column, where the thinning function is used to correct for the class imbalance, would then have approximately equal numbers of zeros and ones in the generative samples. But, there are still more zeros by a factor of around 2.\n\nMinor note: please double-check references, there seem to be some issues; for example, Sutherland et al is cited twice, once as appearing at ICML 2016 and once as appearing at ICML 2017.\n\n",
"Thank you. You are correct about the proportion mismatch. While we move in the correct direction [amplifying the frequency of a target class in a partially labeled data set], we miss the desired theoretical distribution. We have identified some issues that may be contributing to this, and will include corrections in revised work. We agree that the thinning and weighting method can equivalently be applied in other GAN settings, and now see it as a method that applies to estimators in general. Your mention of failure modes and convergence is also appreciated, and will guide our future work.",
"Thank you. You are correct that this thinning and weighting approach is applicable to any estimator under a biased sampling mechanism. We expect to broaden our discussion in a revision. Why GAN? We are excited about this approach in GANs because we recognized the issue of propagating bias in these generative models, and sought to correct it with a distributional discrepancy metric. We agree that more discussion of stability and convergence would strengthen the work, and are considering other thinning function techniques to make the model more applicable to practitioners. For example, in a large, unlabeled data setting, a practitioner could say, “Generate more like these 10 items.” We believe this will be a useful and theoretically well-founded adaptation for a variety of realistic data settings.",
"Thank you. Class imbalance in a large-scale, unlabeled data setting remains an important and common problem, which we hope this will address. We agree that our first approach at a thinning function is simplistic, and are considering a more complex classifier, for an active-learning approach, where a practitioner could say, “The generated set needs more like these 10”. Also, you are correct about the figures: in the first case, the function is correct but the image was wrong, and we noticed after submission; in the second case, we have since identified some issues that may contribute to missing the desired final distribution. This work demonstrated the correct initial behavior, and your comments will help us revise to meet those desired theoretical outcomes."
] | [
4,
4,
4,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1
] | [
"iclr_2018_SyuWNMZ0W",
"iclr_2018_SyuWNMZ0W",
"iclr_2018_SyuWNMZ0W",
"H15HuyMlz",
"SJ0Oxotlf",
"B1X0w52xz"
] |
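The abstract and reviews above concern an importance-weighted MMD estimator for samples drawn under a known thinning function T. The sketch below is one plausible reading of such an estimator, using self-normalised weights proportional to 1/T(x) as in inverse probability weighting; the paper's exact estimator and normalisation may differ, and the function names are illustrative only.

```python
# Sketch of a self-normalised, importance-weighted MMD^2 estimate between a
# biased sample X (drawn roughly from T(x)p(x), with T known) and generator
# samples Y, using an RBF kernel. This is an assumed formulation, not the
# paper's exact estimator.
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def weighted_mmd2(X, Y, T, bandwidth=1.0):
    """X: (n, d) biased sample, Y: (m, d) model sample, T: vectorised thinning function."""
    w = 1.0 / T(X)
    w = w / w.sum()                               # self-normalise the weights
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    m = len(Y)
    # weighted average of k(x_i, x_j) over i != j
    xx = (w @ Kxx @ w - (w ** 2 * np.diag(Kxx)).sum()) / (1.0 - (w ** 2).sum())
    yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    xy = (w @ Kxy).sum() / m
    return xx - 2.0 * xy + yy
```

As the first review above points out, such weights can behave badly when T puts mass near zero in regions where P has support, so the estimator's variance deserves attention.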
iclr_2018_HydnA1WCb | Gaussian Prototypical Networks for Few-Shot Learning on Omniglot | We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix. Our network then constructs a direction and class dependent distance metric on the embedding space, using uncertainties of individual data points as weights. We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters. We report results consistent with state-of-the-art performance in 1-shot and 5-shot classification both in 5-way and 20-way regime on the Omniglot dataset. We explore artificially down-sampling a fraction of images in the training set, which improves our performance. Our experiments therefore lead us to hypothesize that Gaussian prototypical networks might perform better in less homogeneous, noisier datasets, which are commonplace in real world applications. | rejected-papers | The reviewers agree that the idea of utilizing covariance information in the few-shot setting is interesting. There are concerns with the novelty of the paper, as well as the correctness in terms of ensuring the covariance matrix is PSD in all cases. There are some concerns with the experimental evaluation as well. In this area, Omniglot is a good sanity check, but other baseline datasets like miniImagenet are necessary to determine if this approach is truly useful. | train | [
"rkHVojvez",
"r1LJyjOlM",
"BJ9tT6Fxz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an interesting extension to Snell et al.'s prototypical networks, by introducing uncertainty through a parameterised estimation of covariance along side the image embeddings (means). Uncertainty may be particularly important in the few-shot learning case this paper examines, when it is helpful to extract more information from limited number of input samples.\n\nHowever, several important concepts in the paper are not well explained or motivated. For example, it is a bit misleading to use the word \"covariance\" throughout the paper, when the best model only employs a scalar estimate of the variance. A related, and potentially technical problem is in computing the prototype's mean and variance (section 3.3). Eq. 5 and 6 are not well motivated, and the claim of \"optimal\" under eq.6 is not explained. More importantly, eq. 5 and 6 do not use any covariance information (off-diagonal elements of S) --- as a result, the model is likely to ignore the covariance structure even when using full covariance estimate. The distance function (eq. 4) is d Mahalanobis distance, instead of \"linear Euclidean distance\". While the paper emphasises the importance of the form of loss function, the loss function used in the model is given without explanation (and using cross-entropy over distances looks hacky).\n\nIn addition, the experiments are too limited to support the claimed benefits from encoding uncertainty. Since the accuracies on omniglot data from recent models are already close to perfect, it is unclear whether the marginally improved number reported here is significant. In addition, more analysis may better support existing claims. For example, showing subsampled images indeed had higher uncertainty, rather than only the histogram for all data points.\n\nPros:\n-Interesting problem and interesting direction.\n-Considers a number of possible alternative models\n-Intuitive illustration in Fig. 1\n\nCons:\n-Misleading use of \"covariance\"\n-The several important concepts including prototype mean/variance, distance, and loss are not well motivated or explained\n-Evaluation is too limited",
"The paper extends the prototypical networks of Snell et al, NIPS 2017 for one shot learning. Snell et al use a soft kNN classification rule, typically used in standard metric learning work (e.g. NCA, MCML), over learned instance projections, i.e. distances are computed over the learned projections. Each class is represented by a class prototype which is given by the average of the projections of the class instances. Classification is done with soft k-NN on the class prototypes. The distance that is used is the Euclidean distance over the learned representations, i.e. (z-c)^T(z-c), where z is the projection of the x instance to be classified and c is a class prototype, computed as the average of the projections of the support instances of a given class.\n\nThe present paper extends the above work to include the learning of a Mahalanobis matrix, S, for each instance, in addition to learning its projection. Thus now the classification is based on the Mahalanobis distance: (z-c)^T S_c (z-c). On a conceptual level since S_c should be a PSD matrix it can be written as the square of some matrix, i.e. S_c = A_c^TA_c, then the Mahanalobis distance becomes (A_c z - A_c c)^T ( A_c z-A_c c), i.e. in addition to learning a projection as it is done in Snell et al, the authors now learn also a linear transformation matrix which is a function of the support points (i.e. the ones which give rise to the class prototypes). The interesting part here is that the linear projection is a function of the support points. I wonder though if such a transformation could not be learned by the vanilla prototypical networks simply by learning now a projection matrix A_z as a function of the query point z. I am not sure I see any reason why the vanilla prototypical networks cannot learn to project x directly to A_z z and why one would need to do this indirectly through the use of the Mahalanobis distance as proposed in this paper.\n\nOn a more technical level the properties of the learned Mahalanobis matrix, i.e. the fact that it should be PSD, are not really discussed neither how this can be enforced especially in the case where S is a full matrix (even though the authors state that this method was not further explored). If S is diagonal then the S generation methods a) b) c) in the end of section 3.1 will make sure that S is PSD, I do not think that this is the case with d) though.\n\nIn the definition of the prototypes the component wise weigthing (eq. 5) works when the Mahalanobis matrix is diagonal (even though the weighting should be done by the \\sqrt of it), how would it work if it was a full matrix is not clear.\n\nOn the experiments side the authors could have also experimented with miniImageNet and not only omniglot as is the standard practice in one shot learning papers. \n\nI am not sure I understand figure 3 in which the authors try to see what happens if instead of learning the Mahalanobis matrix one would learn a projection that would have as many additional dimensions as free elements in the Mahalanobis matrix. I would expect to see a comparison of the vanilla prototypical nets against their method for each one of the different scenarios of the free parameters of the S matrix, something like a ratio of accuracies of the two methods in order to establish whether learning the Mahalanobis matrix brings an improvement over the prototypical nets with an equal number of output parameters. \n\n",
"SUMMARY: This work is about prototype networks for image classification. The idea is to jointly embed an image and a \"confidence measure\" into a latent space, and to use these embeddings to define prototypes together with confidence estimates. A Gaussian model is used for representing these confidences as covariance matrices. Within a class, the inverse covariance matrices of all corresponding images are averaged to for the inverse class-specific matrix S-C, and this S_C defines the tensor in the Mahalanobis metric for measuring the distances to the prototype. \n\nEVALUATION:\nCLARITY: I found the paper difficult to read. In principle, the idea seems to be clear, but then the description and motivation of the model remains very vague. For instance, what is the the precise meaning of an image-specific covariance matrix (supported by just one point)? What is the motivation to just average the inverse covariance matrices to compute S_C? Why isn't the covariance matrix estimated in the usual way as the empirical covariance in the embedding space? \nNOVELTY: Honestly, I had difficulties to see which parts of this work could be sufficiently novel. The idea of using a Gaussian model and its associated Mahalanobis metric is certainly interesting, but also a time-honored concept. The experiments focus very specifically on the omniglot dataset, and it is not entirely clear to me what should be concluded from the results presented. Are you sure that there is any significant improvement over the models in (Snell et al, Mishra et al, Munkhandalai & Yu, Finn et al.)? \n\n\n"
] | [
4,
3,
3
] | [
4,
4,
4
] | [
"iclr_2018_HydnA1WCb",
"iclr_2018_HydnA1WCb",
"iclr_2018_HydnA1WCb"
] |
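The reviews above describe the method as a precision-weighted prototype plus a Mahalanobis-style distance, with the best-performing variant using diagonal (or scalar) precisions. The sketch below illustrates that diagonal reading only; the paper's exact combination rules (its Eqs. 5-6) may differ, and the function names are assumed for illustration.

```python
# Sketch of the diagonal-precision reading discussed in the reviews: each
# support example i yields an embedding z_i and a diagonal precision s_i
# from the encoder; the prototype is a precision-weighted mean, and queries
# are scored by a diagonal Mahalanobis distance.
import numpy as np

def gaussian_prototype(Z, S):
    """Z: (k, d) support embeddings; S: (k, d) per-example diagonal precisions."""
    class_precision = S.mean(axis=0)                  # pooled class precision
    prototype = (S * Z).sum(axis=0) / S.sum(axis=0)   # precision-weighted mean
    return prototype, class_precision

def class_logits(query, prototypes, precisions):
    """Negative weighted squared distances to each prototype, usable as logits."""
    return -np.array([(prec * (query - proto) ** 2).sum()
                      for proto, prec in zip(prototypes, precisions)])
```

Note that, as the reviews stress, a diagonal precision ignores off-diagonal covariance structure, and positive-semidefiniteness has to be enforced by the way the precisions are generated.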
iclr_2018_ryH_bShhW | DOUBLY STOCHASTIC ADVERSARIAL AUTOENCODER | Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. The Variational Autoencoder uses a KL divergence penalty to impose the prior, whereas the Adversarial Autoencoder uses a generative adversarial network. A straightforward modification of the Adversarial Autoencoder can be achieved by replacing the adversarial network with a maximum mean discrepancy (MMD) network. This replacement leads to a new set of probabilistic autoencoders, which is also discussed in our paper.
However, an essential challenge remains in both of these probabilistic autoencoders, namely that the only source of randomness at the output of the encoder is the training data itself. A lack of sufficient stochasticity can make the optimization problem non-trivial. As a result, they can lead to degenerate solutions where the generator collapses into sampling only a few modes.
Our proposal is to replace the adversary of the adversarial autoencoder by a space of {\it stochastic} functions. This replacement introduces a new source of randomness, which can be considered a continuous control for encouraging {\it exploration}. This prevents the adversary from fitting too closely to the generator and therefore leads to a more diverse set of generated samples. Consequently, the decoder serves as a better generative network, which, unlike MMD nets, scales linearly with the amount of data. We provide mathematical and empirical evidence of how this replacement outperforms the pre-existing architectures. | rejected-papers | The reviewers all outlined concerns regarding novelty and the maturity of this work. It would be helpful to clarify the relation to doubly stochastic kernel machines as opposed to random kitchen sinks, and to provide more insight into how this stochasticity helps. Finally, the approach should be tried on more difficult image datasets. | val | [
"By7B42BxM",
"B1BsWE9lM",
"BJQGTw5lM",
"Hk9p9pV-G",
"BJxDZRVZM",
"SkJ9YnNbz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public"
] | [
"Thank you for the feedback, and I have read it.\n\nThe authors claimed that they used techniques in [6] in which I am not an expert for this. However I cannot find the comparison that the authors mentioned in the feedback, so I am not sure if the claim is true.\n\nI still recommend rejection for the paper, and as I said in the first review, the paper is not mature enough.\n\n==== original review ===\n\nThe paper describes a generative model that replaces the GAN loss in the adversarial auto-encoder with MMD loss. Although the author claim the novelty as adding noise to the discriminator, it seems to me that at least for the RBF case it just does the following:\n1. write down MMD as an integral probability metric (IPM)\n2. say the test function, which originally should be in an RKHS, will be approximated using random feature approximations.\n\nAlthough the authors explained the intuition a bit and showed some empirical results, I still don't see why this method should work better than directly minimising MMD. Also it is not preferred to look at the generated images and claim diversity, instead it's better to have some kind of quantitative metric such as the inception score.\n\nFinally, given the fact that we have too many GAN related papers now, I don't think the innovation contained in the paper (which is using random features) is good enough to be published at ICLR. Also the paper is not clearly written, and I would suggest better not to copy-past paragraphs in the abstract and intro.\n\nThat said, I would welcome for the authors feedback and see if I have misunderstood something.",
"\nIn this paper, the authors propose doubly stochastic adversarial autoencoder, which is essentially applying the doubly stochastic gradient for the variational form of maximum mean discrepancy. \n\nThe most severe issue is lacking novelty. It is a straightforward combination of existing work, therefore, the contribution of this work is rare. \n\nMoreover, some of the claims in the paper are not appropriate. For example, using random features to approximate the kernel function does not bring extra stochasticity. The random features are fixed once sampled from the base measure of the corresponding kernel. Basically, you can view the random feature approximation as a linear combination of fixed nonlinear basis which are sampled from some distribution. \n\nFinally, the experiments are promising. However, to be more convincing, more benchmarks, e.g., cifar10/100 and CelebA, are needed. ",
"This manuscript explores the idea of adding noise to the adversary's play in GAN dynamics over an RKHS. This is equivalent to adding noise to the gradient update, using the duality of reproducing kernels. Unfortunately, the evaluation here is wholly unsatisfactory to justify the manuscript's claims. No concrete practical algorithm specification is given (only a couple of ideas to inject noise listed), only a qualitative one on a 2-dimensional latent space in MNIST, and an inconclusive one using the much-doubted Parzen window KDE method. The idea as stated in the abstract and introduction may well be worth pursuing, but not on the evidence provided by the rest of the manuscript.",
"We hope our further explanations clear any confusion left in the paper.\n \n > Moreover, some of the claims in the paper are not appropriate. For example, using random features to approximate \n the kernel function does not bring extra stochasticity. The random features are fixed once sampled from the base \n measure of the corresponding kernel. Basically, you can view the random feature approximation as a linear \n combination of fixed nonlinear basis which are sampled from some distribution. \n\nAlthough we use the random feature approximation technique, but it is different from the well known paper of \"Random Features for Large-Scale Kernel Machines, A. Rahimi, B, Recht\". In fact, we did run experiments for this vanilla case (using random feature approximation) and did NOT lead to promising results. This case would be no different from the MMD case, which avoids the minimax nature of the problem altogether, as is explained in the first paragraph of section 3 of our paper. It comes with the extra benefit of linear computations but with no extra stochasticity. We totally agree. \n\nOur approach is more related to the doubly stochastic kernel machines, ref [6] cited in the paper. The introduced stochasticity is then the result of the stochastic functions as the adversary’s strategies. To back up our assertion on the extra stochasticity empirically, please refer to Fig. 2a. Please note that the proposed approach helps the encoder to recover a mixture of 2D-Gaussians despite having a 2D-Gaussian distribution as the prior. \n\nWe do hope the novelty of approach would be more clear after these comments. \n",
"We would like to thank the reviewer for the kind comments. \n\n -> Finally, given the fact that we have too many GAN related papers now I don't think the innovation contained in the \n paper (which is using random features) is good enough to be published at ICLR.\n \n\nAlthough we use the random feature approximation technique, but it is different from the well known paper of \"Random Features for Large-Scale Kernel Machines, A. Rahimi, B, Recht\". In fact, we did run experiments for this vanilla case (using random feature approximation) and did NOT lead to promising results. This case would be no different from the MMD case, which avoids the minimax nature of the problem altogether, as is explained in the first paragraph of section 3 of our paper. It comes with the extra benefit of linear computations but with no extra stochasticity. Our approach is more related to the doubly stochastic kernel machines, ref [6] cited in the paper.\n\n -> I still don't see why this method should work better than directly minimizing MMD. \n\nThe improvement of DS-AAE is because of the extra stochasticity introduced into the architecture. The adversary’s strategies are stochastic functions which then inject extra stochasticity into the architecture. To back up our assertion on the extra stochasticity empirically, please refer to Fig. 2a. Please note that the proposed approach helps the encoder to recover a mixture of 2D-Gaussians despite having a 2D-Gaussian distribution as the prior. \n\nWe do hope our further explanations clear any confusion left in the paper and that the novelty behind DS-AAE design would be more clear.",
"Thank you for the comments. We hope our further explanations clear any confusion left in the paper.\n\n -> This manuscript explores the idea of adding noise to the adversary's play in GAN dynamics over an RKHS. This is \n equivalent to adding noise to the gradient update, using the duality of reproducing kernels.\n\nThe approach is not equivalent to adding noise to the gradient update. The introduced stochasticity is the result of the stochastic functions as the adversary’s strategies. The introduced approach, however, can be perceived as a mechanism for smoothing the gradients. This is to mitigate the model collapse issue. In order to see how DS-AAE can address the mode collapse issue, please consider a case when there is a \"hole\" in the learned coding space (which would be expected in the course of training - The learned coding space is also visualized in Fig.2a and Fig. 2c after training). In such cases, the adversary cannot discriminate against the boundaries around the \"hole\" properly. This is because of the bumpy gradients terms. This leads to mode collapse issue, discussed at the introduction and is indeed the main motivation for proposing DS-AAE. The bumpy gradient terms can be avoided using DS-AAE architecture. This is mathematically explained at the bottom of page 3, right before Theorem 1. \n \n -> No concrete practical algorithm specification is given (only a couple of ideas to inject noise listed), only a qualitative \n one on a 2-dimensional latent space in MNIST, and an inconclusive one using the much-doubted Parzen window KDE \n method. T\n\nDimensionality of the hidden codes are 6 and 4 for the Fig.2b and Fig. 2d, respectively. Only figures 2.a and 2.c are on 2-dimensional latent space (for visualization purposes). More importantly, Fig. 2a shows that the introduced stochastically helps the encoder to recover a mixture of 2D-Gaussians despite having a 2D-Gaussian distribution as prior. \n\n"
] | [
3,
3,
2,
-1,
-1,
-1
] | [
4,
5,
5,
-1,
-1,
-1
] | [
"iclr_2018_ryH_bShhW",
"iclr_2018_ryH_bShhW",
"iclr_2018_ryH_bShhW",
"B1BsWE9lM",
"By7B42BxM",
"BJQGTw5lM"
] |
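The DS-AAE abstract and rebuttals above describe replacing the adversary with a space of stochastic functions built from random features, in the spirit of doubly stochastic kernel machines. The sketch below is only one illustrative way such a stochastic critic could look (random Fourier features re-sampled at every step); it is an assumed reading, not the paper's actual construction, and all names are hypothetical.

```python
# Sketch of a "stochastic function" adversary: penalise the mismatch between
# encoder codes and prior samples with random Fourier features of an RBF
# kernel whose frequencies are re-sampled at every call, so the critic is a
# new random function each step (fixed frequencies would reduce to a plain
# random-feature MMD).
import numpy as np

def resampled_feature_penalty(codes, prior_samples, n_features=128, bandwidth=1.0):
    d = codes.shape[1]
    # fresh random frequencies and phases -> a new random critic each step
    W = np.random.randn(n_features, d) / bandwidth
    b = np.random.uniform(0.0, 2.0 * np.pi, size=n_features)

    def feats(X):
        return np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)

    diff = feats(codes).mean(axis=0) - feats(prior_samples).mean(axis=0)
    return (diff ** 2).sum()        # squared discrepancy of feature means
```

Whether this re-sampling supplies the "extra stochasticity" the rebuttals claim is exactly the point the reviewers dispute, since with fixed features the quantity above is just an approximate MMD.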